Sample records for model outputs include

  1. Phase 1 Free Air CO2 Enrichment Model-Data Synthesis (FACE-MDS): Model Output Data (2015)

    DOE Data Explorer

    Walker, A. P.; De Kauwe, M. G.; Medlyn, B. E.; Zaehle, S.; Asao, S.; Dietze, M.; El-Masri, B.; Hanson, P. J.; Hickler, T.; Jain, A.; Luo, Y.; Parton, W. J.; Prentice, I. C.; Ricciuto, D. M.; Thornton, P. E.; Wang, S.; Wang, Y -P; Warlind, D.; Weng, E.; Oren, R.; Norby, R. J.

    2015-01-01

These datasets comprise the model output from phase 1 of the FACE-MDS. These include simulations of the Duke and Oak Ridge experiments and also idealised long-term (300 year) simulations at both sites (please see the modelling protocol for details). Included as part of this dataset are modelling and output protocols. The model datasets are formatted according to the output protocols. Phase 1 datasets are reproduced here for posterity and reproducibility, although the model output for the experimental period has been somewhat superseded by the Phase 2 datasets.

  2. Alpha1 LASSO data bundles Lamont, OK

    DOE Data Explorer

Gustafson, William Jr; Vogelmann, Andrew; Endo, Satoshi; Toto, Tami; Xiao, Heng; Li, Zhijin; Cheng, Xiaoping; Krishna, Bhargavi (ORCID: 0000-0001-8828-528X)

    2016-08-03

    A data bundle is a unified package consisting of LASSO LES input and output, observations, evaluation diagnostics, and model skill scores. LES input includes model configuration information and forcing data. LES output includes profile statistics and full domain fields of cloud and environmental variables. Model evaluation data consists of LES output and ARM observations co-registered on the same grid and sampling frequency. Model performance is quantified by skill scores and diagnostics in terms of cloud and environmental variables.

  3. Updated Model of the Solar Energetic Proton Environment in Space

    NASA Astrophysics Data System (ADS)

    Jiggens, Piers; Heynderickx, Daniel; Sandberg, Ingmar; Truscott, Pete; Raukunen, Osku; Vainio, Rami

    2018-05-01

    The Solar Accumulated and Peak Proton and Heavy Ion Radiation Environment (SAPPHIRE) model provides environment specification outputs for all aspects of the Solar Energetic Particle (SEP) environment. The model is based upon a thoroughly cleaned and carefully processed data set. Herein the evolution of the solar proton model is discussed with comparisons to other models and data. This paper discusses the construction of the underlying data set, the modelling methodology, optimisation of fitted flux distributions and extrapolation of model outputs to cover a range of proton energies from 0.1 MeV to 1 GeV. The model provides outputs in terms of mission cumulative fluence, maximum event fluence and peak flux for both solar maximum and solar minimum periods. A new method for describing maximum event fluence and peak flux outputs in terms of 1-in-x-year SPEs is also described. SAPPHIRE proton model outputs are compared with previous models including CREME96, ESP-PSYCHIC and the JPL model. Low energy outputs are compared to SEP data from ACE/EPAM whilst high energy outputs are compared to a new model based on GLEs detected by Neutron Monitors (NMs).

  4. H∞ output tracking control of discrete-time nonlinear systems via standard neural network models.

    PubMed

    Liu, Meiqin; Zhang, Senlin; Chen, Haiyang; Sheng, Weihua

    2014-10-01

This brief proposes an output tracking control for a class of discrete-time nonlinear systems with disturbances. A standard neural network model is used to represent discrete-time nonlinear systems whose nonlinearity satisfies the sector conditions. H∞ control performance for the closed-loop system, including the standard neural network model, the reference model, and the state feedback controller, is analyzed using the Lyapunov-Krasovskii stability theorem and the linear matrix inequality (LMI) approach. The H∞ controller, whose parameters are obtained by solving LMIs, guarantees that the output of the closed-loop system closely tracks the output of a given reference model and reduces the influence of disturbances on the tracking error. Three numerical examples are provided to show the effectiveness of the proposed H∞ output tracking design approach.

  5. Method and system for detecting a failure or performance degradation in a dynamic system such as a flight vehicle

    NASA Technical Reports Server (NTRS)

    Miller, Robert H. (Inventor); Ribbens, William B. (Inventor)

    2003-01-01

    A method and system for detecting a failure or performance degradation in a dynamic system having sensors for measuring state variables and providing corresponding output signals in response to one or more system input signals are provided. The method includes calculating estimated gains of a filter and selecting an appropriate linear model for processing the output signals based on the input signals. The step of calculating utilizes one or more models of the dynamic system to obtain estimated signals. The method further includes calculating output error residuals based on the output signals and the estimated signals. The method also includes detecting one or more hypothesized failures or performance degradations of a component or subsystem of the dynamic system based on the error residuals. The step of calculating the estimated values is performed optimally with respect to one or more of: noise, uncertainty of parameters of the models and un-modeled dynamics of the dynamic system which may be a flight vehicle or financial market or modeled financial system.
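
    The following is a minimal, hypothetical sketch of the residual-based detection idea described above: a linear observer built from an assumed model of the system estimates the outputs, and the output-error residual is thresholded to flag a hypothesized failure. The model matrices, observer gain, threshold, and injected sensor fault are all illustrative assumptions, not taken from the patent.

```python
import numpy as np

# Assumed discrete-time linear model: x[k+1] = A x[k] + B u[k], y[k] = C x[k]
A = np.array([[0.98, 0.10], [0.00, 0.95]])
B = np.array([[0.0], [0.1]])
C = np.array([[1.0, 0.0]])
L = np.array([[0.4], [0.2]])   # observer gain (assumed, not an optimal design)

def detect_failure(u_seq, y_seq, threshold=0.5):
    """Return the first step at which the output error residual exceeds the threshold."""
    x_hat = np.zeros((2, 1))
    for k, (u, y) in enumerate(zip(u_seq, y_seq)):
        y_hat = C @ x_hat
        residual = float(y - y_hat)            # output error residual
        if abs(residual) > threshold:
            return k, residual                 # hypothesized failure detected
        x_hat = A @ x_hat + B * u + L * residual   # observer update from model + data
    return None, 0.0

# Nominal data followed by a simulated sensor bias (the "failure") at step 60
u_seq = [1.0] * 100
x = np.zeros((2, 1)); y_seq = []
for k in range(100):
    bias = 2.0 if k >= 60 else 0.0
    y_seq.append(float(C @ x) + bias)
    x = A @ x + B * 1.0
print(detect_failure(u_seq, y_seq))            # expected: detection near step 60
```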

  6. Alpha 2 LASSO Data Bundles

    DOE Data Explorer

    Gustafson, William Jr; Vogelmann, Andrew; Endo, Satoshi; Toto, Tami; Xiao, Heng; Li, Zhijin; Cheng, Xiaoping; Kim, Jinwon; Krishna, Bhargavi

    2015-08-31

The Alpha 2 release is the second release from the LASSO Pilot Phase that builds upon the Alpha 1 release. Alpha 2 contains additional diagnostics in the data bundles and focuses on cases from spring-summer 2016. A data bundle is a unified package consisting of LASSO LES input and output, observations, evaluation diagnostics, and model skill scores. LES input includes model configuration information and forcing data. LES output includes profile statistics and full domain fields of cloud and environmental variables. Model evaluation data consists of LES output and ARM observations co-registered on the same grid and sampling frequency. Model performance is quantified by skill scores and diagnostics in terms of cloud and environmental variables.

  7. Enhanced DEA model with undesirable output and interval data for rice growing farmers performance assessment

    NASA Astrophysics Data System (ADS)

    Khan, Sahubar Ali Mohd. Nadhar; Ramli, Razamin; Baten, M. D. Azizul

    2015-12-01

Agricultural production processes typically produce two types of outputs: economically desirable outputs as well as environmentally undesirable outputs (such as greenhouse gas emission, nitrate leaching, effects on humans and organisms, and water pollution). In efficiency analysis, these undesirable outputs cannot be ignored and need to be included in order to obtain an accurate estimate of firms' efficiency. Additionally, climatic factors as well as data uncertainty can significantly affect the efficiency analysis. A number of approaches have been proposed in the DEA literature to account for undesirable outputs. Many researchers have pointed out that the directional distance function (DDF) approach is the best, as it allows for a simultaneous increase in desirable outputs and reduction of undesirable outputs. Additionally, it has been found that the interval data approach is the most suitable for accounting for data uncertainty, as it is much simpler to model and needs less information regarding its distribution and membership function. In this paper, an enhanced DEA model based on the DDF approach that considers undesirable outputs as well as climatic factors and interval data is proposed. This model will be used to determine the efficiency of rice farmers who produce undesirable outputs and operate under uncertainty. It is hoped that the proposed model will provide a better estimate of rice farmers' efficiency.
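
    As a rough illustration of a directional distance function (DDF) formulation with an undesirable output, the sketch below sets up the DEA linear program for a handful of made-up rice farms and solves it with SciPy. It omits the climatic factors and interval data treated in the paper; the direction vector, data, and weak-disposability constraint are assumptions for illustration only.

```python
import numpy as np
from scipy.optimize import linprog

x = np.array([10.0, 12.0, 9.0, 15.0])   # input (e.g. fertiliser)
y = np.array([20.0, 22.0, 18.0, 25.0])  # desirable output (rice yield)
b = np.array([5.0, 7.0, 4.0, 9.0])      # undesirable output (e.g. GHG emission)
n = len(x)

def ddf_inefficiency(o, gx=1.0, gy=1.0, gb=1.0):
    """Maximise beta, expanding the good output and contracting the input and bad
    output along direction (gx, gy, gb) for farm 'o'. Variables: [beta, lambda_1..n]."""
    c = np.r_[-1.0, np.zeros(n)]                      # linprog minimises, so use -beta
    A_ub = np.vstack([np.r_[gx,  x],                  # sum(l*x) + beta*gx <= x_o
                      np.r_[gy, -y]])                 # -sum(l*y) + beta*gy <= -y_o
    b_ub = np.array([x[o], -y[o]])
    A_eq = np.r_[gb, b].reshape(1, -1)                # sum(l*b) + beta*gb == b_o
    b_eq = np.array([b[o]])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None)] * (n + 1), method="highs")
    return res.x[0]                                   # beta = 0 means efficient

for o in range(n):
    print(f"farm {o}: beta = {ddf_inefficiency(o):.3f}")
```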

  8. Caliver: An R package for CALIbration and VERification of forest fire gridded model outputs.

    PubMed

    Vitolo, Claudia; Di Giuseppe, Francesca; D'Andrea, Mirko

    2018-01-01

    The name caliver stands for CALIbration and VERification of forest fire gridded model outputs. This is a package developed for the R programming language and available under an APACHE-2 license from a public repository. In this paper we describe the functionalities of the package and give examples using publicly available datasets. Fire danger model outputs are taken from the modeling components of the European Forest Fire Information System (EFFIS) and observed burned areas from the Global Fire Emission Database (GFED). Complete documentation, including a vignette, is also available within the package.

  9. Generative electronic background music system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mazurowski, Lukasz

In this short paper (extended abstract), a new approach to the generation of electronic background music is presented. The Generative Electronic Background Music System (GEBMS) is positioned among other related approaches within the musical algorithm positioning framework proposed by Woller et al. The music composition process is performed by a number of mini-models parameterized by further described properties. The mini-models generate fragments of musical patterns used in the output composition. Musical pattern and output generation are controlled by a container for the mini-models - a host-model. The general mechanism is presented, including an example of the synthesized output compositions.

  10. Caliver: An R package for CALIbration and VERification of forest fire gridded model outputs

    PubMed Central

    Di Giuseppe, Francesca; D’Andrea, Mirko

    2018-01-01

    The name caliver stands for CALIbration and VERification of forest fire gridded model outputs. This is a package developed for the R programming language and available under an APACHE-2 license from a public repository. In this paper we describe the functionalities of the package and give examples using publicly available datasets. Fire danger model outputs are taken from the modeling components of the European Forest Fire Information System (EFFIS) and observed burned areas from the Global Fire Emission Database (GFED). Complete documentation, including a vignette, is also available within the package. PMID:29293536

  11. Enhanced DEA model with undesirable output and interval data for rice growing farmers performance assessment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Khan, Sahubar Ali Mohd. Nadhar, E-mail: sahubar@uum.edu.my; Ramli, Razamin, E-mail: razamin@uum.edu.my; Baten, M. D. Azizul, E-mail: baten-math@yahoo.com

Agricultural production processes typically produce two types of outputs: economically desirable outputs as well as environmentally undesirable outputs (such as greenhouse gas emission, nitrate leaching, effects on humans and organisms, and water pollution). In efficiency analysis, these undesirable outputs cannot be ignored and need to be included in order to obtain an accurate estimate of firms' efficiency. Additionally, climatic factors as well as data uncertainty can significantly affect the efficiency analysis. A number of approaches have been proposed in the DEA literature to account for undesirable outputs. Many researchers have pointed out that the directional distance function (DDF) approach is the best, as it allows for a simultaneous increase in desirable outputs and reduction of undesirable outputs. Additionally, it has been found that the interval data approach is the most suitable for accounting for data uncertainty, as it is much simpler to model and needs less information regarding its distribution and membership function. In this paper, an enhanced DEA model based on the DDF approach that considers undesirable outputs as well as climatic factors and interval data is proposed. This model will be used to determine the efficiency of rice farmers who produce undesirable outputs and operate under uncertainty. It is hoped that the proposed model will provide a better estimate of rice farmers’ efficiency.

  12. Model-free adaptive control of supercritical circulating fluidized-bed boilers

    DOEpatents

    Cheng, George Shu-Xing; Mulkey, Steven L

    2014-12-16

A novel 3-Input-3-Output (3×3) Fuel-Air Ratio Model-Free Adaptive (MFA) controller is introduced, which can effectively control key process variables including Bed Temperature, Excess O2, and Furnace Negative Pressure of combustion processes of advanced boilers. A novel 7-Input-7-Output (7×7) MFA control system is also described for controlling a combined 3-Input-3-Output (3×3) process of Boiler-Turbine-Generator (BTG) units and a 5×5 CFB combustion process of advanced boilers. Those boilers include Circulating Fluidized-Bed (CFB) Boilers and Once-Through Supercritical Circulating Fluidized-Bed (OTSC CFB) Boilers.

  13. The Effect of Nondeterministic Parameters on Shock-Associated Noise Prediction Modeling

    NASA Technical Reports Server (NTRS)

    Dahl, Milo D.; Khavaran, Abbas

    2010-01-01

    Engineering applications for aircraft noise prediction contain models for physical phenomenon that enable solutions to be computed quickly. These models contain parameters that have an uncertainty not accounted for in the solution. To include uncertainty in the solution, nondeterministic computational methods are applied. Using prediction models for supersonic jet broadband shock-associated noise, fixed model parameters are replaced by probability distributions to illustrate one of these methods. The results show the impact of using nondeterministic parameters both on estimating the model output uncertainty and on the model spectral level prediction. In addition, a global sensitivity analysis is used to determine the influence of the model parameters on the output, and to identify the parameters with the least influence on model output.
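
    A toy sketch of the general nondeterministic approach described above: fixed parameters are replaced by probability distributions and propagated through a simple, assumed spectral-level model by Monte Carlo sampling, yielding an output uncertainty estimate and a crude sensitivity measure. The model form and distributions are placeholders, not the broadband shock-associated noise model used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 10_000                                    # Monte Carlo samples

def model(freq, a, b):
    """Toy spectral-level model with two uncertain parameters a and b (assumed)."""
    return a * np.exp(-b * (np.log10(freq) - 0.5) ** 2)

a = rng.normal(100.0, 5.0, N)                 # distributions replace fixed values
b = rng.uniform(1.0, 3.0, N)
out = model(10.0, a, b)                       # evaluate at one frequency

print("mean output:", out.mean(), " output std (uncertainty):", out.std())

# Crude global sensitivity measure: squared correlation of each parameter with the output.
for name, p in (("a", a), ("b", b)):
    r = np.corrcoef(p, out)[0, 1]
    print(f"R^2 of output with {name}: {r ** 2:.2f}")
```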

  14. Control and optimization system

    DOEpatents

    Xinsheng, Lou

    2013-02-12

    A system for optimizing a power plant includes a chemical loop having an input for receiving an input parameter (270) and an output for outputting an output parameter (280), a control system operably connected to the chemical loop and having a multiple controller part (230) comprising a model-free controller. The control system receives the output parameter (280), optimizes the input parameter (270) based on the received output parameter (280), and outputs an optimized input parameter (270) to the input of the chemical loop to control a process of the chemical loop in an optimized manner.

  15. Convective - TTU

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kosovic, Branko

    This dataset includes large-eddy simulation (LES) output from a convective atmospheric boundary layer (ABL) simulation of observations at the SWIFT tower near Lubbock, Texas on July 4, 2012. The dataset was used to assess the LES models for simulation of canonical convective ABL. The dataset can be used for comparison with other LES and computational fluid dynamics model outputs.

  16. LANL - Convective - TTU

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kosovic, Branko

    This dataset includes large-eddy simulation (LES) output from a convective atmospheric boundary layer (ABL) simulation of observations at the SWIFT tower near Lubbock, Texas on July 4, 2012. The dataset was used to assess the LES models for simulation of canonical convective ABL. The dataset can be used for comparison with other LES and computational fluid dynamics model outputs.

  17. LANL - Neutral - TTU

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kosovic, Branko

    This dataset includes large-eddy simulation (LES) output from a neutrally stratified atmospheric boundary layer (ABL) simulation of observations at the SWIFT tower near Lubbock, Texas on Aug. 17, 2012. The dataset was used to assess LES models for simulation of canonical neutral ABL. The dataset can be used for comparison with other LES and computational fluid dynamics model outputs.

  18. HAWQS Beta Flyer

    EPA Pesticide Factsheets

    HAWQS is a web-based interactive water quantity and quality modeling system that provides users with interactive web interfaces and maps; pre-loaded input data; outputs that include tables, charts, graphs and raw output data; and a user guide.

  19. HAWQS beta flyer

    EPA Pesticide Factsheets

    HAWQS is a web-based interactive water quantity and quality modeling system that provides users with interactive web interfaces and maps; pre-loaded input data; outputs that include tables, charts, graphs and raw output data; and a user guide.

  20. User's Guide for Monthly Vector Wind Profile Model

    NASA Technical Reports Server (NTRS)

    Adelfang, S. I.

    1999-01-01

    The background, theoretical concepts, and methodology for construction of vector wind profiles based on a statistical model are presented. The derived monthly vector wind profiles are to be applied by the launch vehicle design community for establishing realistic estimates of critical vehicle design parameter dispersions related to wind profile dispersions. During initial studies a number of months are used to establish the model profiles that produce the largest monthly dispersions of ascent vehicle aerodynamic load indicators. The largest monthly dispersions for wind, which occur during the winter high-wind months, are used for establishing the design reference dispersions for the aerodynamic load indicators. This document includes a description of the computational process for the vector wind model including specification of input data, parameter settings, and output data formats. Sample output data listings are provided to aid the user in the verification of test output.

  1. NREL - SOWFA - Neutral - TTU

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kosovic, Branko

    This dataset includes large-eddy simulation (LES) output from a neutrally stratified atmospheric boundary layer (ABL) simulation of observations at the SWIFT tower near Lubbock, Texas on Aug. 17, 2012. The dataset was used to assess LES models for simulation of canonical neutral ABL. The dataset can be used for comparison with other LES and computational fluid dynamics model outputs.

  2. PNNL - WRF-LES - Convective - TTU

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kosovic, Branko

    This dataset includes large-eddy simulation (LES) output from a convective atmospheric boundary layer (ABL) simulation of observations at the SWIFT tower near Lubbock, Texas on July 4, 2012. The dataset was used to assess the LES models for simulation of canonical convective ABL. The dataset can be used for comparison with other LES and computational fluid dynamics model outputs.

  3. ANL - WRF-LES - Convective - TTU

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kosovic, Branko

    This dataset includes large-eddy simulation (LES) output from a convective atmospheric boundary layer (ABL) simulation of observations at the SWIFT tower near Lubbock, Texas on July 4, 2012. The dataset was used to assess the LES models for simulation of canonical convective ABL. The dataset can be used for comparison with other LES and computational fluid dynamics model outputs.

  4. LLNL - WRF-LES - Neutral - TTU

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kosovic, Branko

    This dataset includes large-eddy simulation (LES) output from a neutrally stratified atmospheric boundary layer (ABL) simulation of observations at the SWIFT tower near Lubbock, Texas on Aug. 17, 2012. The dataset was used to assess LES models for simulation of canonical neutral ABL. The dataset can be used for comparison with other LES and computational fluid dynamics model outputs.

  5. ANL - WRF-LES - Neutral - TTU

    DOE Data Explorer

    Kosovic, Branko

    2018-06-20

    This dataset includes large-eddy simulation (LES) output from a neutrally stratified atmospheric boundary layer (ABL) simulation of observations at the SWIFT tower near Lubbock, Texas on Aug. 17, 2012. The dataset was used to assess LES models for simulation of canonical neutral ABL. The dataset can be used for comparison with other LES and computational fluid dynamics model outputs.

  6. LANL - WRF-LES - Neutral - TTU

    DOE Data Explorer

    Kosovic, Branko

    2018-06-20

    This dataset includes large-eddy simulation (LES) output from a neutrally stratified atmospheric boundary layer (ABL) simulation of observations at the SWIFT tower near Lubbock, Texas on Aug. 17, 2012. The dataset was used to assess LES models for simulation of canonical neutral ABL. The dataset can be used for comparison with other LES and computational fluid dynamics model outputs.

  7. LANL - WRF-LES - Convective - TTU

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kosovic, Branko

    This dataset includes large-eddy simulation (LES) output from a convective atmospheric boundary layer (ABL) simulation of observations at the SWIFT tower near Lubbock, Texas on July 4, 2012. The dataset was used to assess the LES models for simulation of canonical convective ABL. The dataset can be used for comparison with other LES and computational fluid dynamics model outputs.

  8. Including long-range dependence in integrate-and-fire models of the high interspike-interval variability of cortical neurons.

    PubMed

    Jackson, B Scott

    2004-10-01

    Many different types of integrate-and-fire models have been designed in order to explain how it is possible for a cortical neuron to integrate over many independent inputs while still producing highly variable spike trains. Within this context, the variability of spike trains has been almost exclusively measured using the coefficient of variation of interspike intervals. However, another important statistical property that has been found in cortical spike trains and is closely associated with their high firing variability is long-range dependence. We investigate the conditions, if any, under which such models produce output spike trains with both interspike-interval variability and long-range dependence similar to those that have previously been measured from actual cortical neurons. We first show analytically that a large class of high-variability integrate-and-fire models is incapable of producing such outputs based on the fact that their output spike trains are always mathematically equivalent to renewal processes. This class of models subsumes a majority of previously published models, including those that use excitation-inhibition balance, correlated inputs, partial reset, or nonlinear leakage to produce outputs with high variability. Next, we study integrate-and-fire models that have (nonPoissonian) renewal point process inputs instead of the Poisson point process inputs used in the preceding class of models. The confluence of our analytical and simulation results implies that the renewal-input model is capable of producing high variability and long-range dependence comparable to that seen in spike trains recorded from cortical neurons, but only if the interspike intervals of the inputs have infinite variance, a physiologically unrealistic condition. Finally, we suggest a new integrate-and-fire model that does not suffer any of the previously mentioned shortcomings. By analyzing simulation results for this model, we show that it is capable of producing output spike trains with interspike-interval variability and long-range dependence that match empirical data from cortical spike trains. This model is similar to the other models in this study, except that its inputs are fractional-gaussian-noise-driven Poisson processes rather than renewal point processes. In addition to this model's success in producing realistic output spike trains, its inputs have long-range dependence similar to that found in most subcortical neurons in sensory pathways, including the inputs to cortex. Analysis of output spike trains from simulations of this model also shows that a tight balance between the amounts of excitation and inhibition at the inputs to cortical neurons is not necessary for high interspike-interval variability at their outputs. Furthermore, in our analysis of this model, we show that the superposition of many fractional-gaussian-noise-driven Poisson processes does not approximate a Poisson process, which challenges the common assumption that the total effect of a large number of inputs on a neuron is well represented by a Poisson process.
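
    For readers unfamiliar with the quantities discussed, the sketch below simulates a plain leaky integrate-and-fire neuron driven by many pooled Poisson inputs and reports the coefficient of variation (CV) of its interspike intervals. With these assumed parameters the neuron operates in the mean-driven regime and fires far too regularly (CV well below 1), which is precisely the difficulty the models reviewed above attempt to resolve; no long-range dependence is modelled here.

```python
import numpy as np

rng = np.random.default_rng(1)
dt, T = 1e-4, 20.0                       # time step and duration in seconds
tau, v_th, v_reset = 0.02, 1.0, 0.0      # membrane time constant, threshold, reset
n_exc, rate, w = 100, 40.0, 0.02         # number of inputs, input rate (Hz), weight

v, spike_times = 0.0, []
for step in range(int(T / dt)):
    n_in = rng.poisson(n_exc * rate * dt)     # pooled Poisson input spikes this step
    v += dt * (-v / tau) + w * n_in           # leaky integration of the inputs
    if v >= v_th:
        spike_times.append(step * dt)
        v = v_reset

isi = np.diff(spike_times)
cv = isi.std() / isi.mean()
print(f"{len(spike_times)} output spikes, ISI CV = {cv:.2f}")  # CV well below 1
```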

  9. Analysis of model output and science data in the Virtual Model Repository (VMR).

    NASA Astrophysics Data System (ADS)

    De Zeeuw, D.; Ridley, A. J.

    2014-12-01

Big scientific data not only includes large repositories of data from scientific platforms like satellites and ground observation, but also the vast output of numerical models. The Virtual Model Repository (VMR) provides scientific analysis and visualization tools for many numerical models of the Earth-Sun system. Individual runs can be analyzed in the VMR and compared to relevant data through relevant metadata, but larger collections of runs can also now be studied and statistics generated on the accuracy and tendencies of model output. The vast model repository at the CCMC, with over 1000 simulations of the Earth's magnetosphere, was used to look at overall trends in accuracy when compared to satellites such as GOES, Geotail, and Cluster. Methodology for this analysis as well as case studies will be presented.

  10. Alternative to Ritt's pseudodivision for finding the input-output equations of multi-output models.

    PubMed

    Meshkat, Nicolette; Anderson, Chris; DiStefano, Joseph J

    2012-09-01

    Differential algebra approaches to structural identifiability analysis of a dynamic system model in many instances heavily depend upon Ritt's pseudodivision at an early step in analysis. The pseudodivision algorithm is used to find the characteristic set, of which a subset, the input-output equations, is used for identifiability analysis. A simpler algorithm is proposed for this step, using Gröbner Bases, along with a proof of the method that includes a reduced upper bound on derivative requirements. Efficacy of the new algorithm is illustrated with several biosystem model examples. Copyright © 2012 Elsevier Inc. All rights reserved.
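
    The sketch below illustrates the general idea of using a Gröbner basis for this elimination step (it is not the authors' algorithm). For a hypothetical linear two-compartment model with output y = x1, derivatives are treated as ordinary symbols, and a lex ordering that ranks the unobserved state and its derivative highest eliminates them, leaving an input-output equation.

```python
from sympy import symbols, groebner

# Unobserved state and its derivative (to be eliminated), output/input derivatives
# treated as ordinary symbols, and rate constants of the assumed model.
x2, x2d = symbols('x2 x2d')
y, y1, y2, u, u1 = symbols('y y1 y2 u u1')       # y, y', y'', u, u'
k01, k21, k12, k02 = symbols('k01 k21 k12 k02')

# Two-compartment model with x1 measured (x1 = y):
#   y'  = -(k01 + k21) y + k12 x2 + u
#   x2' =  k21 y - (k02 + k12) x2
f1 = y1 + (k01 + k21)*y - k12*x2 - u
f2 = x2d + (k02 + k12)*x2 - k21*y
f3 = y2 + (k01 + k21)*y1 - k12*x2d - u1          # time derivative of f1

# Lex order with x2, x2d ranked first eliminates them; basis elements free of
# x2 and x2d are input-output equations usable for identifiability analysis.
G = groebner([f1, f2, f3], x2, x2d, y2, y1, y, u1, u, k01, k21, k12, k02,
             order='lex')
io = [g for g in G.exprs if not g.has(x2) and not g.has(x2d)]
print(io)
```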

  11. Neural node network and model, and method of teaching same

    DOEpatents

    Parlos, A.G.; Atiya, A.F.; Fernandez, B.; Tsai, W.K.; Chong, K.T.

    1995-12-26

The present invention is a fully connected feed forward network that includes at least one hidden layer. The hidden layer includes nodes in which the output of the node is fed back to that node as an input with a unit delay produced by a delay device occurring in the feedback path (local feedback). Each node within each layer also receives a delayed output (crosstalk) produced by a delay unit from all the other nodes within the same layer. The node performs a transfer function operation based on the inputs from the previous layer and the delayed outputs. The network can be implemented as analog or digital or within a general purpose processor. Two teaching methods can be used: (1) back propagation of weight calculation that includes the local feedback and the crosstalk or (2) more preferably a feed forward gradient descent which immediately follows the output computations and which also includes the local feedback and the crosstalk. Subsequent to the gradient propagation, the weights can be normalized, thereby preventing convergence to a local optimum. Education of the network can be incremental both on and off-line. An educated network is suitable for modeling and controlling dynamic nonlinear systems and time series systems and predicting the outputs as well as hidden states and parameters. The educated network can also be further educated during on-line processing. 21 figs.
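
    A schematic NumPy sketch of the layer structure described in the patent follows: each hidden node receives the previous layer's signals plus the one-step-delayed outputs of itself (local feedback) and of the other nodes in the same layer (crosstalk). Sizes and weights are arbitrary assumptions, and neither of the two teaching methods is implemented.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hidden, n_out, T = 3, 5, 2, 20

W_in = rng.normal(0, 0.5, (n_hidden, n_in))       # previous layer -> hidden
W_rec = rng.normal(0, 0.3, (n_hidden, n_hidden))  # delayed hidden -> hidden
                                                  # (diagonal = local feedback,
                                                  #  off-diagonal = crosstalk)
W_out = rng.normal(0, 0.5, (n_out, n_hidden))     # hidden -> output layer

def forward(u_seq):
    h_prev = np.zeros(n_hidden)                   # unit-delay memory of the layer
    outputs = []
    for u in u_seq:
        h = np.tanh(W_in @ u + W_rec @ h_prev)    # transfer function of layer inputs
        outputs.append(W_out @ h)                 #   plus delayed layer outputs
        h_prev = h                                # store for the next time step
    return np.array(outputs)

u_seq = rng.normal(size=(T, n_in))                # an arbitrary input sequence
print(forward(u_seq).shape)                       # (20, 2)
```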

  12. Neural node network and model, and method of teaching same

    DOEpatents

    Parlos, Alexander G.; Atiya, Amir F.; Fernandez, Benito; Tsai, Wei K.; Chong, Kil T.

    1995-01-01

The present invention is a fully connected feed forward network that includes at least one hidden layer 16. The hidden layer 16 includes nodes 20 in which the output of the node is fed back to that node as an input with a unit delay produced by a delay device 24 occurring in the feedback path 22 (local feedback). Each node within each layer also receives a delayed output (crosstalk) produced by a delay unit 36 from all the other nodes within the same layer 16. The node performs a transfer function operation based on the inputs from the previous layer and the delayed outputs. The network can be implemented as analog or digital or within a general purpose processor. Two teaching methods can be used: (1) back propagation of weight calculation that includes the local feedback and the crosstalk or (2) more preferably a feed forward gradient descent which immediately follows the output computations and which also includes the local feedback and the crosstalk. Subsequent to the gradient propagation, the weights can be normalized, thereby preventing convergence to a local optimum. Education of the network can be incremental both on and off-line. An educated network is suitable for modeling and controlling dynamic nonlinear systems and time series systems and predicting the outputs as well as hidden states and parameters. The educated network can also be further educated during on-line processing.

  13. Advances in a distributed approach for ocean model data interoperability

    USGS Publications Warehouse

    Signell, Richard P.; Snowden, Derrick P.

    2014-01-01

    An infrastructure for earth science data is emerging across the globe based on common data models and web services. As we evolve from custom file formats and web sites to standards-based web services and tools, data is becoming easier to distribute, find and retrieve, leaving more time for science. We describe recent advances that make it easier for ocean model providers to share their data, and for users to search, access, analyze and visualize ocean data using MATLAB® and Python®. These include a technique for modelers to create aggregated, Climate and Forecast (CF) metadata convention datasets from collections of non-standard Network Common Data Form (NetCDF) output files, the capability to remotely access data from CF-1.6-compliant NetCDF files using the Open Geospatial Consortium (OGC) Sensor Observation Service (SOS), a metadata standard for unstructured grid model output (UGRID), and tools that utilize both CF and UGRID standards to allow interoperable data search, browse and access. We use examples from the U.S. Integrated Ocean Observing System (IOOS®) Coastal and Ocean Modeling Testbed, a project in which modelers using both structured and unstructured grid model output needed to share their results, to compare their results with other models, and to compare models with observed data. The same techniques used here for ocean modeling output can be applied to atmospheric and climate model output, remote sensing data, digital terrain and bathymetric data.
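
    As a small illustration of the standards-based access pattern described above, the sketch below opens a CF-compliant dataset over OPeNDAP with xarray and pulls a subset without downloading whole files. The URL, variable name, and dimension names are placeholders, not an actual IOOS testbed endpoint.

```python
import xarray as xr

# Hypothetical OPeNDAP endpoint and variable names; substitute a real THREDDS URL.
url = "https://example.org/thredds/dodsC/ocean_model_output.nc"
ds = xr.open_dataset(url)                      # lazy remote access, no full download

# CF attributes (standard_name, units, coordinates) let generic tools interpret
# the variables; here a small space-time subset of sea surface height is pulled.
ssh = ds["zeta"].sel(time="2014-01-01").isel(node=slice(0, 100))
print(ssh.attrs.get("standard_name"), ssh.attrs.get("units"))
```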

  14. Control vocabulary software designed for CMIP6

    NASA Astrophysics Data System (ADS)

    Nadeau, D.; Taylor, K. E.; Williams, D. N.; Ames, S.

    2016-12-01

The Coupled Model Intercomparison Project Phase 6 (CMIP6) coordinates a number of intercomparison activities and includes many more experiments than its predecessor, CMIP5. In order to organize and facilitate use of the complex collection of expected CMIP6 model output, a standard set of descriptive information has been defined, which must be stored along with the data. This standard information enables automated machine interpretation of the contents of all model output files. The standard metadata is stored in compliance with the Climate and Forecast (CF) standard, which ensures that it can be interpreted and visualized by many standard software packages. Additional attributes (not standardized by CF) are required by CMIP6 to enhance identification of models and experiments, and to provide additional information critical for interpreting the model results. To ensure that CMIP6 data complies with the standards, a python program called "PrePARE" (Pre-Publication Attribute Reviewer for the ESGF) has been developed to check the model output prior to its publication and release for analysis. If, for example, a required attribute is missing or incorrect (e.g., not included in the reference CMIP6 controlled vocabularies), then PrePARE will prevent publication. In some circumstances, missing attributes can be created or incorrect attributes can be replaced automatically by PrePARE, and the program will warn users about the changes that have been made. PrePARE provides a final check on model output, assuring adherence to a baseline conformity across the output from all CMIP6 models, which will facilitate analysis by climate scientists. PrePARE is flexible and can be easily modified for use by similar projects that have a well-defined set of metadata and controlled vocabularies.
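
    The sketch below illustrates the kind of check such a tool performs; it is not PrePARE's code. A file's global attributes are compared against a small, assumed excerpt of a controlled vocabulary, and any missing or invalid entries are reported before publication.

```python
import netCDF4

CONTROLLED_VOCAB = {                       # assumed excerpt, not the real CMIP6 CVs
    "activity_id": {"CMIP", "ScenarioMIP"},
    "experiment_id": {"historical", "ssp585"},
    "frequency": {"mon", "day"},
}
REQUIRED = list(CONTROLLED_VOCAB) + ["source_id", "variant_label"]

def check_file(path):
    """Return a list of metadata problems; an empty list means OK to publish."""
    ds = netCDF4.Dataset(path)
    problems = []
    for attr in REQUIRED:
        if not hasattr(ds, attr):
            problems.append(f"missing required attribute: {attr}")
            continue
        allowed = CONTROLLED_VOCAB.get(attr)
        if allowed and getattr(ds, attr) not in allowed:
            problems.append(f"{attr}={getattr(ds, attr)!r} not in controlled vocabulary")
    ds.close()
    return problems

print(check_file("tas_Amon_example.nc"))   # hypothetical file name
```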

  15. Optimal Parameter Selection for Support Vector Machine Based on Artificial Bee Colony Algorithm: A Case Study of Grid-Connected PV System Power Prediction.

    PubMed

    Gao, Xiang-Ming; Yang, Shi-Feng; Pan, San-Bo

    2017-01-01

To predict the output power of a photovoltaic system, which is nonstationary and random, an output power prediction model for grid-connected PV systems is proposed based on empirical mode decomposition (EMD) and support vector machine (SVM) optimized with an artificial bee colony (ABC) algorithm. First, according to the weather forecast data sets on the prediction date, the time series data of output power on a similar day with 15-minute intervals are built. Second, the time series data of the output power are decomposed into a series of components, including some intrinsic mode components IMFn and a trend component Res, at different scales using EMD. The corresponding SVM prediction model is established for each IMF component and trend component, and the SVM model parameters are optimized with the artificial bee colony algorithm. Finally, the prediction results of each model are reconstructed, and the predicted values of the output power of the grid-connected PV system can be obtained. The prediction model is tested with actual data, and the results show that the power prediction model based on the EMD and ABC-SVM has a faster calculation speed and higher prediction accuracy than do the single SVM prediction model and the EMD-SVM prediction model without optimization.
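
    A rough sketch of the decompose-predict-reconstruct workflow is given below, with two substitutions flagged as assumptions: the PyEMD package stands in for the EMD step, and scikit-learn's grid search stands in for the artificial bee colony optimisation of the SVM parameters. The synthetic daily power curve and lag features are also assumptions.

```python
import numpy as np
from PyEMD import EMD                              # assumed EMD implementation
from sklearn.svm import SVR
from sklearn.model_selection import GridSearchCV   # stand-in for the ABC optimiser

rng = np.random.default_rng(0)
t = np.arange(96)                                  # one synthetic day, 15-min steps
power = np.clip(np.sin((t - 24) / 96 * 2 * np.pi), 0, None) + 0.05 * rng.normal(size=96)

components = EMD().emd(power)                      # IMFs plus residual/trend

def lagged(series, n_lags=4):
    """Build lag features: predict the next value from the previous n_lags values."""
    X = np.column_stack([series[i:len(series) - n_lags + i] for i in range(n_lags)])
    return X, series[n_lags:]

param_grid = {"C": [1, 10, 100], "gamma": [0.01, 0.1, 1.0]}
prediction = np.zeros(len(power) - 4)
for comp in components:                            # one tuned SVR per component
    X, y = lagged(comp)
    svr = GridSearchCV(SVR(kernel="rbf"), param_grid, cv=3).fit(X, y)
    prediction += svr.predict(X)                   # in-sample, for illustration only

print("reconstruction RMSE:", np.sqrt(np.mean((prediction - power[4:]) ** 2)))
```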

  16. Optimal Parameter Selection for Support Vector Machine Based on Artificial Bee Colony Algorithm: A Case Study of Grid-Connected PV System Power Prediction

    PubMed Central

    2017-01-01

To predict the output power of a photovoltaic system, which is nonstationary and random, an output power prediction model for grid-connected PV systems is proposed based on empirical mode decomposition (EMD) and support vector machine (SVM) optimized with an artificial bee colony (ABC) algorithm. First, according to the weather forecast data sets on the prediction date, the time series data of output power on a similar day with 15-minute intervals are built. Second, the time series data of the output power are decomposed into a series of components, including some intrinsic mode components IMFn and a trend component Res, at different scales using EMD. The corresponding SVM prediction model is established for each IMF component and trend component, and the SVM model parameters are optimized with the artificial bee colony algorithm. Finally, the prediction results of each model are reconstructed, and the predicted values of the output power of the grid-connected PV system can be obtained. The prediction model is tested with actual data, and the results show that the power prediction model based on the EMD and ABC-SVM has a faster calculation speed and higher prediction accuracy than do the single SVM prediction model and the EMD-SVM prediction model without optimization. PMID:28912803

  17. Obs4MIPS: Satellite Observations for Model Evaluation

    NASA Astrophysics Data System (ADS)

    Ferraro, R.; Waliser, D. E.; Gleckler, P. J.

    2017-12-01

This poster will review the current status of the obs4MIPs project, whose purpose is to provide a limited collection of well-established and documented datasets for comparison with Earth system models (https://www.earthsystemcog.org/projects/obs4mips/). These datasets have been reformatted to correspond with the CMIP5 model output requirements, and include technical documentation specifically targeted for their use in model output evaluation. The project holdings now exceed 120 datasets with observations that directly correspond to CMIP5 model output variables, with new additions in response to the CMIP6 experiments. With the growth in climate model output data volume, it is increasingly difficult to bring the model output and the observations together to do evaluations. The positioning of the obs4MIPs datasets within the Earth System Grid Federation (ESGF) allows for the use of currently available and planned online tools within the ESGF to perform analysis using model output and observational datasets without necessarily downloading everything to a local workstation. This past year, obs4MIPs has updated its submission guidelines to closely align with changes in the CMIP6 experiments, and is implementing additional indicators and ancillary data to allow users to more easily determine the efficacy of an obs4MIPs dataset for specific evaluation purposes. This poster will present the new guidelines and indicators, and update the list of current obs4MIPs holdings and their connection to the ESGF evaluation and analysis tools currently available, and being developed for the CMIP6 experiments.

  18. ThinTool: a spreadsheet model to evaluate fuel reduction thinning cost, net energy output, and nutrient impacts

    Treesearch

    Sang-Kyun Han; Han-Sup Han; William J. Elliot; Edward M. Bilek

    2017-01-01

    We developed a spreadsheet-based model, named ThinTool, to evaluate the cost of mechanical fuel reduction thinning including biomass removal, to predict net energy output, and to assess nutrient impacts from thinning treatments in northern California and southern Oregon. A combination of literature reviews, field-based studies, and contractor surveys was used to...

  19. Computer program for design analysis of radial-inflow turbines

    NASA Technical Reports Server (NTRS)

    Glassman, A. J.

    1976-01-01

    A computer program written in FORTRAN that may be used for the design analysis of radial-inflow turbines was documented. The following information is included: loss model (estimation of losses), the analysis equations, a description of the input and output data, the FORTRAN program listing and list of variables, and sample cases. The input design requirements include the power, mass flow rate, inlet temperature and pressure, and rotational speed. The program output data includes various diameters, efficiencies, temperatures, pressures, velocities, and flow angles for the appropriate calculation stations. The design variables include the stator-exit angle, rotor radius ratios, and rotor-exit tangential velocity distribution. The losses are determined by an internal loss model.

  20. Adaptation of time line analysis program to single pilot instrument flight research

    NASA Technical Reports Server (NTRS)

    Hinton, D. A.; Shaughnessy, J. D.

    1978-01-01

    A data base was developed for SPIFR operation and the program was run. The outputs indicated that further work was necessary on the workload models. In particular, the workload model for the cognitive channel should be modified as the output workload appears to be too small. Included in the needed refinements are models to show the workload when in turbulence, when overshooting a radial or glideslope, and when copying air traffic control clearances.

  1. Enhancement and identification of dust events in the south-west region of Iran using satellite observations

    NASA Astrophysics Data System (ADS)

    Taghavi, F.; Owlad, E.; Ackerman, S. A.

    2017-03-01

South-west Asia, including the Middle East, is one of the regions most prone to dust storm events. In recent years, there has been an increase in the occurrence of these environmental and meteorological phenomena. Remote sensing can serve as an applicable method to detect and also characterise these events. In this study, two dust enhancement algorithms were used to investigate the behaviour of dust events using satellite data, compare with numerical model output and other satellite products, and finally validate with in-situ measurements. The results show that the use of the thermal infrared algorithm enhances dust more accurately. The aerosol optical depth from MODIS and the output of a Dust Regional Atmospheric Model (DREAM8b) are applied for comparing the results. Ground-based observations of synoptic stations and sun photometers are used for validating the satellite products. To find the transport direction, the locations of the dust sources, and the synoptic situations during these events, model outputs (HYSPLIT and NCEP/NCAR) are presented. Comparing the results with synoptic maps and the model outputs showed that the enhancement algorithms are a more reliable way to enhance dust than other MODIS products or model outputs.

  2. Data-Based Predictive Control with Multirate Prediction Step

    NASA Technical Reports Server (NTRS)

    Barlow, Jonathan S.

    2010-01-01

    Data-based predictive control is an emerging control method that stems from Model Predictive Control (MPC). MPC computes current control action based on a prediction of the system output a number of time steps into the future and is generally derived from a known model of the system. Data-based predictive control has the advantage of deriving predictive models and controller gains from input-output data. Thus, a controller can be designed from the outputs of complex simulation code or a physical system where no explicit model exists. If the output data happens to be corrupted by periodic disturbances, the designed controller will also have the built-in ability to reject these disturbances without the need to know them. When data-based predictive control is implemented online, it becomes a version of adaptive control. One challenge of MPC is computational requirements increasing with prediction horizon length. This paper develops a closed-loop dynamic output feedback controller that minimizes a multi-step-ahead receding-horizon cost function with multirate prediction step. One result is a reduced influence of prediction horizon and the number of system outputs on the computational requirements of the controller. Another result is an emphasis on portions of the prediction window that are sampled more frequently. A third result is the ability to include more outputs in the feedback path than in the cost function.

  3. Modeling Operations Other Than War: Non-Combatants in Combat Modeling

    DTIC Science & Technology

    1994-09-01

supposition that non-combatants are an essential feature in OOTW. The model proposal includes a methodology for civilian unit decision making. This model also includes... numerical example demonstrated that the model appeared to perform in an acceptable manner, in that it produced output within a reasonable range. During the

  4. The SLH framework for modeling quantum input-output networks

    DOE PAGES

    Combes, Joshua; Kerckhoff, Joseph; Sarovar, Mohan

    2017-09-04

Here, many emerging quantum technologies demand precise engineering and control over networks consisting of quantum mechanical degrees of freedom connected by propagating electromagnetic fields, or quantum input-output networks. Here we review recent progress in theory and experiment related to such quantum input-output networks, with a focus on the SLH framework, a powerful modeling framework for networked quantum systems that is naturally endowed with properties such as modularity and hierarchy. We begin by explaining the physical approximations required to represent any individual node of a network, e.g. atoms in a cavity or a mechanical oscillator, and its coupling to quantum fields by an operator triple (S, L, H). Then we explain how these nodes can be composed into a network with arbitrary connectivity, including coherent feedback channels, using algebraic rules, and how to derive the dynamics of network components and output fields. The second part of the review discusses several extensions to the basic SLH framework that expand its modeling capabilities, and the prospects for modeling integrated implementations of quantum input-output networks. In addition to summarizing major results and recent literature, we discuss the potential applications and limitations of the SLH framework and quantum input-output networks, with the intention of providing context to a reader unfamiliar with the field.
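
    As a concrete, toy illustration of the algebraic rules mentioned above, the sketch below encodes single-channel SLH triples as matrices on truncated Hilbert spaces and implements the series product (S2, L2, H2) ◁ (S1, L1, H1) = (S2 S1, L2 + S2 L1, H1 + H2 + Im{L2† S2 L1}), where Im{A} = (A − A†)/2i. The cascaded-cavity example and its parameters are assumptions for illustration.

```python
import numpy as np

def embed(op, which, dims):
    """Embed a single-node operator into the joint Hilbert space (node order fixed)."""
    factors = [np.eye(d) for d in dims]
    factors[which] = op
    out = factors[0]
    for m in factors[1:]:
        out = np.kron(out, m)
    return out

def series(slh2, slh1, d1, d2):
    """Series product (S2,L2,H2) <| (S1,L1,H1) for a single field channel.
    Inputs are matrices on each node's own truncated Hilbert space."""
    dims = (d1, d2)
    S1, L1, H1 = (embed(M, 0, dims) for M in slh1)
    S2, L2, H2 = (embed(M, 1, dims) for M in slh2)
    S = S2 @ S1
    L = L2 + S2 @ L1
    A = L2.conj().T @ S2 @ L1
    H = H1 + H2 + (A - A.conj().T) / 2j          # operator imaginary part of A
    return S, L, H

# Toy example: two identical single-mode cavities (3-level Fock truncation) in cascade.
d = 3
a = np.diag(np.sqrt(np.arange(1, d)), 1)          # annihilation operator
kappa, delta = 1.0, 0.5
node = (np.eye(d), np.sqrt(kappa) * a, delta * a.conj().T @ a)   # (S, L, H)
S, L, H = series(node, node, d, d)
print(S.shape, np.allclose(H, H.conj().T))        # joint space (9, 9), H Hermitian
```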

  5. The SLH framework for modeling quantum input-output networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Combes, Joshua; Kerckhoff, Joseph; Sarovar, Mohan

Here, many emerging quantum technologies demand precise engineering and control over networks consisting of quantum mechanical degrees of freedom connected by propagating electromagnetic fields, or quantum input-output networks. Here we review recent progress in theory and experiment related to such quantum input-output networks, with a focus on the SLH framework, a powerful modeling framework for networked quantum systems that is naturally endowed with properties such as modularity and hierarchy. We begin by explaining the physical approximations required to represent any individual node of a network, e.g. atoms in a cavity or a mechanical oscillator, and its coupling to quantum fields by an operator triple (S, L, H). Then we explain how these nodes can be composed into a network with arbitrary connectivity, including coherent feedback channels, using algebraic rules, and how to derive the dynamics of network components and output fields. The second part of the review discusses several extensions to the basic SLH framework that expand its modeling capabilities, and the prospects for modeling integrated implementations of quantum input-output networks. In addition to summarizing major results and recent literature, we discuss the potential applications and limitations of the SLH framework and quantum input-output networks, with the intention of providing context to a reader unfamiliar with the field.

  6. Quantitative Decision Support Requires Quantitative User Guidance

    NASA Astrophysics Data System (ADS)

    Smith, L. A.

    2009-12-01

Is it conceivable that models run on 2007 computer hardware could provide robust and credible probabilistic information for decision support and user guidance at the ZIP code level for sub-daily meteorological events in 2060? In 2090? Retrospectively, how informative would output from today’s models have proven in 2003? Or in the 1930s? Consultancies in the United Kingdom, including the Met Office, are offering services to “future-proof” their customers from climate change. How is a US- or European-based user or policy maker to determine the extent to which exciting new Bayesian methods are relevant here, or when a commercial supplier is vastly overselling the insights of today’s climate science? How are policy makers and academic economists to make the closely related decisions facing them? How can we communicate deep uncertainty in the future at small length-scales without undermining the firm foundation established by climate science regarding global trends? Three distinct aspects of the communication of the uses of climate model output, targeting users and policy makers as well as other specialist adaptation scientists, are discussed. First, a brief scientific evaluation is provided of the length and time scales at which climate model output is likely to become uninformative, including a note on the applicability of the latest Bayesian methodology to the output of current state-of-the-art general circulation models. Second, a critical evaluation is offered of the language often employed in the communication of climate model output, a language which accurately states that models are “better”, have “improved” and now “include” and “simulate” relevant meteorological processes, without clearly identifying where the current information is thought to be uninformative or misleading, both for the current climate and as a function of the state of each climate simulation. Third, a general approach for evaluating the relevance of quantitative climate model output for a given problem is presented. Based on climate science, meteorology, and the details of the question in hand, this approach identifies necessary (never sufficient) conditions required for the rational use of climate model output in quantitative decision support tools. Inasmuch as climate forecasting is a problem of extrapolation, there will always be harsh limits on our ability to establish where a model is fit for purpose; this does not, however, prevent us from identifying model noise as such, and thereby avoiding some cases of the misapplication and over-interpretation of model output. It is suggested that failure to clearly communicate the limits of today’s climate models in providing quantitative, decision-relevant climate information to today’s users of climate information would risk the credibility of tomorrow’s climate science and science-based policy more generally.

  7. NEMS Freight Transportation Module Improvement Study

    EIA Publications

    2015-01-01

    The U.S. Energy Information Administration (EIA) contracted with IHS Global, Inc. (IHS) to analyze the relationship between the value of industrial output, physical output, and freight movement in the United States for use in updating analytic assumptions and modeling structure within the National Energy Modeling System (NEMS) freight transportation module, including forecasting methodologies and processes to identify possible alternative approaches that would improve multi-modal freight flow and fuel consumption estimation.

  8. Improving Snow Modeling by Assimilating Observational Data Collected by Citizen Scientists

    NASA Astrophysics Data System (ADS)

    Crumley, R. L.; Hill, D. F.; Arendt, A. A.; Wikstrom Jones, K.; Wolken, G. J.; Setiawan, L.

    2017-12-01

    Modeling seasonal snow pack in alpine environments includes a multiplicity of challenges caused by a lack of spatially extensive and temporally continuous observational datasets. This is partially due to the difficulty of collecting measurements in harsh, remote environments where extreme gradients in topography exist, accompanied by large model domains and inclement weather. Engaging snow enthusiasts, snow professionals, and community members to participate in the process of data collection may address some of these challenges. In this study, we use SnowModel to estimate seasonal snow water equivalence (SWE) in the Thompson Pass region of Alaska while incorporating snow depth measurements collected by citizen scientists. We develop a modeling approach to assimilate hundreds of snow depth measurements from participants in the Community Snow Observations (CSO) project (www.communitysnowobs.org). The CSO project includes a mobile application where participants record and submit geo-located snow depth measurements while working and recreating in the study area. These snow depth measurements are randomly located within the model grid at irregular time intervals over the span of four months in the 2017 water year. This snow depth observation dataset is converted into a SWE dataset by employing an empirically-based, bulk density and SWE estimation method. We then assimilate this data using SnowAssim, a sub-model within SnowModel, to constrain the SWE output by the observed data. Multiple model runs are designed to represent an array of output scenarios during the assimilation process. An effort to present model output uncertainties is included, as well as quantification of the pre- and post-assimilation divergence in modeled SWE. Early results reveal pre-assimilation SWE estimations are consistently greater than the post-assimilation estimations, and the magnitude of divergence increases throughout the snow pack evolution period. This research has implications beyond the Alaskan context because it increases our ability to constrain snow modeling outputs by making use of snow measurements collected by non-expert, citizen scientists.

  9. COSP for Windows: Strategies for Rapid Analyses of Cyclic Oxidation Behavior

    NASA Technical Reports Server (NTRS)

    Smialek, James L.; Auping, Judith V.

    2002-01-01

COSP is a publicly available computer program that models the cyclic oxidation weight gain and spallation process. Inputs to the model include the selection of an oxidation growth law and a spalling geometry, plus oxide phase, growth rate, spall constant, and cycle duration parameters. Output includes weight change, the amounts of retained and spalled oxide, the total oxygen and metal consumed, and the terminal rates of weight loss and metal consumption. The present version is Windows-based and can accordingly be operated conveniently while other applications remain open for importing experimental weight change data, storing model output data, or plotting model curves. Point-and-click operating features include multiple drop-down menus for input parameters, data importing, and quick, on-screen plots showing one selection of the six output parameters for up to 10 models. A run summary text lists various characteristic parameters that are helpful in describing cyclic behavior, such as the maximum weight change, the number of cycles to reach the maximum weight gain or zero weight change, the ratio of these, and the final rate of weight loss. The program includes save and print options as well as a help file. Families of model curves readily show the sensitivity to various input parameters. The cyclic behaviors of nickel aluminide (NiAl) and a complex superalloy are shown to be properly fitted by model curves. However, caution is always advised regarding the uniqueness claimed for any specific set of input parameters.
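
    The sketch below is a toy re-implementation of the kind of cyclic oxidation bookkeeping COSP performs, under stated assumptions: parabolic scale growth during each hot cycle, a spalled fraction proportional to the retained oxide mass on cooling, and an alumina-like oxygen mass fraction. COSP itself offers several growth laws and spall geometries; the parameter values here are arbitrary.

```python
import numpy as np

kp = 0.01      # parabolic growth constant, (mg/cm^2)^2 per hour (assumed)
dt = 1.0       # hot time per cycle, hours
Q0 = 0.02      # spall constant: spalled fraction per unit retained oxide (assumed)
f_O = 0.47     # mass fraction of oxygen in the oxide (roughly Al2O3)
n_cycles = 500

W_r, cum_spall, history = 0.0, 0.0, []
for _ in range(n_cycles):
    W_r = np.sqrt(W_r**2 + kp * dt)          # parabolic growth of the retained oxide
    frac = min(Q0 * W_r, 1.0)                # fraction of the scale that spalls
    spalled = frac * W_r
    cum_spall += spalled
    W_r -= spalled
    # net specimen weight change = oxygen retained in scale - metal lost in spall
    history.append(f_O * W_r - (1.0 - f_O) * cum_spall)

history = np.array(history)
peak = history.argmax()
print(f"max weight change {history[peak]:.3f} mg/cm^2 at cycle {peak + 1}, "
      f"final {history[-1]:.3f} mg/cm^2")
```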

  10. The relationship between buccofacial and limb apraxia.

    PubMed

    Raade, A S; Rothi, L J; Heilman, K M

    1991-07-01

    There are at least two possible models depicting the relationship between buccofacial and limb apraxia. First, apraxia can be viewed as a unitary motor disorder which transcends the output modalities of both buccofacial and limb output. A high degree of similarity between the two types of apraxia would support this model. Alternatively, the relationship between buccofacial and limb apraxia may not include a unitary mechanism. The presence of quantitative and qualitative differences between buccofacial and limb performance would support this nonunitary model. The results of the present study support the nonunitary model.

  11. Mathematical models of the simplest fuzzy PI/PD controllers with skewed input and output fuzzy sets.

    PubMed

    Mohan, B M; Sinha, Arpita

    2008-07-01

    This paper unveils mathematical models for fuzzy PI/PD controllers which employ two skewed fuzzy sets for each of the two-input variables and three skewed fuzzy sets for the output variable. The basic constituents of these models are Gamma-type and L-type membership functions for each input, trapezoidal/triangular membership functions for output, intersection/algebraic product triangular norm, maximum/drastic sum triangular conorm, Mamdani minimum/Larsen product/drastic product inference method, and center of sums defuzzification method. The existing simplest fuzzy PI/PD controller structures derived via symmetrical fuzzy sets become special cases of the mathematical models revealed in this paper. Finally, a numerical example along with its simulation results are included to demonstrate the effectiveness of the simplest fuzzy PI controllers.
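
    A numeric sketch of the simplest fuzzy PI structure analysed in the paper is given below: an L-type and a Gamma-type set on each input, three triangular sets on the output, Mamdani minimum inference, and center-of-sums defuzzification evaluated on a discretised output universe. The breakpoints, four-rule base, and scaling are assumptions, and the closed-form models derived in the paper are not reproduced.

```python
import numpy as np

def gamma_mf(x, a, b):              # Gamma-type set: ramps from 0 to 1 over [a, b]
    return float(np.clip((x - a) / (b - a), 0.0, 1.0))

def l_mf(x, a, b):                  # L-type set: ramps from 1 to 0 over [a, b]
    return 1.0 - gamma_mf(x, a, b)

def tri_mf(x, a, b, c):             # triangular output set with peak at b
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

U = np.linspace(-1.5, 1.5, 601)                     # discretised output universe
dU = U[1] - U[0]
OUT = {"N": tri_mf(U, -1.5, -1.0, 0.0),
       "Z": tri_mf(U, -1.0,  0.0, 1.0),
       "P": tri_mf(U,  0.0,  1.0, 1.5)}

RULES = [("N", "N", "N"), ("N", "P", "Z"),          # assumed four-rule base:
         ("P", "N", "Z"), ("P", "P", "P")]          # (error, change of error) -> output

def fuzzy_pi_increment(e, de, span=1.0):
    """Incremental control output for error e and change of error de."""
    mu_e  = {"N": l_mf(e, -span, span), "P": gamma_mf(e, -span, span)}
    mu_de = {"N": l_mf(de, -span, span), "P": gamma_mf(de, -span, span)}
    num = den = 0.0
    for ante_e, ante_de, cons in RULES:
        w = min(mu_e[ante_e], mu_de[ante_de])       # Mamdani minimum inference
        clipped = np.minimum(OUT[cons], w)          # clip the consequent set at w
        num += (clipped * U).sum() * dU             # center of sums: overlapping
        den += clipped.sum() * dU                   #   areas are counted per rule
    return num / den if den > 0 else 0.0

print(fuzzy_pi_increment(0.4, -0.1))                # small positive increment
```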

  12. Nonlinear Modeling of Causal Interrelationships in Neuronal Ensembles

    PubMed Central

    Zanos, Theodoros P.; Courellis, Spiros H.; Berger, Theodore W.; Hampson, Robert E.; Deadwyler, Sam A.; Marmarelis, Vasilis Z.

    2009-01-01

    The increasing availability of multiunit recordings gives new urgency to the need for effective analysis of “multidimensional” time-series data that are derived from the recorded activity of neuronal ensembles in the form of multiple sequences of action potentials—treated mathematically as point-processes and computationally as spike-trains. Whether in conditions of spontaneous activity or under conditions of external stimulation, the objective is the identification and quantification of possible causal links among the neurons generating the observed binary signals. A multiple-input/multiple-output (MIMO) modeling methodology is presented that can be used to quantify the neuronal dynamics of causal interrelationships in neuronal ensembles using spike-train data recorded from individual neurons. These causal interrelationships are modeled as transformations of spike-trains recorded from a set of neurons designated as the “inputs” into spike-trains recorded from another set of neurons designated as the “outputs.” The MIMO model is composed of a set of multi-input/single-output (MISO) modules, one for each output. Each module is the cascade of a MISO Volterra model and a threshold operator generating the output spikes. The Laguerre expansion approach is used to estimate the Volterra kernels of each MISO module from the respective input–output data using the least-squares method. The predictive performance of the model is evaluated with the use of the receiver operating characteristic (ROC) curve, from which the optimum threshold is also selected. The Mann–Whitney statistic is used to select the significant inputs for each output by examining the statistical significance of improvements in the predictive accuracy of the model when the respective inputs are included. Illustrative examples are presented for a simulated system and for an actual application using multiunit data recordings from the hippocampus of a behaving rat. PMID:18701382

  13. Grid Integrated Distributed PV (GridPV) Version 2.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reno, Matthew J.; Coogan, Kyle

    2014-12-01

    This manual provides the documentation of the MATLAB toolbox of functions for using OpenDSS to simulate the impact of solar energy on the distribution system. The majority of the functions are useful for interfacing OpenDSS and MATLAB, and they are of generic use for commanding OpenDSS from MATLAB and retrieving information from simulations. A set of functions is also included for modeling PV plant output and setting up the PV plant in the OpenDSS simulation. The toolbox contains functions for modeling the OpenDSS distribution feeder on satellite images with GPS coordinates. Finally, example simulation functions are included to show potential uses of the toolbox functions. Each function in the toolbox is documented with the function use syntax, full description, function input list, function output list, example use, and example output.

  14. NONMEMory: a run management tool for NONMEM.

    PubMed

    Wilkins, Justin J

    2005-06-01

    NONMEM is an extremely powerful tool for nonlinear mixed-effect modelling and simulation of pharmacokinetic and pharmacodynamic data. However, it is a console-based application whose output does not lend itself to rapid interpretation or efficient management. NONMEMory has been created to be a comprehensive project manager for NONMEM, providing detailed summary, comparison and overview of the runs comprising a given project, including the display of output data, simple post-run processing, fast diagnostic plots and run output management, complementary to other available modelling aids. Analysis time ought not to be spent on trivial tasks, and NONMEMory's role is to eliminate these as far as possible by increasing the efficiency of the modelling process. NONMEMory is freely available from http://www.uct.ac.za/depts/pha/nonmemory.php.

  15. Theoretical studies of solar lasers and converters

    NASA Technical Reports Server (NTRS)

    Heinbockel, John H.

    1990-01-01

    The research described consisted of developing and refining the continuous flow laser model program including the creation of a working model. The mathematical development of a two pass amplifier for an iodine laser is summarized. A computer program for the amplifier's simulation is included with output from the simulation model.

  16. Quantification of downscaled precipitation uncertainties via Bayesian inference

    NASA Astrophysics Data System (ADS)

    Nury, A. H.; Sharma, A.; Marshall, L. A.

    2017-12-01

    Prediction of precipitation from global climate model (GCM) outputs remains critical to decision-making in water-stressed regions. In this regard, downscaling of GCM output has been a useful tool for analysing future hydro-climatological states. Several downscaling approaches have been developed for precipitation, including dynamical and statistical downscaling methods. Frequently, outputs from dynamical downscaling are not readily transferable across regions because of significant methodological and computational difficulties. Statistical downscaling approaches provide a flexible and efficient alternative, providing hydro-climatological outputs across multiple temporal and spatial scales in many locations. However, these approaches are subject to significant uncertainty, arising from uncertainty in the downscaling model parameters and from the use of different reanalysis products for inferring those parameters. Consequently, these uncertainties affect simulation performance at the catchment scale. This study develops a Bayesian framework for modelling downscaled daily precipitation from GCM outputs and characterizes downscaling uncertainties by evaluating reanalysis datasets against observational rainfall data over Australia. A consistent technique for quantifying downscaling uncertainties by means of a Bayesian downscaling framework is proposed. The results suggest that there are differences in downscaled precipitation occurrences and extremes.
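
    A minimal sketch of this kind of Bayesian treatment of downscaling uncertainty is given below, assuming a lognormal wet-day intensity regression on a single large-scale predictor and a random-walk Metropolis sampler applied to synthetic data; the model form, priors, and data are placeholders and not the authors' framework.

```python
import numpy as np

rng = np.random.default_rng(0)

# synthetic stand-ins: a GCM/reanalysis predictor and "observed" wet-day rainfall
x = rng.normal(size=300)                                      # standardized predictor
y = np.exp(0.5 + 0.8 * x + rng.normal(scale=0.4, size=300))   # rainfall amounts (mm)

def log_posterior(theta, x, y):
    """Log posterior of a lognormal downscaling model: log y ~ N(a + b x, s^2),
    with weak normal priors. Illustrative only."""
    a, b, log_s = theta
    s = np.exp(log_s)
    resid = np.log(y) - (a + b * x)
    loglik = -0.5 * np.sum((resid / s) ** 2) - y.size * np.log(s)
    logprior = -0.5 * (a ** 2 + b ** 2) / 10.0 ** 2 - 0.5 * (log_s / 2.0) ** 2
    return loglik + logprior

# random-walk Metropolis sampling of the downscaling parameters
theta = np.array([0.0, 0.0, 0.0])
lp = log_posterior(theta, x, y)
chain = []
for _ in range(20000):
    prop = theta + rng.normal(scale=0.05, size=3)
    lp_prop = log_posterior(prop, x, y)
    if np.log(rng.uniform()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    chain.append(theta.copy())
chain = np.array(chain[5000:])          # discard burn-in

# posterior predictive 90% interval for a new large-scale predictor value
x_new = 1.0
a_s, b_s, ls_s = chain.T
pred = np.exp(a_s + b_s * x_new + rng.normal(size=a_s.size) * np.exp(ls_s))
print(np.percentile(pred, [5, 50, 95]))
```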

  17. A menu-driven software package of Bayesian nonparametric (and parametric) mixed models for regression analysis and density estimation.

    PubMed

    Karabatsos, George

    2017-02-01

    Most of applied statistics involves regression analysis of data. In practice, it is important to specify a regression model that has minimal assumptions which are not violated by data, to ensure that statistical inferences from the model are informative and not misleading. This paper presents a stand-alone and menu-driven software package, Bayesian Regression: Nonparametric and Parametric Models, constructed from MATLAB Compiler. Currently, this package gives the user a choice from 83 Bayesian models for data analysis. They include 47 Bayesian nonparametric (BNP) infinite-mixture regression models; 5 BNP infinite-mixture models for density estimation; and 31 normal random effects models (HLMs), including normal linear models. Each of the 78 regression models handles either a continuous, binary, or ordinal dependent variable, and can handle multi-level (grouped) data. All 83 Bayesian models can handle the analysis of weighted observations (e.g., for meta-analysis), and the analysis of left-censored, right-censored, and/or interval-censored data. Each BNP infinite-mixture model has a mixture distribution assigned one of various BNP prior distributions, including priors defined by either the Dirichlet process, Pitman-Yor process (including the normalized stable process), beta (two-parameter) process, normalized inverse-Gaussian process, geometric weights prior, dependent Dirichlet process, or the dependent infinite-probits prior. The software user can mouse-click to select a Bayesian model and perform data analysis via Markov chain Monte Carlo (MCMC) sampling. After the sampling completes, the software automatically opens text output that reports MCMC-based estimates of the model's posterior distribution and model predictive fit to the data. Additional text and/or graphical output can be generated by mouse-clicking other menu options. This includes output of MCMC convergence analyses, and estimates of the model's posterior predictive distribution, for selected functionals and values of covariates. The software is illustrated through the BNP regression analysis of real data.

  18. Estimating the Uncertain Mathematical Structure of Hydrological Model via Bayesian Data Assimilation

    NASA Astrophysics Data System (ADS)

    Bulygina, N.; Gupta, H.; O'Donell, G.; Wheater, H.

    2008-12-01

    The structure of a hydrological model at the macro scale (e.g., watershed) is inherently uncertain due to many factors, including the lack of a robust hydrological theory at the macro scale. In this work, we assume that a suitable conceptual model for the hydrologic system has already been determined - i.e., the system boundaries have been specified, the important state variables and input and output fluxes to be included have been selected, and the major hydrological processes and geometries of their interconnections have been identified. The structural identification problem then is to specify the mathematical form of the relationships between the inputs, state variables and outputs, so that a computational model can be constructed for making simulations and/or predictions of system input-state-output behaviour. We show how Bayesian data assimilation can be used to merge prior beliefs in the form of pre-assumed model equations with information derived from the data to construct a posterior model. The approach, entitled Bayesian Estimation of Structure (BESt), is used to estimate a hydrological model for a small basin in England, at hourly time scales, conditioned on the assumption of a conceptual model structure with a three-dimensional state (soil moisture storage and fast and slow flow stores). Inputs to the system are precipitation and potential evapotranspiration, and outputs are actual evapotranspiration and streamflow discharge. Results show the difference between prior and posterior mathematical structures, as well as provide prediction confidence intervals that reflect three types of uncertainty: due to initial conditions, due to input and due to mathematical structure.

  19. A biocatalytic cascade with several output signals—towards biosensors with different levels of confidence

    PubMed Central

    Guz, Nataliia; Halámek, Jan; Rusling, James F.; Katz, Evgeny

    2014-01-01

    The biocatalytic cascade based on enzyme-catalyzed reactions activated by several biomolecular input signals and producing output signal after each reaction step was developed as an example of a logically reversible information processing system. The model system was designed to mimic the operation of concatenated AND logic gates with optically readable output signals generated at each step of the logic operation. Implications include concurrent bioanalyses and data interpretation for medical diagnostics. PMID:24748446

  20. An analytical framework to assist decision makers in the use of forest ecosystem model predictions

    USGS Publications Warehouse

    Larocque, Guy R.; Bhatti, Jagtar S.; Ascough, J.C.; Liu, J.; Luckai, N.; Mailly, D.; Archambault, L.; Gordon, Andrew M.

    2011-01-01

    The predictions from most forest ecosystem models originate from deterministic simulations. However, few evaluation exercises for model outputs are performed by either model developers or users. This issue has important consequences for decision makers using these models to develop natural resource management policies, as they cannot evaluate the extent to which predictions stemming from the simulation of alternative management scenarios may result in significant environmental or economic differences. Various numerical methods, such as sensitivity/uncertainty analyses, or bootstrap methods, may be used to evaluate models and the errors associated with their outputs. However, the application of each of these methods carries unique challenges which decision makers do not necessarily understand; guidance is required when interpreting the output generated from each model. This paper proposes a decision flow chart in the form of an analytical framework to help decision makers apply, in an orderly fashion, different steps involved in examining the model outputs. The analytical framework is discussed with regard to the definition of problems and objectives and includes the following topics: model selection, identification of alternatives, modelling tasks and selecting alternatives for developing policy or implementing management scenarios. Its application is illustrated using an on-going exercise in developing silvicultural guidelines for a forest management enterprise in Ontario, Canada.

  1. Balancing the stochastic description of uncertainties as a function of hydrologic model complexity

    NASA Astrophysics Data System (ADS)

    Del Giudice, D.; Reichert, P.; Albert, C.; Kalcic, M.; Logsdon Muenich, R.; Scavia, D.; Bosch, N. S.; Michalak, A. M.

    2016-12-01

    Uncertainty analysis is becoming an important component of forecasting water and pollutant fluxes in urban and rural environments. Properly accounting for errors in the modeling process can help to robustly assess the uncertainties associated with the inputs (e.g. precipitation) and outputs (e.g. runoff) of hydrological models. In recent years we have investigated several Bayesian methods to infer the parameters of a mechanistic hydrological model along with those of the stochastic error component. The latter describes the uncertainties of model outputs and possibly inputs. We have adapted our framework to a variety of applications, ranging from predicting floods in small stormwater systems to nutrient loads in large agricultural watersheds. Given practical constraints, we discuss how in general the number of quantities to infer probabilistically varies inversely with the complexity of the mechanistic model. Most often, when evaluating a hydrological model of intermediate complexity, we can infer the parameters of the model as well as of the output error model. Describing the output errors as a first order autoregressive process can realistically capture the "downstream" effect of inaccurate inputs and structure. With simpler runoff models we can additionally quantify input uncertainty by using a stochastic rainfall process. For complex hydrologic transport models, instead, we show that keeping model parameters fixed and just estimating time-dependent output uncertainties could be a viable option. The common goal across all these applications is to create time-dependent prediction intervals which are both reliable (cover the nominal amount of validation data) and precise (are as narrow as possible). In conclusion, we recommend focusing both on the choice of the hydrological model and of the probabilistic error description. The latter can include output uncertainty only, if the model is computationally-expensive, or, with simpler models, it can separately account for different sources of errors like in the inputs and the structure of the model.
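
    To make the first-order autoregressive description of output errors concrete, here is a minimal log-likelihood sketch under the simplest homoscedastic Gaussian assumptions; real applications, including those described above, typically add heteroscedasticity, transformations of the flows, and input-error terms.

```python
import numpy as np

def ar1_error_loglik(obs, sim, sigma, phi):
    """Log-likelihood of observations given model output when the output
    errors follow a stationary first-order autoregressive (AR(1)) process.

    obs, sim : observed and simulated series (same length)
    sigma    : marginal standard deviation of the error process
    phi      : lag-one autocorrelation (|phi| < 1)
    """
    e = np.asarray(obs, float) - np.asarray(sim, float)
    # innovations: eta_t = e_t - phi * e_{t-1}; the first error uses the
    # stationary marginal distribution
    s_innov = sigma * np.sqrt(1.0 - phi ** 2)
    ll = -0.5 * (e[0] / sigma) ** 2 - np.log(sigma)
    eta = e[1:] - phi * e[:-1]
    ll += np.sum(-0.5 * (eta / s_innov) ** 2 - np.log(s_innov))
    return ll - 0.5 * e.size * np.log(2 * np.pi)

# toy usage with synthetic "observed" and "simulated" runoff
rng = np.random.default_rng(1)
sim = np.sin(np.linspace(0, 10, 200)) + 2.0
obs = sim + 0.3 * rng.standard_normal(200)
print(ar1_error_loglik(obs, sim, sigma=0.3, phi=0.5))
```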

  2. A comprehensive method for preliminary design optimization of axial gas turbine stages. II - Code verification

    NASA Technical Reports Server (NTRS)

    Jenkins, R. M.

    1983-01-01

    The present effort represents an extension of previous work wherein a calculation model for performing rapid pitchline optimization of axial gas turbine geometry, including blade profiles, is developed. The model requires no specification of geometric constraints. Output includes aerodynamic performance (adiabatic efficiency), hub-tip flow-path geometry, blade chords, and estimates of blade shape. Presented herein is a verification of the aerodynamic performance portion of the model, whereby detailed turbine test-rig data, including rig geometry, is input to the model to determine whether tested performance can be predicted. An array of seven (7) NASA single-stage axial gas turbine configurations is investigated, ranging in size from 0.6 kg/s to 63.8 kg/s mass flow and in specific work output from 153 J/g to 558 J/g at design (hot) conditions; stage loading factor ranges from 1.15 to 4.66.

  3. Using Weather Data and Climate Model Output in Economic Analyses of Climate Change

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Auffhammer, M.; Hsiang, S. M.; Schlenker, W.

    2013-06-28

    Economists are increasingly using weather data and climate model output in analyses of the economic impacts of climate change. This article introduces a set of weather data sets and climate models that are frequently used, discusses the most common mistakes economists make in using these products, and identifies ways to avoid these pitfalls. We first provide an introduction to weather data, including a summary of the types of datasets available, and then discuss five common pitfalls that empirical researchers should be aware of when using historical weather data as explanatory variables in econometric applications. We then provide a brief overview of climate models and discuss two common and significant errors often made by economists when climate model output is used to simulate the future impacts of climate change on an economic outcome of interest.

  4. Self-organizing linear output map (SOLO): An artificial neural network suitable for hydrologic modeling and analysis

    NASA Astrophysics Data System (ADS)

    Hsu, Kuo-Lin; Gupta, Hoshin V.; Gao, Xiaogang; Sorooshian, Soroosh; Imam, Bisher

    2002-12-01

    Artificial neural networks (ANNs) can be useful in the prediction of hydrologic variables, such as streamflow, particularly when the underlying processes have complex nonlinear interrelationships. However, conventional ANN structures suffer from network training issues that significantly limit their widespread application. This paper presents a multivariate ANN procedure entitled self-organizing linear output map (SOLO), whose structure has been designed for rapid, precise, and inexpensive estimation of network structure/parameters and system outputs. More important, SOLO provides features that facilitate insight into the underlying processes, thereby extending its usefulness beyond forecast applications as a tool for scientific investigations. These characteristics are demonstrated using a classic rainfall-runoff forecasting problem. Various aspects of model performance are evaluated in comparison with other commonly used modeling approaches, including multilayer feedforward ANNs, linear time series modeling, and conceptual rainfall-runoff modeling.
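
    The SOLO idea (classify each input vector on a self-organizing map, then apply a separate linear output map for each node) can be sketched as below; the toy data, grid size, and ridge regularization are choices made for this example rather than details of the published architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_som(X, grid=5, iters=3000, lr0=0.5, sigma0=2.0):
    """Train a tiny square self-organizing map on the input vectors."""
    nodes = rng.normal(size=(grid * grid, X.shape[1]))
    coords = np.array([(i, j) for i in range(grid) for j in range(grid)], float)
    for t in range(iters):
        x = X[rng.integers(len(X))]
        bmu = np.argmin(((nodes - x) ** 2).sum(1))        # best-matching unit
        d2 = ((coords - coords[bmu]) ** 2).sum(1)
        lr = lr0 * (1 - t / iters)
        sig = sigma0 * (1 - t / iters) + 0.5
        nodes += (lr * np.exp(-d2 / (2 * sig ** 2)))[:, None] * (x - nodes)
    return nodes

def fit_solo(X, y, nodes):
    """Fit one (ridge-regularized) linear output map per SOM node."""
    bmus = np.argmin(((X[:, None, :] - nodes[None]) ** 2).sum(-1), axis=1)
    Xb = np.hstack([X, np.ones((len(X), 1))])             # add intercept
    weights = np.zeros((len(nodes), Xb.shape[1]))
    for k in range(len(nodes)):
        idx = bmus == k
        if idx.sum() >= 2:
            A = Xb[idx]
            weights[k] = np.linalg.solve(A.T @ A + 1e-3 * np.eye(A.shape[1]),
                                         A.T @ y[idx])
    return weights

def predict_solo(X, nodes, weights):
    bmus = np.argmin(((X[:, None, :] - nodes[None]) ** 2).sum(-1), axis=1)
    Xb = np.hstack([X, np.ones((len(X), 1))])
    return np.einsum('ij,ij->i', Xb, weights[bmus])

# toy rainfall-runoff-like data: nonlinear function of two inputs
X = rng.uniform(size=(500, 2))
y = X[:, 0] ** 2 + np.sin(3 * X[:, 1])
nodes = train_som(X)
W = fit_solo(X, y, nodes)
print(np.corrcoef(predict_solo(X, nodes, W), y)[0, 1])
```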

  5. Multiple-output support vector machine regression with feature selection for arousal/valence space emotion assessment.

    PubMed

    Torres-Valencia, Cristian A; Álvarez, Mauricio A; Orozco-Gutiérrez, Alvaro A

    2014-01-01

    Human emotion recognition (HER) allows the assessment of an affective state of a subject. Until recently, such emotional states were described in terms of discrete emotions, like happiness or contempt. In order to cover a high range of emotions, researchers in the field have introduced different dimensional spaces for emotion description that allow the characterization of affective states in terms of several variables or dimensions that measure distinct aspects of the emotion. One of the most common of such dimensional spaces is the bidimensional Arousal/Valence space. To the best of our knowledge, all HER systems so far have modelled the dimensions in these spaces independently. In this paper, we study the effect of modelling the output dimensions simultaneously and show experimentally the advantages of modelling them in this way. We consider a multimodal approach by including features from the electroencephalogram and a few physiological signals. For modelling the multiple outputs, we employ a multiple-output regressor based on support vector machines. We also include a feature-selection stage developed within an embedded approach known as Recursive Feature Elimination (RFE), proposed initially for SVMs. The results show that several features can be eliminated using the multiple-output support vector regressor with RFE without affecting the performance of the regressor. From the analysis of the features selected in smaller subsets via RFE, it can be observed that the signals most informative for discrimination in the arousal/valence space are the EEG, the electrooculogram/electromyogram (EOG/EMG), and the galvanic skin response (GSR).
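
    A rough sketch of the RFE-plus-support-vector-regression workflow is shown below using scikit-learn on synthetic data. Note that scikit-learn does not provide a genuinely joint multiple-output SVR, so this sketch runs RFE per output dimension with LinearSVR and then wraps independent regressors with MultiOutputRegressor, which is a simplification of the joint regressor used in the paper.

```python
import numpy as np
from sklearn.svm import LinearSVR
from sklearn.feature_selection import RFE
from sklearn.multioutput import MultiOutputRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# synthetic stand-in for EEG/physiological features and arousal/valence targets
X = rng.normal(size=(200, 30))
Y = np.column_stack([X[:, 0] + 0.5 * X[:, 3] + 0.1 * rng.normal(size=200),
                     X[:, 3] - 0.7 * X[:, 7] + 0.1 * rng.normal(size=200)])

# recursive feature elimination, run separately for each output dimension
selected = []
for j, name in enumerate(["arousal", "valence"]):
    rfe = RFE(LinearSVR(C=1.0, max_iter=10000), n_features_to_select=5)
    rfe.fit(X, Y[:, j])
    selected.append(np.flatnonzero(rfe.support_))
    print(name, "keeps features", selected[-1])

# keep the union of selected features and fit independent SVRs per output
keep = np.unique(np.concatenate(selected))
model = MultiOutputRegressor(LinearSVR(C=1.0, max_iter=10000))
score = cross_val_score(model, X[:, keep], Y, cv=5, scoring="r2").mean()
print("mean CV R^2 on reduced feature set:", round(score, 3))
```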

  6. Rice growing farmers efficiency measurement using a slack based interval DEA model with undesirable outputs

    NASA Astrophysics Data System (ADS)

    Khan, Sahubar Ali Mohd. Nadhar; Ramli, Razamin; Baten, M. D. Azizul

    2017-11-01

    In recent years, eco-efficiency, which considers the effect of the production process on the environment when determining the efficiency of firms, has gained traction and considerable attention. Rice farming is one such production process, typically producing two types of outputs: economically desirable and environmentally undesirable. In efficiency analysis, these undesirable outputs cannot be ignored and need to be included in the model to obtain an accurate estimate of a firm's efficiency. Numerous approaches have been used in the data envelopment analysis (DEA) literature to account for undesirable outputs, of which the directional distance function (DDF) approach is the most widely used, as it allows for a simultaneous increase in desirable outputs and reduction of undesirable outputs. Additionally, slack-based DDF DEA approaches consider output shortfalls and input excesses in determining efficiency. When data uncertainty is present, a deterministic DEA model is not suitable because the effects of uncertain data are not considered. In this case, an interval data approach has been found suitable for accounting for data uncertainty, as it is simpler to model and needs less information about the underlying data distribution and membership function. The proposed model is an enhanced DEA model based on the DDF approach that incorporates a slack-based measure to determine efficiency in the presence of undesirable factors and data uncertainty. The interval data approach was used to estimate the values of inputs, undesirable outputs and desirable outputs. Two separate slack-based interval DEA models were constructed for the optimistic and pessimistic scenarios. The developed model was used to determine the efficiency of rice farmers from Kepala Batas, Kedah, and the results were compared with those obtained using a deterministic DDF DEA model. The study found that 15 out of 30 farmers are efficient in all cases. It was also found that the average efficiency of all farmers in the deterministic case is always lower than in the optimistic scenario and higher than in the pessimistic scenario. These results are consistent with expectations, since the optimistic scenario represents the best production situation and the pessimistic scenario the worst. The results show that the proposed model can be applied when data uncertainty is present in the production environment.

  7. Enhancement of Local Climate Analysis Tool

    NASA Astrophysics Data System (ADS)

    Horsfall, F. M.; Timofeyeva, M. M.; Dutton, J.

    2012-12-01

    The National Oceanic and Atmospheric Administration (NOAA) National Weather Service (NWS) will enhance its Local Climate Analysis Tool (LCAT) to incorporate specific capabilities to meet the needs of various users, including the energy, health, and other communities. LCAT is an online interactive tool that provides quick and easy access to climate data and allows users to conduct analyses at the local level, such as time series analysis, trend analysis, compositing, and correlation and regression techniques, with others to be incorporated as needed. LCAT uses principles of artificial intelligence in connecting human and computer perceptions of the application of data and scientific techniques, while multiprocessing simultaneous users' tasks. Future development includes expanding the type of data currently imported by LCAT (historical data at stations and climate divisions) to gridded reanalysis and General Circulation Model (GCM) data, which are available on global grids and thus will allow for climate studies to be conducted at international locations. We will describe ongoing activities to incorporate NOAA Climate Forecast System (CFS) reanalysis data (CFSR) and NOAA model output data, including output from the National Multi Model Ensemble Prediction System (NMME) and longer-term projection models, and plans to integrate LCAT into the Earth System Grid Federation (ESGF) and its protocols for accessing model output and observational data, to ensure there is no redundancy in the development of tools that facilitate scientific advancements and the use of climate model information in applications. Validation and inter-comparison of forecast models will be included as part of the enhancement to LCAT. To ensure sustained development, we will investigate options for open sourcing LCAT development, in particular, through the University Corporation for Atmospheric Research (UCAR).

  8. Petri net modelling of buffers in automated manufacturing systems.

    PubMed

    Zhou, M; Dicesare, F

    1996-01-01

    This paper presents Petri net models of buffers and a methodology by which buffers can be included in a system without introducing deadlocks or overflows. The context is automated manufacturing. The buffers and models are classified as random order or order preserved (first-in-first-out or last-in-first-out), single-input-single-output or multiple-input-multiple-output, part type and/or space distinguishable or indistinguishable, and bounded or safe. Theoretical results for the development of Petri net models which include buffer modules are developed. This theory provides the conditions under which the system properties of boundedness, liveness, and reversibility are preserved. The results are illustrated through two manufacturing system examples: a multiple machine and multiple buffer production line and an automatic storage and retrieval system in the context of flexible manufacturing.
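
    A minimal token-game sketch of the bounded-buffer idea follows: overflow is prevented structurally by modelling the free slots as a complementary place, one standard Petri net construction; the net is far simpler than the buffer modules developed in the paper.

```python
# Minimal Petri net interpreter and a bounded single-input-single-output buffer.

class PetriNet:
    def __init__(self, marking, transitions):
        self.m = dict(marking)          # place -> token count
        self.t = transitions            # name -> (input arcs, output arcs)

    def enabled(self, name):
        ins, _ = self.t[name]
        return all(self.m[p] >= w for p, w in ins.items())

    def fire(self, name):
        if not self.enabled(name):
            raise RuntimeError(f"transition {name} not enabled")
        ins, outs = self.t[name]
        for p, w in ins.items():
            self.m[p] -= w
        for p, w in outs.items():
            self.m[p] += w

CAPACITY = 3
net = PetriNet(
    marking={"m1_ready": 1, "buffer": 0, "free_slots": CAPACITY, "m2_ready": 1},
    transitions={
        # machine 1 deposits a part only if a buffer slot is free
        "m1_deposit": ({"m1_ready": 1, "free_slots": 1},
                       {"m1_ready": 1, "buffer": 1}),
        # machine 2 withdraws a part and releases the slot
        "m2_withdraw": ({"m2_ready": 1, "buffer": 1},
                        {"m2_ready": 1, "free_slots": 1}),
    },
)

for step in ["m1_deposit", "m1_deposit", "m2_withdraw", "m1_deposit"]:
    net.fire(step)
    # invariant: buffer + free_slots == CAPACITY, so the buffer cannot overflow
    print(step, "->", net.m)
```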

  9. Ecological Assimilation of Land and Climate Observations - the EALCO model

    NASA Astrophysics Data System (ADS)

    Wang, S.; Zhang, Y.; Trishchenko, A.

    2004-05-01

    Ecosystems are intrinsically dynamic and interact with climate at a highly integrated level. Climate variables are the main driving factors controlling the ecosystem physical, physiological, and biogeochemical processes, including energy balance, water balance, photosynthesis, respiration, and nutrient cycling. On the other hand, ecosystems function as integrated wholes and feed back on the climate system through their control of the surface radiation balance, energy partitioning, and greenhouse gas exchange. To improve our capability in climate change impact assessment, a comprehensive ecosystem model is required to address the many interactions between climate change and ecosystems. In addition, different ecosystems can have very different responses to climate change and its variation. To provide more scientific support for ecosystem impact assessment at the national scale, it is imperative that ecosystem models have the capability of assimilating large-scale geospatial information, including satellite observations, GIS datasets, and climate model outputs or reanalyses. The EALCO model (Ecological Assimilation of Land and Climate Observations) is developed for such purposes. EALCO includes the comprehensive interactions among ecosystem processes and climate, and assimilates a variety of remote sensing products and GIS databases. It provides both national and local scale model outputs for ecosystem responses to climate change, including radiation and energy balances, water conditions and hydrological cycles, carbon sequestration and greenhouse gas exchange, and nutrient (N) cycling. These results form the foundation for the assessment of climate change impacts on ecosystems, their services, and adaptation options. In this poster, the main algorithms for the radiation, energy, water, carbon, and nitrogen simulations are diagrammed. Sample input data layers at the Canada national scale are illustrated. Model outputs, including the Canada-wide spatial distributions of net radiation, evapotranspiration, gross primary production, net primary production, and net ecosystem production, are discussed.

  10. MPS Solidification Model. Volume 2: Operating guide and software documentation for the unsteady model

    NASA Technical Reports Server (NTRS)

    Maples, A. L.

    1981-01-01

    The operation of solidification Model 2 is described and documentation of the software associated with the model is provided. Model 2 calculates the macrosegregation in a rectangular ingot of a binary alloy as a result of unsteady horizontal axisymmetric bidirectional solidification. The solidification program allows interactive modification of calculation parameters as well as selection of graphical and tabular output. In batch mode, parameter values are input in card image form and output consists of printed tables of solidification functions. The operational aspects of Model 2 that differ substantially from Model 1 are described. The global flow diagrams and data structures of Model 2 are included. The primary program documentation is the code itself.

  11. FEMFLOW3D; a finite-element program for the simulation of three-dimensional aquifers; version 1.0

    USGS Publications Warehouse

    Durbin, Timothy J.; Bond, Linda D.

    1998-01-01

    This document also includes model validation, source code, and example input and output files. Model validation was performed using four test problems. For each test problem, the results of a model simulation with FEMFLOW3D were compared with either an analytic solution or the results of an independent numerical approach. The source code, written in the ANSI x3.9-1978 FORTRAN standard, and the complete input and output of an example problem are listed in the appendixes.

  12. A model for a continuous-wave iodine laser

    NASA Technical Reports Server (NTRS)

    Hwang, In H.; Tabibi, Bagher M.

    1990-01-01

    A model for a continuous-wave (CW) iodine laser has been developed and compared with the experimental results obtained from a solar-simulator-pumped CW iodine laser. The agreement between the calculated laser power output and the experimental results is generally good for various laser parameters even when the model includes only prominent rate coefficients. The flow velocity dependence of the output power shows that the CW iodine laser cannot be achieved with a flow velocity below 1 m/s for the present solar-simulator-pumped CW iodine laser system.

  13. Study of electrode slice forming of bicycle dynamo hub power connector

    NASA Astrophysics Data System (ADS)

    Chen, Dyi-Cheng; Jao, Chih-Hsuan

    2013-12-01

    Taiwan's bicycle industry has earned the country an international reputation as a bicycle kingdom, and the global push toward green energy in response to climate change has brought new opportunities to the industry through the development of the electrode slice of the dynamo hub and its power output connector. In this study, patents related to power output connectors were collected and used as the basis for the design, with the aim of realizing a power output connector with the fewest structural components. The design objectives for the connector were minimum cost, maximum structural strength, and highest output efficiency; considerations included ease of assembly, weather resistance, water resistance, corrosion resistance, vibration resistance, and stable power delivery. The computer-aided drawing software SolidWorks was used to establish 3D models of the power output connector parts, and the 3D models were imported into computer-aided finite element analysis software to simulate the expected manufacturing process of the connector parts. A series of simulation analyses, in which the variables were the first-stage and second-stage forming, was run to examine the effective stress, effective strain, press speed, and die radial load distribution when forming the electrode slice of a bicycle dynamo hub.

  14. The Cloud Feedback Model Intercomparison Project (CFMIP) Diagnostic Codes Catalogue – metrics, diagnostics and methodologies to evaluate, understand and improve the representation of clouds and cloud feedbacks in climate models

    DOE PAGES

    Tsushima, Yoko; Brient, Florent; Klein, Stephen A.; ...

    2017-11-27

    The CFMIP Diagnostic Codes Catalogue assembles cloud metrics, diagnostics and methodologies, together with programs to diagnose them from general circulation model (GCM) outputs written by various members of the CFMIP community. This aims to facilitate use of the diagnostics by the wider community studying climate and climate change. Here, this paper describes the diagnostics and metrics which are currently in the catalogue, together with examples of their application to model evaluation studies and a summary of some of the insights these diagnostics have provided into the main shortcomings in current GCMs. Analysis of outputs from CFMIP and CMIP6 experiments will also be facilitated by the sharing of diagnostic codes via this catalogue. Any code which implements diagnostics relevant to analysing clouds – including cloud–circulation interactions and the contribution of clouds to estimates of climate sensitivity in models – and which is documented in peer-reviewed studies, can be included in the catalogue. We very much welcome additional contributions to further support community analysis of CMIP6 outputs.

  15. The Cloud Feedback Model Intercomparison Project (CFMIP) Diagnostic Codes Catalogue – metrics, diagnostics and methodologies to evaluate, understand and improve the representation of clouds and cloud feedbacks in climate models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tsushima, Yoko; Brient, Florent; Klein, Stephen A.

    The CFMIP Diagnostic Codes Catalogue assembles cloud metrics, diagnostics and methodologies, together with programs to diagnose them from general circulation model (GCM) outputs written by various members of the CFMIP community. This aims to facilitate use of the diagnostics by the wider community studying climate and climate change. Here, this paper describes the diagnostics and metrics which are currently in the catalogue, together with examples of their application to model evaluation studies and a summary of some of the insights these diagnostics have provided into the main shortcomings in current GCMs. Analysis of outputs from CFMIP and CMIP6 experiments will also be facilitated by the sharing of diagnostic codes via this catalogue. Any code which implements diagnostics relevant to analysing clouds – including cloud–circulation interactions and the contribution of clouds to estimates of climate sensitivity in models – and which is documented in peer-reviewed studies, can be included in the catalogue. We very much welcome additional contributions to further support community analysis of CMIP6 outputs.

  16. Validity of cardiac output measurement by the thermodilution method in the presence of acute tricuspid regurgitation.

    PubMed

    Boerboom, L E; Kinney, T E; Olinger, G N; Hoffmann, R G

    1993-10-01

    Evaluation of patients with acute tricuspid insufficiency may include assessment of cardiac output by the thermodilution method. The accuracy of estimates of thermodilution-derived cardiac output in the presence of tricuspid insufficiency has been questioned. This study was designed to determine the validity of the thermodilution technique in a canine model of acute reversible tricuspid insufficiency. Cardiac output as measured by thermodilution and electromagnetic flowmeter was compared at two grades of regurgitation. The relationship between these two methods (thermodilution/electromagnetic) changed significantly from a regression slope of 1.01 +/- 0.18 (mean +/- standard deviation) during control conditions to a slope of 0.86 +/- 0.23 (p < 0.02) during severe regurgitation. No significant change was observed between control and mild regurgitation or between the initial control value and a control measurement repeated after tricuspid insufficiency was reversed at the termination of the study. This study shows that in a canine model of severe acute tricuspid regurgitation the thermodilution method underestimates cardiac output by an amount that is proportional to the level of cardiac output and to the grade of regurgitation.

  17. Boussinesq Modeling for Inlets, Harbors, and Structures (Bouss-2D)

    DTIC Science & Technology

    2015-10-30

    a wide variety of coastal and ocean engineering and naval architecture problems, including: transformation of waves over small to medium spatial...and outputs, and GIS data used in modeling. Recent applications include: Pillar Point Harbor, Oyster Point Marina, CA; Mouth of Columbia River

  18. Application of variable-gain output feedback for high-alpha control

    NASA Technical Reports Server (NTRS)

    Ostroff, Aaron J.

    1990-01-01

    A variable-gain, optimal, discrete, output feedback design approach that is applied to a nonlinear flight regime is described. The flight regime covers a wide angle-of-attack range that includes stall and post-stall. The paper includes brief descriptions of the variable-gain formulation, the discrete-control structure and flight equations used to apply the design approach, and the high-performance airplane model used in the application. Both linear and nonlinear analyses are shown for a longitudinal four-model design case with angles of attack of 5, 15, 35, and 60 deg. Linear and nonlinear simulations are compared for a single-point longitudinal design at 60 deg angle of attack. Nonlinear simulations for the four-model, multi-mode, variable-gain design include a longitudinal pitch-up and pitch-down maneuver and high angle-of-attack regulation during a lateral maneuver.

  19. Land surface Verification Toolkit (LVT)

    NASA Technical Reports Server (NTRS)

    Kumar, Sujay V.

    2017-01-01

    LVT is a framework developed to provide an automated, consolidated environment for systematic land surface model evaluation. It includes support for a range of in-situ, remote-sensing, and other model and reanalysis products, and supports the analysis of outputs from various LIS subsystems, including LIS-DA, LIS-OPT, and LIS-UE. Note: The Land Information System Verification Toolkit (LVT) is a NASA software tool designed to enable the evaluation, analysis and comparison of outputs generated by the Land Information System (LIS). The LVT software is released under the terms and conditions of the NASA Open Source Agreement (NOSA) Version 1.1 or later.

  20. An optimal control approach to the design of moving flight simulators

    NASA Technical Reports Server (NTRS)

    Sivan, R.; Ish-Shalom, J.; Huang, J.-K.

    1982-01-01

    An abstract flight simulator design problem is formulated in the form of an optimal control problem, which is solved for the linear-quadratic-Gaussian special case using a mathematical model of the vestibular organs. The optimization criterion used is the mean-square difference between the physiological outputs of the vestibular organs of the pilot in the aircraft and the pilot in the simulator. The dynamical equations are linearized, and the output signal is modeled as a random process with rational power spectral density. The method described yields the optimal structure of the simulator's motion generator, or 'washout filter'. A two-degree-of-freedom flight simulator design, including single output simulations, is presented.

  1. Documentation of model input and output values for simulation of pumping effects in Paradise Valley, a basin tributary to the Humboldt River, Humboldt County, Nevada

    USGS Publications Warehouse

    Carey, A.E.; Prudic, David E.

    1996-01-01

    Documentation is provided of model input and sample output used in a previous report for analysis of ground-water flow and simulated pumping scenarios in Paradise Valley, Humboldt County, Nevada. Documentation includes files containing input values and listings of sample output. The files, in American Standard Code for Information Interchange (ASCII) or binary format, are compressed and put on a 3-1/2-inch diskette. The decompressed files require approximately 8.4 megabytes of disk space on an International Business Machines (IBM)-compatible microcomputer using the Microsoft Disk Operating System (MS-DOS) version 5.0 or greater.

  2. Integrated Geothermal-CO2 Storage Reservoirs: FY1 Final Report

    DOE Data Explorer

    Buscheck, Thomas A.

    2012-01-01

    The purpose of phase 1 is to determine the feasibility of integrating geologic CO2 storage (GCS) with geothermal energy production. Phase 1 includes reservoir analyses to determine injector/producer well schemes that balance the generation of economically useful flow rates at the producers with the need to manage reservoir overpressure to reduce the risks associated with overpressure, such as induced seismicity and CO2 leakage to overlying aquifers. This submittal contains input and output files of the reservoir model analyses. A reservoir-model "index-html" file was sent in a previous submittal to organize the reservoir-model input and output files according to the sections of the FY1 Final Report to which they pertain. The recipient should save the file Reservoir-models-inputs-outputs-index.html in the same directory as the Section2.1.*.tar.gz files.

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ritchie, L.T.; Johnson, J.D.; Blond, R.M.

    The CRAC2 computer code is a revision of the Calculation of Reactor Accident Consequences computer code, CRAC, developed for the Reactor Safety Study. The CRAC2 computer code incorporates significant modeling improvements in the areas of weather sequence sampling and emergency response, and refinements to the plume rise, atmospheric dispersion, and wet deposition models. New output capabilities have also been added. This guide is to facilitate the informed and intelligent use of CRAC2. It includes descriptions of the input data, the output results, the file structures, control information, and five sample problems.

  4. Volterra-series-based nonlinear system modeling and its engineering applications: A state-of-the-art review

    NASA Astrophysics Data System (ADS)

    Cheng, C. M.; Peng, Z. K.; Zhang, W. M.; Meng, G.

    2017-03-01

    Nonlinear problems have drawn great interest and extensive attention from engineers, physicists, mathematicians and many other scientists because most real systems are inherently nonlinear in nature. To model and analyze nonlinear systems, many mathematical theories and methods have been developed, including the Volterra series. In this paper, the basic definition of the Volterra series is recapitulated, together with some frequency domain concepts which are derived from the Volterra series, including the general frequency response function (GFRF), the nonlinear output frequency response function (NOFRF), the output frequency response function (OFRF) and the associated frequency response function (AFRF). The relationship between the Volterra series and other nonlinear system models and nonlinear problem solving methods is discussed, including the Taylor series, Wiener series, NARMAX model, Hammerstein model, Wiener model, Wiener-Hammerstein model, harmonic balance method, perturbation method and Adomian decomposition. The challenging problems, and the state of the art, in the study of series convergence and kernel identification are comprehensively introduced. In addition, a detailed review is given of the applications of the Volterra series in mechanical engineering, aeroelasticity problems, control engineering, and electronic and electrical engineering.

  5. The extraction of simple relationships in growth factor-specific multiple-input and multiple-output systems in cell-fate decisions by backward elimination PLS regression.

    PubMed

    Akimoto, Yuki; Yugi, Katsuyuki; Uda, Shinsuke; Kudo, Takamasa; Komori, Yasunori; Kubota, Hiroyuki; Kuroda, Shinya

    2013-01-01

    Cells use common signaling molecules for the selective control of downstream gene expression and cell-fate decisions. The relationship between signaling molecules and downstream gene expression and cellular phenotypes is a multiple-input and multiple-output (MIMO) system and is difficult to understand due to its complexity. For example, it has been reported that, in PC12 cells, different types of growth factors activate MAP kinases (MAPKs) including ERK, JNK, and p38, and CREB, for selective protein expression of immediate early genes (IEGs) such as c-FOS, c-JUN, EGR1, JUNB, and FOSB, leading to cell differentiation, proliferation and cell death; however, how multiple-inputs such as MAPKs and CREB regulate multiple-outputs such as expression of the IEGs and cellular phenotypes remains unclear. To address this issue, we employed a statistical method called partial least squares (PLS) regression, which involves a reduction of the dimensionality of the inputs and outputs into latent variables and a linear regression between these latent variables. We measured 1,200 data points for MAPKs and CREB as the inputs and 1,900 data points for IEGs and cellular phenotypes as the outputs, and we constructed the PLS model from these data. The PLS model highlighted the complexity of the MIMO system and growth factor-specific input-output relationships of cell-fate decisions in PC12 cells. Furthermore, to reduce the complexity, we applied a backward elimination method to the PLS regression, in which 60 input variables were reduced to 5 variables, including the phosphorylation of ERK at 10 min, CREB at 5 min and 60 min, AKT at 5 min and JNK at 30 min. The simple PLS model with only 5 input variables demonstrated a predictive ability comparable to that of the full PLS model. The 5 input variables effectively extracted the growth factor-specific simple relationships within the MIMO system in cell-fate decisions in PC12 cells.
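
    The backward-elimination PLS idea can be sketched as below with scikit-learn on synthetic data; the tolerance, number of latent components, and stand-in "signalling inputs" and "expression outputs" are invented for the example and do not reproduce the authors' procedure.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# synthetic stand-in: 12 signalling "inputs" driving 4 correlated "outputs"
X = rng.normal(size=(120, 12))
B = np.zeros((12, 4))
B[[0, 2, 5], :] = rng.normal(size=(3, 4))
Y = X @ B + 0.2 * rng.normal(size=(120, 4))

def cv_r2(cols):
    """Cross-validated R^2 of a PLS regression restricted to the given inputs."""
    model = PLSRegression(n_components=min(3, len(cols)))
    return cross_val_score(model, X[:, cols], Y, cv=5, scoring="r2").mean()

# backward elimination: drop the input whose removal hurts CV R^2 least,
# stopping when any further removal costs more than a small tolerance
cols = list(range(X.shape[1]))
score = cv_r2(cols)
while len(cols) > 1:
    trials = [(cv_r2([c for c in cols if c != d]), d) for d in cols]
    best_score, drop = max(trials)
    if best_score < score - 0.01:
        break
    cols.remove(drop)
    score = best_score

print("retained inputs:", cols, " CV R^2:", round(score, 3))
```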

  6. THE USEPA'S METAL FINISHING FACILITY RISK SCREENING TOOL (MFFRST) AND POLLUTION PREVENTION TOOL (MFFP2T)

    EPA Science Inventory

    This presentation will provide an overview of the USEPA's Metal Finishing Facility Risk Screening Tool, including a discussion of the models used and outputs. The tool is currently being expanded to include pollution prevention considerations as part of the model. The current st...

  7. Documentation of the dynamic parameter, water-use, stream and lake flow routing, and two summary output modules and updates to surface-depression storage simulation and initial conditions specification options with the Precipitation-Runoff Modeling System (PRMS)

    USGS Publications Warehouse

    Regan, R. Steve; LaFontaine, Jacob H.

    2017-10-05

    This report documents seven enhancements to the U.S. Geological Survey (USGS) Precipitation-Runoff Modeling System (PRMS) hydrologic simulation code: two time-series input options, two new output options, and three updates of existing capabilities. The enhancements are (1) new dynamic parameter module, (2) new water-use module, (3) new Hydrologic Response Unit (HRU) summary output module, (4) new basin variables summary output module, (5) new stream and lake flow routing module, (6) update to surface-depression storage and flow simulation, and (7) update to the initial-conditions specification. This report relies heavily upon U.S. Geological Survey Techniques and Methods, book 6, chapter B7, which documents PRMS version 4 (PRMS-IV). A brief description of PRMS is included in this report.

  8. Reducing the uncertainty in the fidelity of seismic imaging results

    NASA Astrophysics Data System (ADS)

    Zhou, H. W.; Zou, Z.

    2017-12-01

    A key aspect in geoscientific inversion is quantifying the quality of the results. In seismic imaging, we must quantify the uncertainty of every imaging result based on field data, because data noise and methodology limitations may produce artifacts. Detection of artifacts is therefore an important aspect in uncertainty quantification in geoscientific inversion. Quantifying the uncertainty of seismic imaging solutions means assessing their fidelity, which defines the truthfulness of the imaged targets in terms of their resolution, position error and artifact. Key challenges to achieving the fidelity of seismic imaging include: (1) Difficulty to tell signal from artifact and noise; (2) Limitations in signal-to-noise ratio and seismic illumination; and (3) The multi-scale nature of the data space and model space. Most seismic imaging studies of the Earth's crust and mantle have employed inversion or modeling approaches. Though they are in opposite directions of mapping between the data space and model space, both inversion and modeling seek the best model to minimize the misfit in the data space, which unfortunately is not the output space. The fact that the selection and uncertainty of the output model are not judged in the output space has exacerbated the nonuniqueness problem for inversion and modeling. In contrast, the practice in exploration seismology has long established a two-fold approach of seismic imaging: Using velocity modeling building to establish the long-wavelength reference velocity models, and using seismic migration to map the short-wavelength reflectivity structures. Most interestingly, seismic migration maps the data into an output space called imaging space, where the output reflection images of the subsurface are formed based on an imaging condition. A good example is the reverse time migration, which seeks the reflectivity image as the best fit in the image space between the extrapolation of time-reversed waveform data and the prediction based on estimated velocity model and source parameters. I will illustrate the benefits of deciding the best output result in the output space for inversion, using examples from seismic imaging.

  9. A probabilistic method for constructing wave time-series at inshore locations using model scenarios

    USGS Publications Warehouse

    Long, Joseph W.; Plant, Nathaniel G.; Dalyander, P. Soupy; Thompson, David M.

    2014-01-01

    Continuous time-series of wave characteristics (height, period, and direction) are constructed using a base set of model scenarios and simple probabilistic methods. This approach utilizes an archive of computationally intensive, highly spatially resolved numerical wave model output to develop time-series of historical or future wave conditions without performing additional, continuous numerical simulations. The archive of model output contains wave simulations from a set of model scenarios derived from an offshore wave climatology. Time-series of wave height, period, direction, and associated uncertainties are constructed at locations included in the numerical model domain. The confidence limits are derived using statistical variability of oceanographic parameters contained in the wave model scenarios. The method was applied to a region in the northern Gulf of Mexico and assessed using wave observations at 12 m and 30 m water depths. Prediction skill for significant wave height is 0.58 and 0.67 at the 12 m and 30 m locations, respectively, with similar performance for wave period and direction. The skill of this simplified, probabilistic time-series construction method is comparable to existing large-scale, high-fidelity operational wave models but provides higher spatial resolution output at low computational expense. The constructed time-series can be developed to support a variety of applications including climate studies and other situations where a comprehensive survey of wave impacts on the coastal area is of interest.
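
    The scenario-lookup construction can be sketched as follows; a nearest-k search in a standardized offshore parameter space stands in for the paper's probabilistic weighting, and the archive, model node, and percentile choices are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# stand-in archive: offshore (Hs, Tp, Dir) scenarios and the pre-computed
# inshore significant wave height each scenario produced at one model node
offshore = np.column_stack([rng.uniform(0.5, 6, 200),    # Hs (m)
                            rng.uniform(4, 14, 200),     # Tp (s)
                            rng.uniform(0, 360, 200)])   # Dir (deg)
inshore_hs = 0.6 * offshore[:, 0] + 0.05 * offshore[:, 1] + rng.normal(0, 0.1, 200)

def construct_timeseries(obs, archive, values, k=5):
    """Build an inshore time series (median and spread) from archived scenarios.

    obs is a (T, 3) array of offshore observations. Note this toy distance
    ignores the circular nature of wave direction.
    """
    scale = archive.std(axis=0)
    out = []
    for row in obs:
        d = np.linalg.norm((archive - row) / scale, axis=1)
        nearest = values[np.argsort(d)[:k]]
        out.append((np.median(nearest),
                    np.percentile(nearest, 16), np.percentile(nearest, 84)))
    return np.array(out)

# an "observed" offshore record for 6 time steps
obs = np.column_stack([np.linspace(1, 4, 6), np.full(6, 9.0), np.full(6, 120.0)])
ts = construct_timeseries(obs, offshore, inshore_hs)
print(ts.round(2))   # columns: median, lower, upper inshore Hs
```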

  10. Measurement uncertainty and feasibility study of a flush airdata system for a hypersonic flight experiment

    NASA Technical Reports Server (NTRS)

    Whitmore, Stephen A.; Moes, Timothy R.

    1994-01-01

    Presented is a feasibility and error analysis for a hypersonic flush airdata system on a hypersonic flight experiment (HYFLITE). HYFLITE heating loads make intrusive airdata measurement impractical. Although this analysis is specifically for the HYFLITE vehicle and trajectory, the problems analyzed are generally applicable to hypersonic vehicles. A layout of the flush-port matrix is shown. Surface pressures are related to airdata parameters using a simple aerodynamic model. The model is linearized using small perturbations and inverted using nonlinear least-squares. Effects of various error sources on the overall uncertainty are evaluated using an error simulation. Error sources modeled include boundary-layer/viscous interactions, pneumatic lag, thermal transpiration in the sensor pressure tubing, misalignment in the matrix layout, thermal warping of the vehicle nose, sampling resolution, and transducer error. Using simulated pressure data for input to the estimation algorithm, effects caused by various error sources are analyzed by comparing estimator outputs with the original trajectory. To obtain ensemble averages the simulation is run repeatedly and output statistics are compiled. Output errors resulting from the various error sources are presented as a function of Mach number. Final uncertainties with all modeled error sources included are presented as a function of Mach number.
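
    The inversion step (estimating airdata states from flush-port pressures) can be illustrated with a toy pitch-plane model solved by nonlinear least squares; the pressure model, port angles, and noise level below are assumptions made for the example, not the HYFLITE system model.

```python
import numpy as np
from scipy.optimize import least_squares

# port locations: angles (rad) from the nose axis in the pitch plane
port_angles = np.radians([-40, -20, 0, 20, 40])

def port_pressures(alpha, qc, p_inf):
    """Toy flush-airdata pressure model: modified-Newtonian-like response of
    pitch-plane ports. Real flush airdata models also include sideslip,
    calibration parameters, and full port cone/clock geometry."""
    theta = port_angles - alpha          # local flow incidence at each port
    return p_inf + qc * np.cos(theta) ** 2

# simulate noisy measurements for a "true" state
true_state = dict(alpha=np.radians(5.0), qc=12000.0, p_inf=2500.0)
rng = np.random.default_rng(0)
meas = port_pressures(**true_state) + rng.normal(0, 20.0, port_angles.size)

# invert the pressure model with nonlinear least squares (cf. the linearized
# small-perturbation estimator described in the abstract)
def residual(x):
    alpha, qc, p_inf = x
    return port_pressures(alpha, qc, p_inf) - meas

sol = least_squares(residual, x0=[0.0, 8000.0, 2000.0])
alpha_hat, qc_hat, p_inf_hat = sol.x
print(np.degrees(alpha_hat), qc_hat, p_inf_hat)
```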

  11. Life cycle assessment modelling of waste-to-energy incineration in Spain and Portugal.

    PubMed

    Margallo, M; Aldaco, R; Irabien, A; Carrillo, V; Fischer, M; Bala, A; Fullana, P

    2014-06-01

    In recent years, waste management systems have been evaluated using a life cycle assessment (LCA) approach. A main shortcoming of prior studies was the focus on a mixture of waste with different characteristics. The estimation of emissions and consumptions associated with each waste fraction in these studies presented allocation problems. Waste-to-energy (WTE) incineration is a clear example in which municipal solid waste (MSW), comprising many types of materials, is processed to produce several outputs. This paper investigates an approach to better understand incineration processes in Spain and Portugal by applying a multi-input/output allocation model. The application of this model enabled predictions of WTE inputs and outputs, including the consumption of ancillary materials and combustibles, air emissions, solid wastes, and the energy produced during the combustion of each waste fraction. © The Author(s) 2014.

  12. The Flow Engine Framework: A Cognitive Model of Optimal Human Experience

    PubMed Central

    Šimleša, Milija; Guegan, Jérôme; Blanchard, Edouard; Tarpin-Bernard, Franck; Buisine, Stéphanie

    2018-01-01

    Flow is a well-known concept in the fields of positive and applied psychology. Examination of a large body of flow literature suggests there is a need for a conceptual model rooted in a cognitive approach to explain how this psychological phenomenon works. In this paper, we propose the Flow Engine Framework, a theoretical model explaining dynamic interactions between rearranged flow components and fundamental cognitive processes. Using an IPO framework (Inputs – Processes – Outputs) including a feedback process, we organize flow characteristics into three logically related categories describing the process of flow: inputs (requirements for flow), mediating and moderating cognitive processes (attentional and motivational mechanisms) and outputs (subjective and objective outcomes). Comparing flow with an engine, inputs are depicted as the fuel, core processes as the cylinder strokes, and outputs as the power created to provide motion. PMID:29899807

  13. Preliminary investigation of the effects of eruption source parameters on volcanic ash transport and dispersion modeling using HYSPLIT

    NASA Astrophysics Data System (ADS)

    Stunder, B.

    2009-12-01

    Atmospheric transport and dispersion (ATD) models are used in real-time at Volcanic Ash Advisory Centers to predict the location of airborne volcanic ash at a future time because of the hazardous nature of volcanic ash. Transport and dispersion models usually do not include eruption column physics, but start with an idealized eruption column. Eruption source parameters (ESP) input to the models typically include column top, eruption start time and duration, volcano latitude and longitude, ash particle size distribution, and total mass emission. An example based on the Okmok, Alaska, eruption of July 12-14, 2008, was used to qualitatively estimate the effect of various model inputs on transport and dispersion simulations using the NOAA HYSPLIT model. Variations included changing the ash column top and bottom, eruption start time and duration, particle size specifications, simulations with and without gravitational settling, and the effect of different meteorological model data. Graphical ATD model output of ash concentration from the various runs was qualitatively compared. Some parameters such as eruption duration and ash column depth had a large effect, while simulations using only small particles or changing the particle shape factor had much less of an effect. Some other variations such as using only large particles had a small effect for the first day or so after the eruption, then a larger effect on subsequent days. Example probabilistic output will be shown for an ensemble of dispersion model runs with various model inputs. Model output such as this may be useful as a means to account for some of the uncertainties in the model input. To improve volcanic ash ATD models, a reference database for volcanic eruptions is needed, covering many volcanoes. The database should include three major components: (1) eruption source, (2) ash observations, and (3) analyses meteorology. In addition, information on aggregation or other ash particle transformation processes would be useful.

  14. Re-using biological devices: a model-aided analysis of interconnected transcriptional cascades designed from the bottom-up.

    PubMed

    Pasotti, Lorenzo; Bellato, Massimo; Casanova, Michela; Zucca, Susanna; Cusella De Angelis, Maria Gabriella; Magni, Paolo

    2017-01-01

    The study of simplified, ad-hoc constructed model systems can help to elucidate if quantitatively characterized biological parts can be effectively re-used in composite circuits to yield predictable functions. Synthetic systems designed from the bottom-up can enable the building of complex interconnected devices via a rational approach, supported by mathematical modelling. However, such a process is affected by different, usually non-modelled, unpredictability sources, like cell burden. Here, we analyzed a set of synthetic transcriptional cascades in Escherichia coli. We aimed to test the predictive power of a simple Hill function activation/repression model (no-burden model, NBM) and of a recently proposed model, including Hill functions and the modulation of protein expression by cell load (burden model, BM). To test the bottom-up approach, the circuit collection was divided into training and test sets, used to learn individual component functions and test the predicted output of interconnected circuits, respectively. Among the constructed configurations, two test set circuits showed unexpected logic behaviour. Both NBM and BM were able to predict the quantitative output of interconnected devices with expected behaviour, but only the BM was also able to predict the output of one circuit with unexpected behaviour. Moreover, considering training and test set data together, the BM captures circuit outputs with higher accuracy than the NBM, which is unable to capture the experimental output exhibited by some of the circuits even qualitatively. Finally, resource usage parameters, estimated via BM, guided the successful construction of new corrected variants of the two circuits showing unexpected behaviour. Superior descriptive and predictive capabilities were achieved when resource limitations were modelled, but further efforts are needed to improve the accuracy of models for biological engineering.
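
    As a rough illustration of the two model classes compared above, the sketch below contrasts a plain Hill activation function (in the spirit of the no-burden model) with a variant whose output is additionally scaled by a simple resource-competition factor (a stand-in for the burden model). The functional form of the burden term and all parameter values are assumptions for illustration, not the authors' fitted model.

    ```python
    import numpy as np

    def hill_activation(x, vmax, k, n):
        """Plain Hill activation: the NBM-style input/output function."""
        return vmax * x**n / (k**n + x**n)

    def hill_with_burden(x, vmax, k, n, total_demand, j_half):
        """Hill activation scaled by a crude resource-competition factor (BM-style).

        total_demand lumps the expression demand of all co-hosted circuits;
        j_half sets the demand at which available resources drop to one half.
        Both are illustrative assumptions, not fitted parameters.
        """
        resource_fraction = 1.0 / (1.0 + total_demand / j_half)
        return hill_activation(x, vmax, k, n) * resource_fraction

    x = np.logspace(-2, 2, 5)            # inducer level (arbitrary units)
    print(hill_activation(x, vmax=100.0, k=1.0, n=2.0))
    print(hill_with_burden(x, vmax=100.0, k=1.0, n=2.0, total_demand=50.0, j_half=100.0))
    ```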

  15. Abundance and recruitment data for Undaria pinnatifida in Brest harbour, France: Model versus field results.

    PubMed

    Murphy, James T; Voisin, Marie; Johnson, Mark; Viard, Frédérique

    2016-06-01

    The data presented in this article are related to the research article entitled "A modelling approach to explore the critical environmental parameters influencing the growth and establishment of the invasive seaweed Undaria pinnatifida in Europe" [1]. This article describes raw simulation data output from a novel individual-based model of the invasive kelp species Undaria pinnatifida. It also includes field data of monthly abundance and recruitment values for a population of invasive U. pinnatifida (in Brest harbour, France) that were used to validate the model. The raw model output and field data are made publicly available in order to enable critical analysis of the model predictions and to inform future modelling efforts of the study species.

  16. Real-time motion compensation for EM bronchoscope tracking with smooth output - ex-vivo validation

    NASA Astrophysics Data System (ADS)

    Reichl, Tobias; Gergel, Ingmar; Menzel, Manuela; Hautmann, Hubert; Wegner, Ingmar; Meinzer, Hans-Peter; Navab, Nassir

    2012-02-01

    Navigated bronchoscopy provides benefits for endoscopists and patients, but accurate tracking information is needed. We present a novel real-time approach for bronchoscope tracking combining electromagnetic (EM) tracking, airway segmentation, and a continuous model of output. We augment a previously published approach by including segmentation information in the tracking optimization instead of image similarity. Thus, the new approach is feasible in real-time. Since the true bronchoscope trajectory is continuous, the output is modeled using splines and the control points are optimized with respect to displacement from EM tracking measurements and spatial relation to segmented airways. Accuracy of the proposed method and its components is evaluated on a ventilated porcine ex-vivo lung with respect to ground truth data acquired from a human expert. We demonstrate the robustness of the output of the proposed method against added artificial noise in the input data. Smoothness in terms of inter-frame distance is shown to remain below 2 mm, even when up to 5 mm of Gaussian noise are added to the input. The approach is shown to be easily extensible to include other measures like image similarity.

  17. Calculating distributed glacier mass balance for the Swiss Alps from RCM output: Development and testing of downscaling and validation methods

    NASA Astrophysics Data System (ADS)

    Machguth, H.; Paul, F.; Kotlarski, S.; Hoelzle, M.

    2009-04-01

    Climate model output has been applied in several studies on glacier mass balance calculation. In these studies, computation of mass balance has mostly been performed at the native resolution of the climate model output, or data from individual cells were selected and statistically downscaled. Little attention has been given to the issue of downscaling entire fields of climate model output to a resolution fine enough to compute glacier mass balance in rugged high-mountain terrain. In this study we explore the use of gridded output from a regional climate model (RCM) to drive a distributed mass balance model for the perimeter of the Swiss Alps and the time frame 1979-2003. Our focus lies on the development and testing of downscaling and validation methods. The mass balance model runs at daily steps and 100 m spatial resolution while the RCM REMO provides daily grids (approx. 18 km resolution) of dynamically downscaled re-analysis data. Interpolation techniques and sub-grid parametrizations are combined to bridge the gap in spatial resolution and to obtain daily input fields of air temperature, global radiation and precipitation. The meteorological input fields are compared to measurements at 14 high-elevation weather stations. Computed mass balances are compared to various sets of direct measurements, including stake readings and mass balances for entire glaciers. The validation procedure is performed separately for annual, winter and summer balances. Time series of mass balances for entire glaciers obtained from the model run agree well with observed time series. Summer melt measured at stakes on several glaciers is well reproduced by the model, whereas observed accumulation is either over- or underestimated. It is shown that these shifts are systematic and correlated to regional biases in the meteorological input fields. We conclude that the gap in spatial resolution is not a large drawback, while biases in RCM output are a major limitation to model performance. The development and testing of methods to reduce regionally variable biases in entire fields of RCM output should be a focus of future studies.

  18. Optimization of diode-pumped doubly QML laser with neodymium-doped vanadate crystals at 1.34 μm

    NASA Astrophysics Data System (ADS)

    Zhang, Gang; Jiao, Zhiyong

    2018-05-01

    We present a theoretical model for a diode-pumped, 1.34 μm V3+:YAG laser that is equipped with an acousto-optic modulator. The model includes the loss introduced by the acousto-optic modulator combined with the physical properties of the laser resonator, the neodymium-doped vanadate crystals and the output coupler. The parameters are adjusted within a reasonable range to optimize the pulse output characteristics. A typical Q-switched and mode-locked Nd:Lu0.15Y0.85VO4 laser at 1.34 μm with an acousto-optic modulator and V3+:YAG is set up, and the experimental output characteristics are consistent with the theoretical simulation results.

  19. Sensitivity Analysis of the Integrated Medical Model for ISS Programs

    NASA Technical Reports Server (NTRS)

    Goodenow, D. A.; Myers, J. G.; Arellano, J.; Boley, L.; Garcia, Y.; Saile, L.; Walton, M.; Kerstman, E.; Reyes, D.; Young, M.

    2016-01-01

    Sensitivity analysis estimates the relative contribution of the uncertainty in input values to the uncertainty of model outputs. Partial Rank Correlation Coefficient (PRCC) and Standardized Rank Regression Coefficient (SRRC) are methods of conducting sensitivity analysis on nonlinear simulation models like the Integrated Medical Model (IMM). The PRCC method estimates the sensitivity using partial correlation of the ranks of the generated input values to each generated output value. The method is termed partial because adjustments are made for the linear effects of all the other input values in the calculation of correlation between a particular input and each output. In SRRC, standardized regression-based coefficients measure the sensitivity of each input, adjusted for all the other inputs, on each output. Because the relative ranking of each of the inputs and outputs is used, as opposed to the values themselves, both methods accommodate the nonlinear relationship of the underlying model. As part of the IMM v4.0 validation study, simulations are available that predict 33 person-missions on ISS and 111 person-missions on STS. These simulated data predictions feed the sensitivity analysis procedures. The inputs to the sensitivity procedures include the number of occurrences of each of the one hundred IMM medical conditions generated over the simulations and the associated IMM outputs: total quality time lost (QTL), number of evacuations (EVAC), and number of loss of crew lives (LOCL). The IMM team will report the results of using PRCC and SRRC on IMM v4.0 predictions of the ISS and STS missions created as part of the external validation study. Tornado plots will assist in the visualization of the condition-related input sensitivities to each of the main outcomes. The outcomes of this sensitivity analysis will drive review focus by identifying conditions where changes in uncertainty could drive changes in overall model output uncertainty. These efforts are an integral part of the overall verification, validation, and credibility review of IMM v4.0.
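
    A minimal sketch of the PRCC calculation described above (written independently of the IMM code, with made-up inputs): rank-transform the inputs and output, remove the linear effect of the other ranked inputs from both the input of interest and the output, and correlate the residuals.

    ```python
    import numpy as np

    def prcc(X, y):
        """Partial rank correlation coefficient of each column of X with y.

        X: (n_samples, n_inputs) array of sampled input values.
        y: (n_samples,) array of the corresponding model output.
        """
        def ranks(a):
            return np.argsort(np.argsort(a, axis=0), axis=0).astype(float)

        Rx, ry = ranks(X), ranks(y)
        out = np.empty(X.shape[1])
        for j in range(X.shape[1]):
            others = np.column_stack([np.ones(len(ry)), np.delete(Rx, j, axis=1)])
            # Residuals after removing the linear effect of the other ranked inputs.
            res_x = Rx[:, j] - others @ np.linalg.lstsq(others, Rx[:, j], rcond=None)[0]
            res_y = ry - others @ np.linalg.lstsq(others, ry, rcond=None)[0]
            out[j] = np.corrcoef(res_x, res_y)[0, 1]
        return out

    rng = np.random.default_rng(1)
    X = rng.uniform(size=(200, 3))                     # three made-up input factors
    y = np.exp(2 * X[:, 0]) + 0.5 * X[:, 1] + 0.05 * rng.normal(size=200)
    print(prcc(X, y))                                  # the first factor should dominate
    ```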

  20. Output levels of commercially available portable compact disc players and the potential risk to hearing.

    PubMed

    Fligor, Brian J; Cox, L Clarke

    2004-12-01

    To measure the sound levels generated by the headphones of commercially available portable compact disc players and provide hearing healthcare providers with safety guidelines based on a theoretical noise dose model. Using a Knowles Electronics Manikin for Acoustical Research and a personal computer, output levels across volume control settings were recorded from headphones driven by a standard signal (white noise) and compared with output levels from music samples of eight different genres. Many commercially available models from different manufacturers were investigated. Several different styles of headphones (insert, supra-aural, vertical, and circumaural) were used to determine if style of headphone influenced output level. Free-field equivalent sound pressure levels measured at maximum volume control setting ranged from 91 dBA to 121 dBA. Output levels varied across manufacturers and style of headphone, although generally the smaller the headphone, the higher the sound level for a given volume control setting. Specifically, in one manufacturer, insert earphones increased output level 7-9 dB, relative to the output from stock headphones included in the purchase of the CD player. In a few headphone-CD player combinations, peak sound pressure levels exceeded 130 dB SPL. Based on measured sound pressure levels across systems and the noise dose model recommended by National Institute for Occupational Safety and Health for protecting the occupational worker, a maximum permissible noise dose would typically be reached within 1 hr of listening with the volume control set to 70% of maximum gain using supra-aural headphones. Using headphones that resulted in boosting the output level (e.g., insert earphones used in this study) would significantly decrease the maximum safe volume control setting; this effect was unpredictable from one manufacturer to another. In the interest of providing a straightforward recommendation that should protect the hearing of the majority of consumers, reasonable guidelines would include a recommendation to limit headphone use to 1 hr or less per day if using supra-aural style headphones at a gain control setting of 60% of maximum.
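
    As a rough worked example of the NIOSH-style dose reasoning used above (an illustrative calculation, not the authors' exact dosimetry): with an 85 dBA, 8-hour criterion and a 3-dB exchange rate, the permissible listening time is T = 8 / 2^((L - 85)/3) hours, so a free-field equivalent level around 94 dBA, for example, would allow roughly one hour per day under this rule.

    ```python
    def permissible_hours(level_dba, criterion=85.0, exchange_rate=3.0, base_hours=8.0):
        """NIOSH-style permissible exposure time (hours) at a constant A-weighted level."""
        return base_hours / 2.0 ** ((level_dba - criterion) / exchange_rate)

    def daily_dose_percent(exposures):
        """Daily noise dose (%) for a list of (level_dba, hours) exposures."""
        return 100.0 * sum(hours / permissible_hours(level) for level, hours in exposures)

    # Illustrative values only -- actual levels depend on player, headphone and volume setting.
    print(permissible_hours(94.0))                 # ~1.0 hour
    print(daily_dose_percent([(94.0, 1.0)]))       # ~100% of the daily allowance
    ```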

  1. An analytical approach to thermal modeling of Bridgman type crystal growth: One dimensional analysis. Computer program users manual

    NASA Technical Reports Server (NTRS)

    Cothran, E. K.

    1982-01-01

    The computer program written in support of one dimensional analytical approach to thermal modeling of Bridgman type crystal growth is presented. The program listing and flow charts are included, along with the complete thermal model. Sample problems include detailed comments on input and output to aid the first time user.

  2. Development of model reference adaptive control theory for electric power plant control applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mabius, L.E.

    1982-09-15

    The scope of this effort includes the theoretical development of a multi-input, multi-output (MIMO) Model Reference Control (MRC) algorithm (i.e., a model-following control law), a Model Reference Adaptive Control (MRAC) algorithm, and the formulation of a nonlinear model of a typical electric power plant. Previous single-input, single-output MRAC algorithm designs have been generalized to MIMO MRAC designs using the MIMO MRC algorithm. This MRC algorithm, which has been developed using Command Generator Tracker methodologies, represents the steady state behavior (in the adaptive sense) of the MRAC algorithm. The MRC algorithm is a fundamental component in the MRAC design and stability analysis. An enhanced MRC algorithm, which has been developed for systems with more controls than regulated outputs, alleviates the MRC stability constraint of stable plant transmission zeroes. The nonlinear power plant model is based on the Cromby model with the addition of a governor valve management algorithm, turbine dynamics and turbine interactions with extraction flows. An application of the MRC algorithm to a linearization of this model demonstrates its applicability to power plant systems. In particular, the generated power changes at 7% per minute while throttle pressure and temperature, reheat temperature and drum level are held constant with a reasonable level of control. The enhanced algorithm significantly reduces control fluctuations without modifying the output response.

  3. ACIRF user's guide: Theory and examples

    NASA Astrophysics Data System (ADS)

    Dana, Roger A.

    1989-12-01

    Design and evaluation of radio frequency systems that must operate through ionospheric disturbances resulting from high altitude nuclear detonations requires an accurate channel model. This model must include the effects of high gain antennas that may be used to receive the signals. Such a model can then be used to construct realizations of the received signal for use in digital simulations of trans-ionospheric links or for use in hardware channel simulators. The FORTRAN channel model ACIRF (Antenna Channel Impulse Response Function) generates random realizations of the impulse response function at the outputs of multiple antennas. This user's guide describes the FORTRAN program ACIRF (version 2.0) that generates realizations of channel impulse response functions at the outputs of multiple antennas with arbitrary beamwidths, pointing angles, and relative positions. This channel model is valid under strong scattering conditions when Rayleigh fading statistics apply. Both frozen-in and turbulent models for the temporal fluctuations are included in this version of ACIRF. The theory of the channel model is described and several examples are given.

  4. SARAH 3.2: Dirac gauginos, UFO output, and more

    NASA Astrophysics Data System (ADS)

    Staub, Florian

    2013-07-01

    SARAH is a Mathematica package optimized for the fast, efficient and precise study of supersymmetric models beyond the MSSM: a new model can be defined in a short form and all vertices are derived. This allows SARAH to create model files for FeynArts/FormCalc, CalcHep/CompHep and WHIZARD/O'Mega. The newest version of SARAH now provides the possibility to create model files in the UFO format which is supported by MadGraph 5, MadAnalysis 5, GoSam, and soon by Herwig++. Furthermore, SARAH also calculates the mass matrices, RGEs and 1-loop corrections to the mass spectrum. This information is used to write source code for SPheno in order to create a precision spectrum generator for the given model. This spectrum-generator-generator functionality as well as the output of WHIZARD and CalcHep model files has seen further improvement in this version. Also models including Dirac gauginos are supported with the new version of SARAH, and additional checks for the consistency of the implementation of new models have been created. Program summaryProgram title:SARAH Catalogue identifier: AEIB_v2_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AEIB_v2_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 3 22 411 No. of bytes in distributed program, including test data, etc.: 3 629 206 Distribution format: tar.gz Programming language: Mathematica. Computer: All for which Mathematica is available. Operating system: All for which Mathematica is available. Classification: 11.1, 11.6. Catalogue identifier of previous version: AEIB_v1_0 Journal reference of previous version: Comput. Phys. Comm. 182 (2011) 808 Does the new version supersede the previous version?: Yes, the new version includes all known features of the previous version but also provides the new features mentioned below. Nature of problem: To use Madgraph for new models it is necessary to provide the corresponding model files which include all information about the interactions of the model. However, the derivation of the vertices for a given model and putting those into model files which can be used with Madgraph is usually very time consuming. Dirac gauginos are not present in the minimal supersymmetric standard model (MSSM) or many extensions of it. Dirac mass terms for vector superfields lead to new structures in the supersymmetric (SUSY) Lagrangian (bilinear mass term between gaugino and matter fermion as well as new D-terms) and modify also the SUSY renormalization group equations (RGEs). The Dirac character of gauginos can change the collider phenomenology. In addition, they come with an extended Higgs sector for which a precise calculation of the 1-loop masses has not happened so far. Solution method: SARAH calculates the complete Lagrangian for a given model whose gauge sector can be any direct product of SU(N) gauge groups. The chiral superfields can transform as any, irreducible representation with respect to these gauge groups and it is possible to handle an arbitrary number of symmetry breakings or particle rotations. Also the gauge fixing is automatically added. Using this information, SARAH derives all vertices for a model. These vertices can be exported to model files in the UFO which is supported by Madgraph and other codes like GoSam, MadAnalysis or ALOHA. The user can also study models with Dirac gauginos. 
In that case SARAH includes all possible terms in the Lagrangian stemming from the new structures and can also calculate the RGEs. The entire impact of these terms is then taken into account in the output of SARAH to UFO, CalcHep, WHIZARD, FeynArts and SPheno. Reasons for new version: SARAH provides, with this version, the possibility of creating model files in the UFO format. The UFO format is supposed to become a standard format for model files which should be supported by many different tools in the future. Also models with Dirac gauginos were not supported in earlier versions. Summary of revisions: Support of models with Dirac gauginos. Output of model files in the UFO format, speed improvement in the output of WHIZARD model files, CalcHep output supports the internal diagonalization of mass matrices, output of control files for LHPC spectrum plotter, support of generalized PDG numbering scheme PDG.IX, improvement of the calculation of the decay widths and branching ratios with SPheno, the calculation of new low energy observables is added to the SPheno output, the handling of gauge fixing terms has been significantly simplified. Restrictions: SARAH can only derive the Lagrangian in an automatized way for N=1 SUSY models, but not for those with more SUSY generators. Furthermore, SARAH supports only renormalizable operators in the output of model files in the UFO format and also for CalcHep, FeynArts and WHIZARD. Also color sextets are not yet included in the model files for Monte Carlo tools. Dimension 5 operators are only supported in the calculation of the RGEs and mass matrices. Unusual features: SARAH does not need the Lagrangian of a model as input to calculate the vertices. The gauge structure, particle content and superpotential as well as rotations stemming from gauge symmetry breaking are sufficient. All further information is derived by SARAH on its own. Therefore, the model files are very short and the implementation of new models is fast and easy. In addition, the implementation of a model can be checked for physical and formal consistency. In addition, SARAH can generate Fortran code for a full 1-loop analysis of the mass spectrum in the context of Dirac gauginos. Running time: Measured CPU time for the evaluation of the MSSM using a Lenovo Thinkpad X220 with i7 processor (2.53 GHz). Calculating the complete Lagrangian: 9 s. Calculating all vertices: 51 s. Output of the UFO model files: 49 s.

  5. Dynamic Simulation of Human Gait Model With Predictive Capability.

    PubMed

    Sun, Jinming; Wu, Shaoli; Voglewede, Philip A

    2018-03-01

    In this paper, it is proposed that the central nervous system (CNS) controls human gait using a predictive control approach in conjunction with classical feedback control instead of exclusive classical feedback control theory that controls based on past error. To validate this proposition, a dynamic model of human gait is developed using a novel predictive approach to investigate the principles of the CNS. The model developed includes two parts: a plant model that represents the dynamics of human gait and a controller that represents the CNS. The plant model is a seven-segment, six-joint model that has nine degrees-of-freedom (DOF). The plant model is validated using data collected from able-bodied human subjects. The proposed controller utilizes model predictive control (MPC). MPC uses an internal model to predict the output in advance, compare the predicted output to the reference, and optimize the control input so that the predicted error is minimal. To decrease the complexity of the model, two joints are controlled using a proportional-derivative (PD) controller. The developed predictive human gait model is validated by simulating able-bodied human gait. The simulation results show that the developed model is able to simulate the kinematic output close to experimental data.
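
    The MPC loop sketched below is a generic, unconstrained linear illustration of the predict-compare-optimize idea described above. The double-integrator plant, quadratic cost and parameter values are assumptions for illustration; the controller in the paper acts on a far richer nine-DOF musculoskeletal model.

    ```python
    import numpy as np

    # Assumed discrete-time plant (double integrator), not the gait model itself.
    dt = 0.01
    A = np.array([[1.0, dt], [0.0, 1.0]])
    B = np.array([[0.5 * dt**2], [dt]])
    C = np.array([[1.0, 0.0]])

    def mpc_step(x, ref, horizon=20, r_weight=1e-4):
        """Unconstrained linear MPC: predict over the horizon, minimize tracking error
        plus control effort in closed form, and return the first control input."""
        m = B.shape[1]
        # Build prediction matrices: Y = F x + G U over the horizon.
        F = np.vstack([C @ np.linalg.matrix_power(A, k + 1) for k in range(horizon)])
        G = np.zeros((horizon, horizon * m))
        for i in range(horizon):
            for j in range(i + 1):
                G[i, j*m:(j+1)*m] = (C @ np.linalg.matrix_power(A, i - j) @ B).ravel()
        H = G.T @ G + r_weight * np.eye(horizon * m)
        U = np.linalg.solve(H, G.T @ (ref - F @ x))
        return U[:m]                                   # receding horizon: apply first input only

    # Track a reference trajectory (invented sinusoid) for a few steps.
    x = np.zeros(2)
    for k in range(5):
        ref = np.sin(0.05 * (np.arange(20) + k + 1)).reshape(-1, 1)
        u = mpc_step(x.reshape(-1, 1), ref)
        x = (A @ x.reshape(-1, 1) + B @ u).ravel()
        print(k, float(x[0]))
    ```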

  6. A programmable power processor for high power space applications

    NASA Technical Reports Server (NTRS)

    Lanier, J. R., Jr.; Graves, J. R.; Kapustka, R. E.; Bush, J. R., Jr.

    1982-01-01

    A Programmable Power Processor (P3) has been developed for application in future large space power systems. The P3 is capable of operation over a wide range of input voltage (26 to 375 Vdc) and output voltage (24 to 180 Vdc). The peak output power capability is 18 kW (180 V at 100 A). The output characteristics of the P3 can be programmed to any voltage and/or current level within the limits of the processor and may be controlled as a function of internal or external parameters. Seven breadboard P3s and one 'flight-type' engineering model P3 have been built and tested both individually and in electrical power systems. The programmable feature allows the P3 to be used in a variety of applications by changing the output characteristics. Test results, including efficiency at various input/output combinations, transient response, and output impedance, are presented.

  7. Wrapping Python around MODFLOW/MT3DMS based groundwater models

    NASA Astrophysics Data System (ADS)

    Post, V.

    2008-12-01

    Numerical models that simulate groundwater flow and solute transport require a great amount of input data that is often organized into different files. A large proportion of the input data consists of spatially-distributed model parameters. The model output consists of a variety of data such as heads, fluxes and concentrations. Typically all files have different formats. Consequently, preparing input and managing output is a complex and error-prone task. Proprietary software tools are available that facilitate the preparation of input files and analysis of model outcomes. The use of such software may be limited if it does not support all the features of the groundwater model or when the costs of such tools are prohibitive. Therefore a Python library was developed that contains routines to generate input files and process output files of MODFLOW/MT3DMS based models. The library is freely available and has an open structure so that the routines can be customized and linked into other scripts and libraries. The current set of functions supports the generation of input files for MODFLOW and MT3DMS, including the capability to read spatially-distributed input parameters (e.g. hydraulic conductivity) from PNG files. Both ASCII and binary output files can be read efficiently, allowing for visualization of, for example, solute concentration patterns in contour plots with superimposed flow vectors using matplotlib. Series of contour plots are then easily saved as an animation. The subroutines can also be used within scripts to calculate derived quantities such as the mass of a solute within a particular region of the model domain. Using Python as a wrapper around groundwater models provides an efficient and flexible way of processing input and output data, which is not constrained by limitations of third-party products.
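
    The post-processing workflow described here can be sketched generically. The library in the abstract is not named, so the snippet below uses assumed file names and a plain-text array format rather than the real MODFLOW/MT3DMS binary readers: it loads a concentration field and a flow field, contours the concentrations, overlays flow vectors, and integrates solute mass over a sub-region.

    ```python
    import numpy as np
    import matplotlib.pyplot as plt

    # Assumed inputs: plain-text arrays exported from a flow/transport run (hypothetical names).
    conc = np.loadtxt("concentration_layer1.txt")     # shape (nrow, ncol)
    vx = np.loadtxt("qx_layer1.txt")
    vy = np.loadtxt("qy_layer1.txt")
    dx = dy = 10.0                                    # assumed uniform cell size, m

    ny, nx = conc.shape
    x = np.arange(nx) * dx
    y = np.arange(ny) * dy

    # Contour plot of concentration with superimposed flow vectors.
    fig, ax = plt.subplots()
    cs = ax.contourf(x, y, conc, levels=15)
    ax.quiver(x[::4], y[::4], vx[::4, ::4], vy[::4, ::4])
    fig.colorbar(cs, ax=ax, label="concentration")
    fig.savefig("conc_frame_000.png", dpi=150)        # frames can be stitched into an animation

    # Derived quantity: solute mass in a sub-region (assumed saturated thickness and porosity).
    thickness, porosity = 5.0, 0.3
    region = conc[10:30, 20:40]
    mass = region.sum() * dx * dy * thickness * porosity
    print("solute mass in region:", mass)
    ```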

  8. Evaluation of simulated ocean carbon in the CMIP5 earth system models

    NASA Astrophysics Data System (ADS)

    Orr, James; Brockmann, Patrick; Seferian, Roland; Servonnat, Jérôme; Bopp, Laurent

    2013-04-01

    We maintain a centralized model output archive containing output from the previous generation of Earth System Models (ESMs), 7 models used in the IPCC AR4 assessment. Output is in a common format located on a centralized server and is publicly available through a web interface. Through the same interface, LSCE/IPSL has also made available output from the Coupled Model Intercomparison Project (CMIP5), the foundation for the ongoing IPCC AR5 assessment. The latter includes ocean biogeochemical fields from more than 13 ESMs. Modeling partners across 3 EU projects refer to the combined AR4-AR5 archive and comparison as OCMIP5, building on previous phases of OCMIP (Ocean Carbon Cycle Intercomparison Project) and making a clear link to IPCC AR5 (CMIP5). While now focusing on assessing the latest generation of results (AR5, CMIP5), this effort is also able to put them in context (AR4). For model comparison and evaluation, we have also stored computed derived variables (e.g., those needed to assess ocean acidification) and key fields regridded to a common 1°x1° grid, thus complementing the standard CMIP5 archive. The combined AR4-AR5 output (OCMIP5) has been used to compute standard quantitative metrics, both global and regional, and those have been synthesized with summary diagrams. In addition, for key biogeochemical fields we have deconvolved spatiotemporal components of the mean square error in order to constrain which models go wrong where. Here we will detail results from these evaluations which have exploited gridded climatological data. The archive, interface, and centralized evaluation provide a solid technical foundation, upon which collaboration and communication is being broadened in the ocean biogeochemical modeling community. Ultimately we aim to encourage wider use of the OCMIP5 archive.

  9. Fallon, Nevada FORGE Distinct Element Reservoir Modeling

    DOE Data Explorer

    Blankenship, Doug; Pettitt, Will; Riahi, Azadeh; Hazzard, Jim; Blanksma, Derrick

    2018-03-12

    Archive containing input/output data for distinct element reservoir modeling for Fallon FORGE. Models created using 3DEC, InSite, and in-house Python algorithms (ITASCA). List of archived files follows; please see 'Modeling Metadata.pdf' (included as a resource below) for additional file descriptions. Data sources include regional geochemical model, well positions and geometry, principal stress field, capability for hydraulic fractures, capability for hydro-shearing, reservoir geomechanical model-stimulation into multiple zones, modeled thermal behavior during circulation, and microseismicity.

  10. User's Guide to the Stand Prognosis Model

    Treesearch

    William R. Wykoff; Nicholas L. Crookston; Albert R. Stage

    1982-01-01

    The Stand Prognosis Model is a computer program that projects the development of forest stands in the Northern Rocky Mountains. Thinning options allow for simulation of a variety of management strategies. Input consists of a stand inventory, including sample tree records, and a set of option selection instructions. Output includes data normally found in stand, stock,...

  11. Lagrangian model of nitrogen kinetics in the Chattahoochee River

    USGS Publications Warehouse

    Jobson, H.E.

    1987-01-01

    A Lagrangian reference frame is used to solve the convection-dispersion equation and interpret water-quality data obtained from the Chattahoochee River. The model was calibrated using unsteady concentrations of organic nitrogen, ammonia, and nitrite plus nitrate obtained during June 1977 and verified using data obtained during August 1976. Reaction kinetics of the cascade type are shown to provide a reasonable description of the nitrogen-species processes in the Chattahoochee River. The conceptual model is easy to visualize in the physical sense and the output includes information that is not easily determined from an Eulerian approach, but which is very helpful in model calibration and data interpretation. For example, the model output allows one to determine which data are of most value in model calibration or verification.

  12. Global Sensitivity Analysis of Environmental Systems via Multiple Indices based on Statistical Moments of Model Outputs

    NASA Astrophysics Data System (ADS)

    Guadagnini, A.; Riva, M.; Dell'Oca, A.

    2017-12-01

    We propose to ground sensitivity of uncertain parameters of environmental models on a set of indices based on the main (statistical) moments, i.e., mean, variance, skewness and kurtosis, of the probability density function (pdf) of a target model output. This enables us to perform Global Sensitivity Analysis (GSA) of a model in terms of multiple statistical moments and yields a quantification of the impact of model parameters on features driving the shape of the pdf of model output. Our GSA approach includes the possibility of being coupled with the construction of a reduced complexity model that allows approximating the full model response at a reduced computational cost. We demonstrate our approach through a variety of test cases. These include a commonly used analytical benchmark, a simplified model representing pumping in a coastal aquifer, a laboratory-scale tracer experiment, and the migration of fracturing fluid through a naturally fractured reservoir (source) to reach an overlying formation (target). Our strategy allows discriminating the relative importance of model parameters to the four statistical moments considered. We also provide an appraisal of the error associated with the evaluation of our sensitivity metrics by replacing the original system model through the selected surrogate model. Our results suggest that one might need to construct a surrogate model with increasing level of accuracy depending on the statistical moment considered in the GSA. The methodological framework we propose can assist the development of analysis techniques targeted to model calibration, design of experiment, uncertainty quantification and risk assessment.
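
    A crude way to approximate moment-based sensitivity indices of the kind proposed here (a Monte Carlo sketch under simplifying assumptions, not the authors' formulation or test cases): sample the parameters, bin the samples by each parameter in turn, compute conditional statistical moments of the output within the bins, and measure how strongly each conditional moment departs from its unconditional value.

    ```python
    import numpy as np
    from scipy.stats import skew, kurtosis

    def moment_sensitivity(X, y, n_bins=20):
        """Sensitivity of mean/variance/skewness/kurtosis of y to each column of X.

        Index = average absolute deviation of the conditional moment (given a bin of
        the parameter) from the unconditional moment, normalized by the latter.
        This is an illustrative estimator, not the exact indices of the abstract.
        """
        stats = {"mean": np.mean, "var": np.var, "skew": skew, "kurt": kurtosis}
        uncond = {name: f(y) for name, f in stats.items()}
        out = {name: np.zeros(X.shape[1]) for name in stats}
        for j in range(X.shape[1]):
            edges = np.quantile(X[:, j], np.linspace(0, 1, n_bins + 1))
            idx = np.clip(np.searchsorted(edges, X[:, j], side="right") - 1, 0, n_bins - 1)
            for name, f in stats.items():
                cond = np.array([f(y[idx == b]) for b in range(n_bins) if np.any(idx == b)])
                out[name][j] = np.mean(np.abs(cond - uncond[name])) / (abs(uncond[name]) + 1e-12)
        return out

    rng = np.random.default_rng(2)
    X = rng.uniform(size=(5000, 3))
    y = X[:, 0] ** 2 + 0.1 * X[:, 1] + 0.01 * rng.normal(size=5000)
    for name, vals in moment_sensitivity(X, y).items():
        print(name, np.round(vals, 3))
    ```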

  13. Analysis of inter-country input-output table based on bibliographic coupling network: How industrial sectors on the GVC compete for production resources

    NASA Astrophysics Data System (ADS)

    Guan, Jun; Xu, Xiaoyu; Xing, Lizhi

    2018-03-01

    The input-output table is comprehensive and detailed in describing national economic systems with abundance of economic relationships depicting information of supply and demand among industrial sectors. This paper focuses on how to quantify the degree of competition on the global value chain (GVC) from the perspective of econophysics. Global Industrial Strongest Relevant Network models are established by extracting the strongest and most immediate industrial relevance in the global economic system with inter-country input-output (ICIO) tables and then have them transformed into Global Industrial Resource Competition Network models to analyze the competitive relationships based on bibliographic coupling approach. Three indicators well suited for the weighted and undirected networks with self-loops are introduced here, including unit weight for competitive power, disparity in the weight for competitive amplitude and weighted clustering coefficient for competitive intensity. Finally, these models and indicators were further applied empirically to analyze the function of industrial sectors on the basis of the latest World Input-Output Database (WIOD) in order to reveal inter-sector competitive status during the economic globalization.

  14. Use of Regional Climate Model Output for Hydrologic Simulations

    NASA Astrophysics Data System (ADS)

    Hay, L. E.; Clark, M. P.; Wilby, R. L.; Gutowski, W. J.; Leavesley, G. H.; Pan, Z.; Arritt, R. W.; Takle, E. S.

    2001-12-01

    Daily precipitation and maximum and minimum temperature time series from a Regional Climate Model (RegCM2) were used as input to a distributed hydrologic model for a rainfall-dominated basin (Alapaha River at Statenville, Georgia) and three snowmelt-dominated basins (Animas River at Durango, Colorado; East Fork of the Carson River near Gardnerville, Nevada; and Cle Elum River near Roslyn, Washington). For comparison purposes, spatially averaged daily data sets of precipitation and maximum and minimum temperature were developed from measured data. These datasets included precipitation and temperature data for all stations that are located within the area of the RegCM2 model output used for each basin, but excluded station data used to calibrate the hydrologic model. Both the RegCM2 output and station data capture the gross aspects of the seasonal cycles of precipitation and temperature. However, in all four basins, the RegCM2- and station-based simulations of runoff show little skill on a daily basis (Nash-Sutcliffe (NS) values ranging from 0.05 to 0.37 for RegCM2 and from -0.08 to 0.65 for station). When the precipitation and temperature biases are corrected in the RegCM2 output and station data sets (Bias-RegCM2 and Bias-station, respectively), the accuracy of the daily runoff simulations improves dramatically for the snowmelt-dominated basins. In the rainfall-dominated basin, runoff simulations based on the Bias-RegCM2 output show no skill (NS value of 0.09) whereas Bias-All simulated runoff improves (NS value improved from -0.08 to 0.72). These results indicate that the resolution of the RegCM2 output is appropriate for basin-scale modeling, but RegCM2 model output does not contain the day-to-day variability needed for basin-scale modeling in rainfall-dominated basins. Future work is warranted to identify the causes for systematic biases in RegCM2 simulations, develop methods to remove the biases, and improve RegCM2 simulations of daily variability in local climate.
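
    For reference, the Nash-Sutcliffe (NS) skill score quoted throughout this comparison is straightforward to compute. The helper below is an independent sketch, not the authors' code; the second function shows a very simple mean-based additive/multiplicative bias correction of the general kind applied to build the "Bias-" input sets.

    ```python
    import numpy as np

    def nash_sutcliffe(observed, simulated):
        """NS efficiency: 1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2)."""
        observed, simulated = np.asarray(observed, float), np.asarray(simulated, float)
        return 1.0 - np.sum((observed - simulated) ** 2) / np.sum((observed - observed.mean()) ** 2)

    def bias_correct(model, reference, multiplicative=False):
        """Crude mean-based bias correction (illustrative only).

        multiplicative=True rescales the series (suited to precipitation);
        otherwise an additive shift is applied (suited to temperature).
        """
        if multiplicative:
            return model * (reference.mean() / max(model.mean(), 1e-12))
        return model + (reference.mean() - model.mean())

    obs = np.array([1.0, 2.0, 4.0, 3.0, 2.5])
    sim = np.array([0.8, 2.4, 3.5, 3.2, 2.0])
    print(nash_sutcliffe(obs, sim))
    ```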

  15. Mars Global Reference Atmospheric Model (Mars-GRAM 3.34): Programmer's Guide

    NASA Technical Reports Server (NTRS)

    Justus, C. G.; James, Bonnie F.; Johnson, Dale L.

    1996-01-01

    This is a programmer's guide for the Mars Global Reference Atmospheric Model (Mars-GRAM 3.34). Included are a brief history and review of the model since its origin in 1988 and a technical discussion of recent additions and modifications. Examples of how to run both the interactive and batch (subroutine) forms are presented. Instructions are provided on how to customize output of the model for various parameters of the Mars atmosphere. Detailed descriptions are given of the main driver programs, subroutines, and associated computational methods. Lists and descriptions include input, output, and local variables in the programs. These descriptions give a summary of program steps and 'map' of calling relationships among the subroutines. Definitions are provided for the variables passed between subroutines through common lists. Explanations are provided for all diagnostic and progress messages generated during execution of the program. A brief outline of future plans for Mars-GRAM is also presented.

  16. TWINTN4: A program for transonic four-wall interference assessment in two-dimensional wind tunnels

    NASA Technical Reports Server (NTRS)

    Kemp, W. B., Jr.

    1984-01-01

    A method for assessing the wall interference in transonic two-dimensional wind tunnel tests including the effects of the tunnel sidewall boundary layer was developed and implemented in a computer program named TWINTN4. The method involves three successive solutions of the transonic small disturbance potential equation to define the wind tunnel flow, the equivalent free air flow around the model, and the perturbation attributable to the model. Required input includes pressure distributions on the model and along the top and bottom tunnel walls which are used as boundary conditions for the wind tunnel flow. The wall-induced perturbation field is determined as the difference between the perturbation in the tunnel flow solution and the perturbation attributable to the model. The methodology used in the program is described and detailed descriptions of the computer program input and output are presented. Input and output for a sample case are given.

  17. Dynamic Modeling and Very Short-term Prediction of Wind Power Output Using Box-Cox Transformation

    NASA Astrophysics Data System (ADS)

    Urata, Kengo; Inoue, Masaki; Murayama, Dai; Adachi, Shuichi

    2016-09-01

    We propose a statistical modeling method of wind power output for very short-term prediction. The method uses a nonlinear model with a cascade structure composed of two parts. One is a linear dynamic part that is driven by Gaussian white noise and described by an autoregressive model. The other is a nonlinear static part that is driven by the output of the linear part. This nonlinear part is designed for output distribution matching: we shape the distribution of the model output to match that of the wind power output. The constructed model is utilized for one-step ahead prediction of the wind power output. Furthermore, we study the relation between the prediction accuracy and the prediction horizon.
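
    A minimal sketch of this cascade idea, with invented data and an empirical quantile map standing in for the static nonlinearity (the paper's own transformation is Box-Cox based): Gaussianize the wind power series, fit an AR(1) model on the transformed series, predict one step ahead, and map the prediction back so that the model output distribution matches the observed one.

    ```python
    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(3)
    power = np.clip(rng.gamma(2.0, 0.2, size=1000), 0.0, 1.0)   # invented wind power series

    # Static nonlinearity: empirical quantile mapping between Gaussian and power domains.
    sorted_power = np.sort(power)
    probs = (np.arange(len(power)) + 0.5) / len(power)
    gauss_grid = norm.ppf(probs)

    def to_gaussian(p):
        return np.interp(p, sorted_power, gauss_grid)

    def to_power(z):
        return np.interp(z, gauss_grid, sorted_power)

    # Linear dynamic part: AR(1) fitted by least squares in the Gaussian domain.
    z = to_gaussian(power)
    phi = np.dot(z[:-1], z[1:]) / np.dot(z[:-1], z[:-1])

    # One-step-ahead prediction: propagate in the Gaussian domain, map back to power.
    z_pred = phi * z[:-1]
    p_pred = to_power(z_pred)
    rmse = np.sqrt(np.mean((power[1:] - p_pred) ** 2))
    print("AR(1) coefficient:", round(phi, 3), " one-step RMSE:", round(rmse, 3))
    ```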

  18. An Instructional Approach to Modeling in Microevolution.

    ERIC Educational Resources Information Center

    Thompson, Steven R.

    1988-01-01

    Describes an approach to teaching population genetics and evolution and some of the ways models can be used to enhance understanding of the processes being studied. Discusses the instructional plan, and the use of models including utility programs and analysis with models. Provided are a basic program and sample program outputs. (CW)

  19. Simulated and measured neutron/gamma light output distribution for poly-energetic neutron/gamma sources

    NASA Astrophysics Data System (ADS)

    Hosseini, S. A.; Zangian, M.; Aghabozorgi, S.

    2018-03-01

    In the present paper, the light output distribution due to poly-energetic neutron/gamma (neutron or gamma) source was calculated using the developed MCNPX-ESUT-PE (MCNPX-Energy engineering of Sharif University of Technology-Poly Energetic version) computational code. The simulation of light output distribution includes the modeling of the particle transport, the calculation of scintillation photons induced by charged particles, simulation of the scintillation photon transport and considering the light resolution obtained from the experiment. The developed computational code is able to simulate the light output distribution due to any neutron/gamma source. In the experimental step of the present study, the neutron-gamma discrimination based on the light output distribution was performed using the zero crossing method. As a case study, 241Am-9Be source was considered and the simulated and measured neutron/gamma light output distributions were compared. There is an acceptable agreement between the discriminated neutron/gamma light output distributions obtained from the simulation and experiment.

  20. Mode conversion efficiency to Laguerre-Gaussian OAM modes using spiral phase optics.

    PubMed

    Longman, Andrew; Fedosejevs, Robert

    2017-07-24

    An analytical model for the conversion efficiency from a TEM00 mode to an arbitrary Laguerre-Gaussian (LG) mode with null radial index using spiral phase optics is presented. We extend this model to include the effects of stepped spiral phase optics, spiral phase optics of non-integer topological charge, and the reduction in conversion efficiency due to broad laser bandwidth. We find, through optimization, that an optimal beam waist ratio of the input and output modes exists and depends on the output azimuthal mode number.

  1. User's guide for a large signal computer model of the helical traveling wave tube

    NASA Technical Reports Server (NTRS)

    Palmer, Raymond W.

    1992-01-01

    We describe the use of an extensively revised and operationally simplified version of a successful large-signal, two-dimensional (axisymmetric), deformable disk computer model of the helical traveling wave tube amplifier. We also discuss program input and output and the auxiliary files necessary for operation. Included is a sample problem and its input data and output results. Interested parties may now obtain from the author the FORTRAN source code, auxiliary files, and sample input data on a standard floppy diskette, the contents of which are described herein.

  2. Multiple piezo-patch energy harvesters integrated to a thin plate with AC-DC conversion: analytical modeling and numerical validation

    NASA Astrophysics Data System (ADS)

    Aghakhani, Amirreza; Basdogan, Ipek; Erturk, Alper

    2016-04-01

    Plate-like components are widely used in numerous automotive, marine, and aerospace applications where they can be employed as host structures for vibration based energy harvesting. Piezoelectric patch harvesters can be easily attached to these structures to convert vibrational energy to electrical energy. Power output investigations of these harvesters require accurate models for energy harvesting performance evaluation and optimization. Equivalent circuit modeling of the cantilever-based vibration energy harvesters for estimation of electrical response has been proposed in recent years. However, equivalent circuit formulation and analytical modeling of multiple piezo-patch energy harvesters integrated to thin plates including nonlinear circuits has not been studied. In this study, an equivalent circuit model of multiple parallel piezoelectric patch harvesters together with a resistive load is built in the electronic circuit simulation software SPICE and voltage frequency response functions (FRFs) are validated using the analytical distributed-parameter model. Analytical formulation of the piezoelectric patches in parallel configuration for the DC voltage output is derived while the patches are connected to a standard AC-DC circuit. The analytic model is based on the equivalent load impedance approach for piezoelectric capacitance and AC-DC circuit elements. The analytic results are validated numerically via SPICE simulations. Finally, DC power outputs of the harvesters are computed and compared with the peak power amplitudes in the AC output case.

  3. User guide for MODPATH version 6 - A particle-tracking model for MODFLOW

    USGS Publications Warehouse

    Pollock, David W.

    2012-01-01

    MODPATH is a particle-tracking post-processing model that computes three-dimensional flow paths using output from groundwater flow simulations based on MODFLOW, the U.S. Geological Survey (USGS) finite-difference groundwater flow model. This report documents MODPATH version 6. Previous versions were documented in USGS Open-File Reports 89-381 and 94-464. The program uses a semianalytical particle-tracking scheme that allows an analytical expression of a particle's flow path to be obtained within each finite-difference grid cell. A particle's path is computed by tracking the particle from one cell to the next until it reaches a boundary, an internal sink/source, or satisfies another termination criterion. Data input to MODPATH consists of a combination of MODFLOW input data files, MODFLOW head and flow output files, and other input files specific to MODPATH. Output from MODPATH consists of several output files, including a number of particle coordinate output files intended to serve as input data for other programs that process, analyze, and display the results in various ways. MODPATH is written in FORTRAN and can be compiled by any FORTRAN compiler that fully supports FORTRAN-2003 or by most commercially available FORTRAN-95 compilers that support the major FORTRAN-2003 language extensions.
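
    The semianalytical scheme mentioned above can be illustrated with a compact 2D sketch of Pollock-style within-cell tracking. This is an independent illustration under simplifying assumptions (steady seepage velocities already known at the cell faces, single rectangular cell, no sinks), not MODPATH source code: velocities vary linearly between opposing faces, so the exit face, travel time, and exit position have closed-form expressions.

    ```python
    import math

    def track_through_cell(xp, yp, cell, eps=1e-12):
        """Pollock-style semianalytical tracking of one particle through one 2D cell.

        cell = dict with x0, y0, dx, dy and seepage velocities at the faces:
        vx1 (left), vx2 (right), vy1 (bottom), vy2 (top).
        """
        def axis_exit(p0, d, v1, v2, p):
            A = (v2 - v1) / d                     # linear velocity gradient in the cell
            vp = v1 + A * (p - p0)                # particle velocity on this axis
            if abs(vp) < eps:
                return math.inf, vp, A
            v_exit, p_exit = (v2, p0 + d) if vp > 0.0 else (v1, p0)
            if abs(A) < eps:                      # uniform velocity: linear travel time
                return (p_exit - p) / vp, vp, A
            if v_exit * vp <= 0.0:                # velocity reverses before the face is reached
                return math.inf, vp, A
            return math.log(v_exit / vp) / A, vp, A

        tx, vxp, Ax = axis_exit(cell["x0"], cell["dx"], cell["vx1"], cell["vx2"], xp)
        ty, vyp, Ay = axis_exit(cell["y0"], cell["dy"], cell["vy1"], cell["vy2"], yp)
        dt = min(tx, ty)                          # particle leaves through the nearer face in time

        def advance(p0, p, vp, A):
            if abs(A) < eps:
                return p + vp * dt
            # v1 = vp - A*(p - p0), so x(t) = x0 + (vp*exp(A*t) - v1)/A
            return p0 + (vp * math.exp(A * dt) - (vp - A * (p - p0))) / A

        return advance(cell["x0"], xp, vxp, Ax), advance(cell["y0"], yp, vyp, Ay), dt

    cell = dict(x0=0.0, y0=0.0, dx=100.0, dy=100.0, vx1=1.0, vx2=2.0, vy1=0.5, vy2=0.25)
    print(track_through_cell(10.0, 40.0, cell))   # exit point and travel time within the cell
    ```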

  4. A simulation model for studying the role of pre-slaughter factors on the exposure of beef carcasses to human microbial hazards.

    PubMed

    Jordan, D; McEwen, S A; Lammerding, A M; McNab, W B; Wilson, J B

    1999-06-29

    A Monte Carlo simulation model was constructed for assessing the quantity of microbial hazards deposited on cattle carcasses under different pre-slaughter management regimens. The model permits comparison of industry-wide and abattoir-based mitigation strategies and is suitable for studying pathogens such as Escherichia coli O157:H7 and Salmonella spp. Simulations are based on a hierarchical model structure that mimics important aspects of the cattle population prior to slaughter. Stochastic inputs were included so that uncertainty about important input assumptions (such as prevalence of a human pathogen in the live cattle-population) would be reflected in model output. Control options were built into the model to assess the benefit of having prior knowledge of animal or herd-of-origin pathogen status (obtained from the use of a diagnostic test). Similarly, a facility was included for assessing the benefit of re-ordering the slaughter sequence based on the extent of external faecal contamination. Model outputs were designed to evaluate the performance of an abattoir in a 1-day period and included outcomes such as the proportion of carcasses contaminated with a pathogen, the daily mean and selected percentiles of pathogen counts per carcass, and the position of the first infected animal in the slaughter run. A measure of the time rate of introduction of pathogen into the abattoir was provided by assessing the median, 5th percentile, and 95th percentile cumulative pathogen counts at 10 equidistant points within the slaughter run. Outputs can be graphically displayed as frequency distributions, probability densities, cumulative distributions or x-y plots. The model shows promise as an inexpensive method for evaluating pathogen control strategies such as those forming part of a Hazard Analysis and Critical Control Point (HACCP) system.
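
    A toy version of the kind of hierarchical Monte Carlo simulation described above (all distributions and parameter values are invented for illustration; the published model's structure, inputs and outputs are far richer): sample herd-level prevalence, then animal infection status and carcass pathogen counts, and summarize one slaughter day.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    def simulate_slaughter_day(n_herds=20, animals_per_herd=40):
        """One iteration: proportion of carcasses contaminated, mean log10 count on
        contaminated carcasses, and position of the first infected animal in the run."""
        counts = []
        for _ in range(n_herds):
            herd_prev = rng.beta(1.0, 9.0)                    # assumed between-animal prevalence in a herd
            infected = rng.random(animals_per_herd) < herd_prev
            # Assumed carcass deposition: log10 CFU ~ Normal(2, 1) if infected, none otherwise.
            log10_cfu = np.where(infected, rng.normal(2.0, 1.0, animals_per_herd), -np.inf)
            counts.extend(log10_cfu)
        counts = np.array(counts)
        contaminated = np.isfinite(counts)
        first = int(np.argmax(contaminated)) + 1 if contaminated.any() else None
        mean_log = counts[contaminated].mean() if contaminated.any() else 0.0
        return contaminated.mean(), mean_log, first

    results = [simulate_slaughter_day() for _ in range(1000)]
    prop, mean_log, first = zip(*results)
    print("median proportion contaminated:", np.median(prop))
    print("5th-95th percentile of mean log10 count:", np.percentile(mean_log, [5, 95]))
    ```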

  5. Active Management of Integrated Geothermal-CO2 Storage Reservoirs in Sedimentary Formations

    DOE Data Explorer

    Buscheck, Thomas A.

    2012-01-01

    Active Management of Integrated Geothermal–CO2 Storage Reservoirs in Sedimentary Formations: An Approach to Improve Energy Recovery and Mitigate Risk : FY1 Final Report The purpose of phase 1 is to determine the feasibility of integrating geologic CO2 storage (GCS) with geothermal energy production. Phase 1 includes reservoir analyses to determine injector/producer well schemes that balance the generation of economically useful flow rates at the producers with the need to manage reservoir overpressure to reduce the risks associated with overpressure, such as induced seismicity and CO2 leakage to overlying aquifers. This submittal contains input and output files of the reservoir model analyses. A reservoir-model "index-html" file was sent in a previous submittal to organize the reservoir-model input and output files according to sections of the FY1 Final Report to which they pertain. The recipient should save the file: Reservoir-models-inputs-outputs-index.html in the same directory that the files: Section2.1.*.tar.gz files are saved in.

  6. Active Management of Integrated Geothermal-CO2 Storage Reservoirs in Sedimentary Formations

    DOE Data Explorer

    Buscheck, Thomas A.

    2000-01-01

    Active Management of Integrated Geothermal–CO2 Storage Reservoirs in Sedimentary Formations: An Approach to Improve Energy Recovery and Mitigate Risk: FY1 Final Report The purpose of phase 1 is to determine the feasibility of integrating geologic CO2 storage (GCS) with geothermal energy production. Phase 1 includes reservoir analyses to determine injector/producer well schemes that balance the generation of economically useful flow rates at the producers with the need to manage reservoir overpressure to reduce the risks associated with overpressure, such as induced seismicity and CO2 leakage to overlying aquifers. This submittal contains input and output files of the reservoir model analyses. A reservoir-model "index-html" file was sent in a previous submittal to organize the reservoir-model input and output files according to sections of the FY1 Final Report to which they pertain. The recipient should save the file: Reservoir-models-inputs-outputs-index.html in the same directory that the files: Section2.1.*.tar.gz files are saved in.

  7. Community models for wildlife impact assessment: a review of concepts and approaches

    USGS Publications Warehouse

    Schroeder, Richard L.

    1987-01-01

    The first two sections of this paper are concerned with defining and bounding communities, and describing those attributes of the community that are quantifiable and suitable for wildlife impact assessment purposes. Prior to the development or use of a community model, it is important to have a clear understanding of the concept of a community and a knowledge of the types of community attributes that can serve as outputs for the development of models. Clearly defined, unambiguous model outputs are essential for three reasons: (1) to ensure that the measured community attributes relate to the wildlife resource objectives of the study; (2) to allow testing of the outputs in experimental studies, to determine accuracy, and to allow for improvements based on such testing; and (3) to enable others to clearly understand the community attribute that has been measured. The third section of this paper describes input variables that may be used to predict various community attributes. These input variables do not include direct measures of wildlife populations. Most impact assessments involve projects that result in drastic changes in habitat, such as changes in land use, vegetation, or available area. Therefore, the model input variables described in this section deal primarily with habitat-related features. Several existing community models are described in the fourth section of this paper. A general description of each model is provided, including the nature of the input variables and the model output. The logic and assumptions of each model are discussed, along with data requirements needed to use the model. The fifth section provides guidance on the selection and development of community models. Identification of the community attribute that is of concern will determine the type of model most suitable for a particular application. This section provides guidelines on selecting an existing model, as well as a discussion of the major steps to be followed in modifying an existing model or developing a new model. Considerations associated with the use of community models with the Habitat Evaluation Procedures are also discussed. The final section of the paper summarizes major findings of interest to field biologists and provides recommendations concerning the implementation of selected concepts in wildlife community analyses.

  8. Automotive Gas Turbine Power System-Performance Analysis Code

    NASA Technical Reports Server (NTRS)

    Juhasz, Albert J.

    1997-01-01

    An open cycle gas turbine numerical modelling code suitable for thermodynamic performance analysis (i.e. thermal efficiency, specific fuel consumption, cycle state points, working fluid flowrates etc.) of automotive and aircraft powerplant applications has been generated at the NASA Lewis Research Center's Power Technology Division. The use of this code can be made available to automotive gas turbine preliminary design efforts, either in its present version, or, assuming that resources can be obtained to incorporate empirical models for component weight and packaging volume, in a later version that includes the weight-volume estimator feature. The paper contains a brief discussion of the capabilities of the presently operational version of the code, including a listing of input and output parameters and actual sample output listings.
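
    As a simplified illustration of the kind of cycle-state-point bookkeeping such a code performs, the sketch below evaluates an idealized air-standard Brayton cycle with constant specific heats and assumed component efficiencies; it is not the NASA Lewis code, and all parameter values are illustrative.

    ```python
    def brayton_performance(pressure_ratio, t_inlet, t_turbine_inlet,
                            eta_comp=0.85, eta_turb=0.88,
                            cp=1005.0, gamma=1.4, fuel_lhv=43e6):
        """Thermal efficiency, specific fuel consumption and state-point temperatures
        for a simple open gas turbine cycle (air-standard, constant cp)."""
        # Compressor: isentropic temperature rise corrected by adiabatic efficiency.
        t2s = t_inlet * pressure_ratio ** ((gamma - 1.0) / gamma)
        t2 = t_inlet + (t2s - t_inlet) / eta_comp
        # Combustor heat addition per kg of air.
        q_in = cp * (t_turbine_inlet - t2)
        # Turbine: isentropic temperature drop corrected by efficiency (expansion to ambient).
        t4s = t_turbine_inlet / pressure_ratio ** ((gamma - 1.0) / gamma)
        t4 = t_turbine_inlet - eta_turb * (t_turbine_inlet - t4s)
        w_net = cp * ((t_turbine_inlet - t4) - (t2 - t_inlet))
        eta_th = w_net / q_in
        sfc_kg_per_kwh = q_in / fuel_lhv / w_net * 3.6e6   # fuel mass per kWh of net work
        return eta_th, sfc_kg_per_kwh, (t_inlet, t2, t_turbine_inlet, t4)

    eta, sfc, states = brayton_performance(pressure_ratio=4.0, t_inlet=288.0, t_turbine_inlet=1300.0)
    print(f"thermal efficiency: {eta:.3f}  SFC: {sfc:.3f} kg/kWh  state temps (K): {states}")
    ```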

  9. General Circulation Model Output for Forest Climate Change Research and Applications

    Treesearch

    Ellen J. Cooter; Brian K. Eder; Sharon K. LeDuc; Lawrence Truppi

    1993-01-01

    This report reviews technical aspects of and summarizes output from four climate models. Recommendations concerning the use of these outputs in forest impact assessments are made.

  10. An Advanced simulation Code for Modeling Inductive Output Tubes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thuc Bui; R. Lawrence Ives

    2012-04-27

    During the Phase I program, CCR completed several major building blocks for a 3D large signal, inductive output tube (IOT) code using modern computer language and programming techniques. These included a 3D, Helmholtz, time-harmonic, field solver with a fully functional graphical user interface (GUI), automeshing and adaptivity. Other building blocks included the improved electrostatic Poisson solver with temporal boundary conditions to provide temporal fields for the time-stepping particle pusher as well as the self electric field caused by time-varying space charge. The magnetostatic field solver was also updated to solve for the self magnetic field caused by time changing current density in the output cavity gap. The goal function to optimize an IOT cavity was also formulated, and the optimization methodologies were investigated.

  11. Role of intraglomerular circuits in shaping temporally structured responses to naturalistic inhalation-driven sensory input to the olfactory bulb

    PubMed Central

    Carey, Ryan M.; Sherwood, William Erik; Shipley, Michael T.; Borisyuk, Alla

    2015-01-01

    Olfaction in mammals is a dynamic process driven by the inhalation of air through the nasal cavity. Inhalation determines the temporal structure of sensory neuron responses and shapes the neural dynamics underlying central olfactory processing. Inhalation-linked bursts of activity among olfactory bulb (OB) output neurons [mitral/tufted cells (MCs)] are temporally transformed relative to those of sensory neurons. We investigated how OB circuits shape inhalation-driven dynamics in MCs using a modeling approach that was highly constrained by experimental results. First, we constructed models of canonical OB circuits that included mono- and disynaptic feedforward excitation, recurrent inhibition and feedforward inhibition of the MC. We then used experimental data to drive inputs to the models and to tune parameters; inputs were derived from sensory neuron responses during natural odorant sampling (sniffing) in awake rats, and model output was compared with recordings of MC responses to odorants sampled with the same sniff waveforms. This approach allowed us to identify OB circuit features underlying the temporal transformation of sensory inputs into inhalation-linked patterns of MC spike output. We found that realistic input-output transformations can be achieved independently by multiple circuits, including feedforward inhibition with slow onset and decay kinetics and parallel feedforward MC excitation mediated by external tufted cells. We also found that recurrent and feedforward inhibition had differential impacts on MC firing rates and on inhalation-linked response dynamics. These results highlight the importance of investigating neural circuits in a naturalistic context and provide a framework for further explorations of signal processing by OB networks. PMID:25717156

  12. Finite Element Modeling of Passive Material Influence on the Deformation and Force Output of Skeletal Muscle

    PubMed Central

    Hodgson, John A.; Chi, Sheng-Wei; Yang, Judy P.; Chen, Jiun-Shyan; Edgerton, V. Reggie; Sinha, Shantanu

    2014-01-01

    The pattern of deformation of the different structural components of a muscle-tendon complex when it is activated provides important information about the internal mechanics of the muscle. Recent experimental observations of deformations in contracting muscle have presented inconsistencies with current widely held assumptions about muscle behavior. These include negative strain in aponeuroses, non-uniform strain changes in sarcomeres, even within individual muscle fibers, and evidence that muscle fiber cross-sectional deformations are asymmetrical, suggesting a need to readjust current models of contracting muscle. We report here our use of finite element modeling techniques to simulate a simple muscle-tendon complex and investigate the influence of passive intramuscular material properties upon the deformation patterns under isometric and shortening conditions. While phenomenological force-displacement relationships described the muscle fiber properties, the material properties of the passive matrix were varied to simulate a hydrostatic model, compliant and stiff isotropically hyperelastic models and an anisotropic elastic model. The numerical results demonstrate that passive elastic material properties significantly influence the magnitude, heterogeneity and distribution pattern of many measures of deformation in a contracting muscle. Measures included aponeurosis strain, aponeurosis separation, muscle fiber strain and fiber cross-sectional deformation. The force output of our simulations was strongly influenced by passive material properties, changing by as much as ~80% under some conditions. Maximum output was accomplished by introducing anisotropy along axes which were not strained significantly during a muscle length change, suggesting that correct costamere orientation may be a critical factor in optimal muscle function. Such a model not only fits known physiological data, but also maintains the relatively constant aponeurosis separation observed during in vivo muscle contractions and is easily extrapolated from our plane-strain conditions into a 3-dimensional structure. Such modeling approaches have the potential of explaining the reduction of force output consequent to changes in material properties of intramuscular materials arising in the diseased state such as in genetic disorders. PMID:22498294

  13. Finite element modeling of passive material influence on the deformation and force output of skeletal muscle.

    PubMed

    Hodgson, John A; Chi, Sheng-Wei; Yang, Judy P; Chen, Jiun-Shyan; Edgerton, Victor R; Sinha, Shantanu

    2012-05-01

    The pattern of deformation of different structural components of a muscle-tendon complex when it is activated provides important information about the internal mechanics of the muscle. Recent experimental observations of deformations in contracting muscle have presented inconsistencies with current widely held assumptions about muscle behavior. These include negative strain in aponeuroses, non-uniform strain changes in sarcomeres, even within individual muscle fibers, and evidence that muscle fiber cross-sectional deformations are asymmetrical, suggesting a need to readjust current models of contracting muscle. We report here our use of finite element modeling techniques to simulate a simple muscle-tendon complex and investigate the influence of passive intramuscular material properties upon the deformation patterns under isometric and shortening conditions. While phenomenological force-displacement relationships described the muscle fiber properties, the material properties of the passive matrix were varied to simulate a hydrostatic model, compliant and stiff isotropically hyperelastic models and an anisotropic elastic model. The numerical results demonstrate that passive elastic material properties significantly influence the magnitude, heterogeneity and distribution pattern of many measures of deformation in a contracting muscle. Measures included aponeurosis strain, aponeurosis separation, muscle fiber strain and fiber cross-sectional deformation. The force output of our simulations was strongly influenced by passive material properties, changing by as much as ~80% under some conditions. The maximum output was accomplished by introducing anisotropy along axes which were not strained significantly during a muscle length change, suggesting that correct costamere orientation may be a critical factor in optimal muscle function. Such a model not only fits known physiological data, but also maintains the relatively constant aponeurosis separation observed during in vivo muscle contractions and is easily extrapolated from our plane-strain conditions into a three-dimensional structure. Such modeling approaches have the potential of explaining the reduction of force output consequent to changes in material properties of intramuscular materials arising in the diseased state such as in genetic disorders. Copyright © 2012 Elsevier Ltd. All rights reserved.

  14. Determining and Communicating the Value of the Special Library.

    ERIC Educational Resources Information Center

    Matthews, Joseph R.

    2003-01-01

    Discusses performance measures for libraries that will indicate the goodness of the library and its services. Highlights include a general evaluation model that includes input, process, output, and outcome measures; balanced scorecard approach that includes financial perspectives; focusing on strategy; strategies for change; user criteria for…

  15. Applications of active adaptive noise control to jet engines

    NASA Technical Reports Server (NTRS)

    Shoureshi, Rahmat; Brackney, Larry

    1993-01-01

    During phase 2 research on the application of active noise control to jet engines, the development of multiple-input/multiple-output (MIMO) active adaptive noise control algorithms and acoustic/controls models for turbofan engines was considered. Specific goals for this research phase included: (1) implementation of a MIMO adaptive minimum variance active noise controller; and (2) turbofan engine model development. A minimum variance control law for adaptive active noise control has been developed, simulated, and implemented for single-input/single-output (SISO) systems. Since acoustic systems tend to be distributed, multiple sensors and actuators are more appropriate. As such, the SISO minimum variance controller was extended to the MIMO case. Simulation and experimental results are presented. A state-space model of a simplified gas turbine engine is developed using the bond graph technique. The model retains important system behavior, yet is of low enough order to be useful for controller design. Expansion of the model to include multiple stages and spools is also discussed.

  16. Investigation of Analog Photonic Link Technology for Timing and Metrological Applications

    DTIC Science & Technology

    2015-05-18

    same model bias tee in each case. Fig. 1.8: Measured residual single-sideband (SSB) phase noise for two amplifiers with various RF pads at...deflection at the AO output. The deflected signal is reflected onto a tilted diffraction grating and passed back through the device to the output...Other TTD modulation mechanisms have been considered including fiber stretchers (mechanical and piezoelectric), electro-optic modulators (i.e

  17. Space-Time Fusion Under Error in Computer Model Output: An Application to Modeling Air Quality

    EPA Science Inventory

    In the last two decades a considerable amount of research effort has been devoted to modeling air quality with public health objectives. These objectives include regulatory activities such as setting standards along with assessing the relationship between exposure to air pollutan...

  18. NONROAD2005 Training Presentation Slides, May 2006 15th International Emission Inventory Conference (New Orleans)

    EPA Pesticide Factsheets

    Learn about the 2005 update to the NONROAD emissions inventory model and its features and outputs, including hands-on exercises. Keep in mind that the most current model, approved for use in SIPs, is MOVES2014a, which absorbed the latest NONROAD model.

  19. Solid rocket booster performance evaluation model. Volume 1: Engineering description

    NASA Technical Reports Server (NTRS)

    1974-01-01

    The space shuttle solid rocket booster performance evaluation model (SRB-II) is made up of analytical and functional simulation techniques linked together so that a single pass through the model will predict the performance of the propulsion elements of a space shuttle solid rocket booster. The available options allow the user to predict static test performance, predict nominal and off nominal flight performance, and reconstruct actual flight and static test performance. Options selected by the user are dependent on the data available. These can include data derived from theoretical analysis, small scale motor test data, large motor test data and motor configuration data. The user has several options for output format that include print, cards, tape and plots. Output includes all major performance parameters (Isp, thrust, flowrate, mass accounting and operating pressures) as a function of time as well as calculated single point performance data. The engineering description of SRB-II discusses the engineering and programming fundamentals used, the function of each module, and the limitations of each module.

  20. Modifications of the U.S. Geological Survey modular, finite-difference, ground-water flow model to read and write geographic information system files

    USGS Publications Warehouse

    Orzol, Leonard L.; McGrath, Timothy S.

    1992-01-01

    This report documents modifications to the U.S. Geological Survey modular, three-dimensional, finite-difference, ground-water flow model, commonly called MODFLOW, so that it can read and write files used by a geographic information system (GIS). The modified model program is called MODFLOWARC. Simulation programs such as MODFLOW generally require large amounts of input data and produce large amounts of output data. Viewing data graphically, generating head contours, and creating or editing model data arrays such as hydraulic conductivity are examples of tasks that currently are performed either by the use of independent software packages or by tedious manual editing, manipulating, and transferring of data. GIS programs are commonly used to facilitate preparation of the model input data and to analyze model output data; however, auxiliary programs are frequently required to translate data between programs. Data translations are required when different programs use different data formats. Thus, the user might use GIS techniques to create model input data, run a translation program to convert input data into a format compatible with the ground-water flow model, run the model, run a translation program to convert the model output into the correct format for GIS, and use GIS to display and analyze this output. MODFLOWARC avoids the two translation steps and transfers data directly to and from the ground-water flow model. This report documents the design and use of MODFLOWARC and includes instructions for data input/output of the Basic, Block-centered flow, River, Recharge, Well, Drain, Evapotranspiration, General-head boundary, and Streamflow-routing packages. The modification to MODFLOW and the Streamflow-Routing package was minimized. Flow charts and computer-program code describe the modifications to the original computer codes for each of these packages. Appendix A contains a discussion on the operation of MODFLOWARC using a sample problem.

  1. A distributed parameter model of transmission line transformer for high voltage nanosecond pulse generation

    NASA Astrophysics Data System (ADS)

    Li, Jiangtao; Zhao, Zheng; Li, Longjie; He, Jiaxin; Li, Chenjie; Wang, Yifeng; Su, Can

    2017-09-01

    A transmission line transformer has potential advantages for nanosecond pulse generation including excellent frequency response and no leakage inductance. The wave propagation process in a secondary mode line is indispensable due to an obvious inside transient electromagnetic transition in this scenario. The equivalent model of the transmission line transformer is crucial for predicting the output waveform and evaluating the effects of magnetic cores on output performance. However, traditional lumped parameter models are not sufficient for nanosecond pulse generation due to the natural neglect of wave propagations in secondary mode lines based on a lumped parameter assumption. In this paper, a distributed parameter model of transmission line transformer was established to investigate wave propagation in the secondary mode line and its influential factors through theoretical analysis and experimental verification. The wave propagation discontinuity in the secondary mode line induced by magnetic cores is emphasized. Characteristics of the magnetic core under a nanosecond pulse were obtained by experiments. Distribution and formation of the secondary mode current were determined for revealing essential wave propagation processes in secondary mode lines. The output waveform and efficiency were found to be affected dramatically by wave propagation discontinuity in secondary mode lines induced by magnetic cores. The proposed distributed parameter model was proved more suitable for nanosecond pulse generation in aspects of secondary mode current, output efficiency, and output waveform. In-depth comprehension of underlying mechanisms and a broader view of the working principle of the transmission line transformer for nanosecond pulse generation can be obtained through this research.
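    For readers unfamiliar with distributed parameter line models, the lossless telegrapher's equations below are the standard starting point for describing wave propagation on a line with per-unit-length inductance L' and capacitance C'; they are given here only as background and omit the magnetic-core effects that this paper's model adds.

```latex
% Lossless telegrapher's equations; L' and C' are per-unit-length inductance and capacitance.
\frac{\partial V(x,t)}{\partial x} = -L'\,\frac{\partial I(x,t)}{\partial t}, \qquad
\frac{\partial I(x,t)}{\partial x} = -C'\,\frac{\partial V(x,t)}{\partial t}, \qquad
Z_0 = \sqrt{\frac{L'}{C'}}, \qquad v_p = \frac{1}{\sqrt{L'C'}}
```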

  2. A distributed parameter model of transmission line transformer for high voltage nanosecond pulse generation.

    PubMed

    Li, Jiangtao; Zhao, Zheng; Li, Longjie; He, Jiaxin; Li, Chenjie; Wang, Yifeng; Su, Can

    2017-09-01

    A transmission line transformer has potential advantages for nanosecond pulse generation including excellent frequency response and no leakage inductance. The wave propagation process in a secondary mode line is indispensable due to an obvious inside transient electromagnetic transition in this scenario. The equivalent model of the transmission line transformer is crucial for predicting the output waveform and evaluating the effects of magnetic cores on output performance. However, traditional lumped parameter models are not sufficient for nanosecond pulse generation due to the natural neglect of wave propagations in secondary mode lines based on a lumped parameter assumption. In this paper, a distributed parameter model of transmission line transformer was established to investigate wave propagation in the secondary mode line and its influential factors through theoretical analysis and experimental verification. The wave propagation discontinuity in the secondary mode line induced by magnetic cores is emphasized. Characteristics of the magnetic core under a nanosecond pulse were obtained by experiments. Distribution and formation of the secondary mode current were determined for revealing essential wave propagation processes in secondary mode lines. The output waveform and efficiency were found to be affected dramatically by wave propagation discontinuity in secondary mode lines induced by magnetic cores. The proposed distributed parameter model was proved more suitable for nanosecond pulse generation in aspects of secondary mode current, output efficiency, and output waveform. In-depth comprehension of underlying mechanisms and a broader view of the working principle of the transmission line transformer for nanosecond pulse generation can be obtained through this research.

  3. Rice production model based on the concept of ecological footprint

    NASA Astrophysics Data System (ADS)

    Faiz, S. A.; Wicaksono, A. D.; Dinanti, D.

    2017-06-01

    According to the Region Spatial Planning (RTRW) of Malang Regency for the period 2010-2030, Malang Regency is designated as the center of agricultural development, including the districts bordering Malang City. To protect the region's role as a provider of rice production, the sustainable food farming-land policy (LP2B) was introduced, whose implementation aims to protect rice-land. In practice, the LP2B system was not fully implemented, leaving only a limited extent of rice-land to deliver rice production output. One cause is the development of settlements and industries under the influence of Malang City, which converted land functions. The research focused on 30 villages that directly border Malang City. A model was developed relating farming production output to ecological footprint variables: rice-land area (X1), built-land percentage (X2), and number of farmers (X3). The analysis technique was regression, which yielded the rice production model Y=-207,983 + 10.246X1, with rice-land area (X1) as the most influential independent variable. Of the villages directly bordering Malang City, 11 villages showed higher production potential because their rice production yield was more than 1,000 tons/year, while 12 villages were threatened with low production output because their rice production yield attained only 500 tons/year. Based on the model and the spatial direction of the RTRW, the farming development policy should be redesigned to maintain rice-land area in regions where agricultural activity is still dominant: because rice-land area is the most influential factor in farming production, the larger the rice-land area, the higher the rice production output in each village.
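    A regression of this form (production output against rice-land area) can be reproduced with ordinary least squares. The sketch below uses made-up village data purely for illustration; it is not the study's Malang Regency dataset.

```python
# Minimal sketch of the kind of regression used in the study (ordinary least
# squares of rice production on rice-land area). The data below are hypothetical.
import numpy as np

rice_land_ha = np.array([45.0, 80.0, 120.0, 60.0, 150.0, 95.0])        # X1 (hypothetical)
production_t = np.array([300.0, 620.0, 1050.0, 410.0, 1330.0, 760.0])  # Y (hypothetical)

# Design matrix with an intercept column; solve Y = b0 + b1 * X1.
A = np.column_stack([np.ones_like(rice_land_ha), rice_land_ha])
(b0, b1), *_ = np.linalg.lstsq(A, production_t, rcond=None)
print(f"fitted model: Y = {b0:.1f} + {b1:.3f} * X1")
```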

  4. Equivalent Sensor Radiance Generation and Remote Sensing from Model Parameters. Part 1; Equivalent Sensor Radiance Formulation

    NASA Technical Reports Server (NTRS)

    Wind, Galina; DaSilva, Arlindo M.; Norris, Peter M.; Platnick, Steven E.

    2013-01-01

    In this paper we describe a general procedure for calculating equivalent sensor radiances from variables output from a global atmospheric forecast model. In order to take proper account of the discrepancies between model resolution and sensor footprint, the algorithm takes explicit account of the model subgrid variability, in particular its description of the probability density function of total water (vapor and cloud condensate). The equivalent sensor radiances are then substituted into an operational remote sensing algorithm processing chain to produce a variety of remote sensing products that would normally be produced from actual sensor output. This output can then be used for a wide variety of purposes such as model parameter verification, remote sensing algorithm validation, testing of new retrieval methods and future sensor studies. We show a specific implementation using the GEOS-5 model, the MODIS instrument and the MODIS Adaptive Processing System (MODAPS) Data Collection 5.1 operational remote sensing cloud algorithm processing chain (including the cloud mask, cloud top properties and cloud optical and microphysical properties products). We focus on clouds and cloud/aerosol interactions, because they are very important to model development and improvement.

  5. Multi-sensor Cloud Retrieval Simulator and Remote Sensing from Model Parameters . Pt. 1; Synthetic Sensor Radiance Formulation; [Synthetic Sensor Radiance Formulation

    NASA Technical Reports Server (NTRS)

    Wind, G.; DaSilva, A. M.; Norris, P. M.; Platnick, S.

    2013-01-01

    In this paper we describe a general procedure for calculating synthetic sensor radiances from variables output from a global atmospheric forecast model. In order to take proper account of the discrepancies between model resolution and sensor footprint, the algorithm takes explicit account of the model subgrid variability, in particular its description of the probability density function of total water (vapor and cloud condensate). The simulated sensor radiances are then substituted into an operational remote sensing algorithm processing chain to produce a variety of remote sensing products that would normally be produced from actual sensor output. This output can then be used for a wide variety of purposes such as model parameter verification, remote sensing algorithm validation, testing of new retrieval methods and future sensor studies. We show a specific implementation using the GEOS-5 model, the MODIS instrument and the MODIS Adaptive Processing System (MODAPS) Data Collection 5.1 operational remote sensing cloud algorithm processing chain (including the cloud mask, cloud top properties and cloud optical and microphysical properties products). We focus on clouds because they are very important to model development and improvement.

  6. Thermal and optical performance of encapsulation systems for flat-plate photovoltaic modules

    NASA Technical Reports Server (NTRS)

    Minning, C. P.; Coakley, J. F.; Perrygo, C. M.; Garcia, A., III; Cuddihy, E. F.

    1981-01-01

    The electrical power output from a photovoltaic module is strongly influenced by the thermal and optical characteristics of the module encapsulation system. Described are the methodology and computer model for performing fast and accurate thermal and optical evaluations of different encapsulation systems. The computer model is used to evaluate cell temperature, solar energy transmittance through the encapsulation system, and electric power output for operation in a terrestrial environment. Extensive results are presented for both superstrate-module and substrate-module design schemes which include different types of silicon cell materials, pottants, and antireflection coatings.
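    As background to the kind of evaluation described above, the sketch below uses a textbook NOCT cell-temperature estimate and a linear power temperature coefficient, with an optical transmittance factor standing in for encapsulation losses. It is not the paper's computer model, and all module parameters are assumed values.

```python
# Textbook-style approximation (not the paper's model) of how cell temperature
# and encapsulation optical transmittance reduce module power output.
def module_power(irradiance_wm2, t_ambient_c, p_stc_w=250.0,
                 gamma_per_c=-0.004, noct_c=45.0, transmittance=0.92):
    """Estimate DC power (W) for a module under the given conditions.

    Uses the NOCT cell-temperature estimate and a linear temperature
    coefficient; transmittance lumps encapsulation optical losses.
    """
    t_cell = t_ambient_c + (noct_c - 20.0) / 800.0 * irradiance_wm2
    return (p_stc_w * transmittance * irradiance_wm2 / 1000.0
            * (1.0 + gamma_per_c * (t_cell - 25.0)))

print(f"{module_power(800.0, 30.0):.1f} W")  # example: 800 W/m2, 30 C ambient
```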

  7. Real-time simulation of an F110/STOVL turbofan engine

    NASA Technical Reports Server (NTRS)

    Drummond, Colin K.; Ouzts, Peter J.

    1989-01-01

    A traditional F110-type turbofan engine model was extended to include a ventral nozzle and two thrust-augmenting ejectors for Short Take-Off Vertical Landing (STOVL) aircraft applications. Development of the real-time F110/STOVL simulation required special attention to the modeling approach to component performance maps, the low pressure turbine exit mixing region, and the tailpipe dynamic approximation. Simulation validation derives from comparing output from the ADSIM simulation with the output of a validated F110/STOVL General Electric Aircraft Engines FORTRAN deck. General Electric substantiated basic engine component characteristics through factory testing and full scale ejector data.

  8. A space transportation system operations model

    NASA Technical Reports Server (NTRS)

    Morris, W. Douglas; White, Nancy H.

    1987-01-01

    Presented is a description of a computer program which permits assessment of the operational support requirements of space transportation systems functioning in both a ground- and space-based environment. The scenario depicted provides for the delivery of payloads from Earth to a space station and beyond using upper stages based at the station. Model results are scenario dependent and rely on the input definitions of delivery requirements, task times, and available resources. Output is in terms of flight rate capabilities, resource requirements, and facility utilization. A general program description, program listing, input requirements, and sample output are included.

  9. Model-Free Primitive-Based Iterative Learning Control Approach to Trajectory Tracking of MIMO Systems With Experimental Validation.

    PubMed

    Radac, Mircea-Bogdan; Precup, Radu-Emil; Petriu, Emil M

    2015-11-01

    This paper proposes a novel model-free trajectory tracking of multiple-input multiple-output (MIMO) systems by the combination of iterative learning control (ILC) and primitives. The optimal trajectory tracking solution is obtained in terms of previously learned solutions to simple tasks called primitives. The library of primitives that are stored in memory consists of pairs of reference input/controlled output signals. The reference input primitives are optimized in a model-free ILC framework without using knowledge of the controlled process. The guaranteed convergence of the learning scheme is built upon a model-free virtual reference feedback tuning design of the feedback decoupling controller. Each new complex trajectory to be tracked is decomposed into the output primitives regarded as basis functions. The optimal reference input for the control system to track the desired trajectory is next recomposed from the reference input primitives. This is advantageous because the optimal reference input is computed straightforward without the need to learn from repeated executions of the tracking task. In addition, the optimization problem specific to trajectory tracking of square MIMO systems is decomposed in a set of optimization problems assigned to each separate single-input single-output control channel that ensures a convenient model-free decoupling. The new model-free primitive-based ILC approach is capable of planning, reasoning, and learning. A case study dealing with the model-free control tuning for a nonlinear aerodynamic system is included to validate the new approach. The experimental results are given.

  10. A mathematical model for Vertical Attitude Takeoff and Landing (VATOL) aircraft simulation. Volume 2: Model equations and base aircraft data

    NASA Technical Reports Server (NTRS)

    Fortenbaugh, R. L.

    1980-01-01

    Equations incorporated in a VATOL six degree of freedom off-line digital simulation program and data for the Vought SF-121 VATOL aircraft concept which served as the baseline for the development of this program are presented. The equations and data are intended to facilitate the development of a piloted VATOL simulation. The equation presentation format is to state the equations which define a particular model segment. Listings of constants required to quantify the model segment, input variables required to exercise the model segment, and output variables required by other model segments are included. In several instances a series of input or output variables are followed by a section number in parentheses which identifies the model segment of origination or termination of those variables.

  11. Method and system to estimate variables in an integrated gasification combined cycle (IGCC) plant

    DOEpatents

    Kumar, Aditya; Shi, Ruijie; Dokucu, Mustafa

    2013-09-17

    System and method to estimate variables in an integrated gasification combined cycle (IGCC) plant are provided. The system includes a sensor suite to measure respective plant input and output variables. An extended Kalman filter (EKF) receives sensed plant input variables and includes a dynamic model to generate a plurality of plant state estimates and a covariance matrix for the state estimates. A preemptive-constraining processor is configured to preemptively constrain the state estimates and covariance matrix to be free of constraint violations. A measurement-correction processor may be configured to correct constrained state estimates and a constrained covariance matrix based on processing of sensed plant output variables. The measurement-correction processor is coupled to update the dynamic model with corrected state estimates and a corrected covariance matrix. The updated dynamic model may be configured to estimate values for at least one plant variable not originally sensed by the sensor suite.
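    The sketch below illustrates the general predict/constrain/correct pattern of an extended Kalman filter, with a simple bound-clipping step standing in for the patent's preemptive-constraining processor. The functions, Jacobians, and bounds are hypothetical placeholders, not the IGCC plant models referenced in the patent.

```python
# Simplified EKF step with a crude "constrain then correct" stage, loosely in
# the spirit of the abstract. All models and matrices are hypothetical.
import numpy as np

def ekf_step(x, P, u, z, f, h, F, H, Q, R, x_min, x_max):
    """One predict/constrain/correct cycle.
    x, P   : prior state estimate and covariance
    u, z   : control input and measurement
    f, h   : state-transition and measurement functions
    F, H   : their Jacobians evaluated at the current estimate
    Q, R   : process and measurement noise covariances
    """
    # Predict
    x_pred = f(x, u)
    P_pred = F @ P @ F.T + Q
    # Preemptively constrain the state estimate (simple clipping stand-in)
    x_pred = np.clip(x_pred, x_min, x_max)
    # Measurement correction
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (z - h(x_pred))
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new
```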

  12. A Scalable Cloud Library Empowering Big Data Management, Diagnosis, and Visualization of Cloud-Resolving Models

    NASA Astrophysics Data System (ADS)

    Zhou, S.; Tao, W. K.; Li, X.; Matsui, T.; Sun, X. H.; Yang, X.

    2015-12-01

    A cloud-resolving model (CRM) is an atmospheric numerical model that can numerically resolve clouds and cloud systems at 0.25-5 km horizontal grid spacings. The main advantage of the CRM is that it can allow explicit interactive processes between microphysics, radiation, turbulence, surface, and aerosols without subgrid cloud fraction, overlapping and convective parameterization. Because of their fine resolution and complex physical processes, it is challenging for the CRM community to i) visualize/inter-compare CRM simulations, ii) diagnose key processes for cloud-precipitation formation and intensity, and iii) evaluate against NASA's field campaign data and L1/L2 satellite data products due to large data volume (~10TB) and complexity of CRM's physical processes. We have been building the Super Cloud Library (SCL) upon a Hadoop framework, capable of CRM database management, distribution, visualization, subsetting, and evaluation in a scalable way. The current SCL capability includes (1) an SCL data model that enables various CRM simulation outputs in NetCDF, including those from the NASA-Unified Weather Research and Forecasting (NU-WRF) and Goddard Cumulus Ensemble (GCE) models, to be accessed and processed by Hadoop; (2) a parallel NetCDF-to-CSV converter that supports NU-WRF and GCE model outputs; (3) a technique that visualizes Hadoop-resident data with IDL; (4) a technique that subsets Hadoop-resident data, compliant with the SCL data model, with HIVE or Impala via HUE's Web interface; and (5) a prototype that enables a Hadoop MapReduce application to dynamically access and process data residing in a parallel file system (PVFS2 or CephFS), where high performance computing (HPC) simulation outputs such as NU-WRF's and GCE's are located. We are testing Apache Spark to speed up SCL data processing and analysis. With the SCL capabilities, SCL users can conduct large-domain on-demand tasks without downloading voluminous CRM datasets and various observations from NASA Field Campaigns and Satellite data to a local computer, and inter-compare CRM output and data with GCE and NU-WRF.

  13. Analysis of inter-country input-output table based on citation network: How to measure the competition and collaboration between industrial sectors on the global value chain

    PubMed Central

    2017-01-01

    The input-output table is comprehensive and detailed in describing the national economic system with complex economic relationships, which embodies information of supply and demand among industrial sectors. This paper aims to scale the degree of competition/collaboration on the global value chain from the perspective of econophysics. Global Industrial Strongest Relevant Network models were established by extracting the strongest and most immediate industrial relevance in the global economic system with inter-country input-output tables and then transformed into Global Industrial Resource Competition Network/Global Industrial Production Collaboration Network models embodying the competitive/collaborative relationships based on bibliographic coupling/co-citation approach. Three indicators well suited for these two kinds of weighted and non-directed networks with self-loops were introduced, including unit weight for competitive/collaborative power, disparity in the weight for competitive/collaborative amplitude and weighted clustering coefficient for competitive/collaborative intensity. Finally, these models and indicators were further applied to empirically analyze the function of sectors in the latest World Input-Output Database, to reveal inter-sector competitive/collaborative status during the economic globalization. PMID:28873432

  14. Analysis of inter-country input-output table based on citation network: How to measure the competition and collaboration between industrial sectors on the global value chain.

    PubMed

    Xing, Lizhi

    2017-01-01

    The input-output table is comprehensive and detailed in describing the national economic system with complex economic relationships, which embodies information of supply and demand among industrial sectors. This paper aims to scale the degree of competition/collaboration on the global value chain from the perspective of econophysics. Global Industrial Strongest Relevant Network models were established by extracting the strongest and most immediate industrial relevance in the global economic system with inter-country input-output tables and then transformed into Global Industrial Resource Competition Network/Global Industrial Production Collaboration Network models embodying the competitive/collaborative relationships based on bibliographic coupling/co-citation approach. Three indicators well suited for these two kinds of weighted and non-directed networks with self-loops were introduced, including unit weight for competitive/collaborative power, disparity in the weight for competitive/collaborative amplitude and weighted clustering coefficient for competitive/collaborative intensity. Finally, these models and indicators were further applied to empirically analyze the function of sectors in the latest World Input-Output Database, to reveal inter-sector competitive/collaborative status during the economic globalization.
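    The three indicators can be illustrated on a toy weighted graph. The sketch below assumes unit weight is node strength divided by degree and disparity is the sum of squared normalized edge weights, and it uses networkx's weighted clustering coefficient, which may differ from the paper's exact definition; the sector names and weights are hypothetical, not WIOD data.

```python
# Toy illustration of node-level indicators on a weighted, undirected network.
import networkx as nx

G = nx.Graph()
# Hypothetical inter-sector links with weights (not World Input-Output data).
G.add_weighted_edges_from([("agri", "food", 3.0), ("food", "retail", 2.0),
                           ("agri", "retail", 1.0), ("energy", "food", 4.0)])

for n in G.nodes:
    k = G.degree(n)                                      # number of links
    s = G.degree(n, weight="weight")                     # node strength
    unit_weight = s / k                                  # assumed: strength per link
    disparity = sum((d["weight"] / s) ** 2 for _, _, d in G.edges(n, data=True))
    clustering = nx.clustering(G, n, weight="weight")    # one weighted definition
    print(f"{n}: unit weight={unit_weight:.2f}, disparity={disparity:.2f}, "
          f"weighted clustering={clustering:.2f}")
```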

  15. Adaptive model predictive process control using neural networks

    DOEpatents

    Buescher, K.L.; Baum, C.C.; Jones, R.D.

    1997-08-19

    A control system for controlling the output of at least one plant process output parameter is implemented by adaptive model predictive control using a neural network. An improved method and apparatus provides for sampling plant output and control input at a first sampling rate to provide control inputs at the fast rate. The MPC system is, however, provided with a network state vector that is constructed at a second, slower rate so that the input control values used by the MPC system are averaged over a gapped time period. Another improvement is a provision for on-line training that may include difference training, curvature training, and basis center adjustment to maintain the weights and basis centers of the neural network in an updated state that can follow changes in the plant operation apart from initial off-line training data. 46 figs.

  16. Adaptive model predictive process control using neural networks

    DOEpatents

    Buescher, Kevin L.; Baum, Christopher C.; Jones, Roger D.

    1997-01-01

    A control system for controlling the output of at least one plant process output parameter is implemented by adaptive model predictive control using a neural network. An improved method and apparatus provides for sampling plant output and control input at a first sampling rate to provide control inputs at the fast rate. The MPC system is, however, provided with a network state vector that is constructed at a second, slower rate so that the input control values used by the MPC system are averaged over a gapped time period. Another improvement is a provision for on-line training that may include difference training, curvature training, and basis center adjustment to maintain the weights and basis centers of the neural network in an updated state that can follow changes in the plant operation apart from initial off-line training data.

  17. A two-stage DEA approach for environmental efficiency measurement.

    PubMed

    Song, Malin; Wang, Shuhong; Liu, Wei

    2014-05-01

    The slacks-based measure (SBM) model based on constant returns to scale has achieved some good results in addressing undesirable outputs, such as waste water and waste gas, in measuring environmental efficiency. However, the traditional SBM model cannot deal with the scenario in which desirable outputs are constant. Based on the axiomatic theory of productivity, this paper carries out systematic research on the SBM model considering undesirable outputs, and further expands the SBM model from the perspective of network analysis. The new model can not only perform efficiency evaluation considering undesirable outputs, but also calculate desirable and undesirable outputs separately. The latter advantage successfully solves the "dependence" problem of outputs, that is, we cannot increase the desirable outputs without producing any undesirable outputs. The following illustration shows that the efficiency values obtained by the two-stage approach are smaller than those obtained by the traditional SBM model. Our approach provides a more profound analysis of how to improve the environmental efficiency of the decision making units.
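    For reference, one commonly cited formulation of the slacks-based measure with undesirable outputs (following Tone) is sketched below for a unit with inputs x_0, desirable outputs y_0^g, and undesirable outputs y_0^b; this is background notation only and not necessarily the exact two-stage network model proposed by the authors.

```latex
% SBM with undesirable outputs (Tone-style formulation; given as background).
\rho^{*} = \min_{\lambda,\, s^{-},\, s^{g},\, s^{b}}
\frac{1 - \tfrac{1}{m}\sum_{i=1}^{m} s_{i}^{-}/x_{i0}}
     {1 + \tfrac{1}{s_{1}+s_{2}}\left(\sum_{r=1}^{s_{1}} s_{r}^{g}/y_{r0}^{g}
        + \sum_{r=1}^{s_{2}} s_{r}^{b}/y_{r0}^{b}\right)}
\quad \text{s.t.} \quad
x_{0} = X\lambda + s^{-}, \;
y_{0}^{g} = Y^{g}\lambda - s^{g}, \;
y_{0}^{b} = Y^{b}\lambda + s^{b}, \;
\lambda,\, s^{-},\, s^{g},\, s^{b} \ge 0
```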

  18. Assessing the importance of rainfall uncertainty on hydrological models with different spatial and temporal scale

    NASA Astrophysics Data System (ADS)

    Nossent, Jiri; Pereira, Fernando; Bauwens, Willy

    2015-04-01

    Precipitation is one of the key inputs for hydrological models. As long as the values of the hydrological model parameters are fixed, a variation of the rainfall input is expected to induce a change in the model output. Given the increased awareness of uncertainty in rainfall records, it becomes more important to understand the impact of this input-output dynamic. Yet, modellers often still have the intention to mimic the observed flow, whatever the deviation of the employed records from the actual rainfall might be, by recklessly adapting the model parameter values. But is it actually possible to vary the model parameter values in such a way that a certain (observed) model output can be generated based on inaccurate rainfall inputs? Thus, how important is the rainfall uncertainty for the model output with respect to the model parameter importance? To address this question, we apply the Sobol' sensitivity analysis method to assess and compare the importance of the rainfall uncertainty and the model parameters on the output of the hydrological model. In order to be able to treat the regular model parameters and input uncertainty in the same way, and to allow a comparison of their influence, a possible approach is to represent the rainfall uncertainty by a parameter. To tackle the latter issue, we apply so-called rainfall multipliers to hydrologically independent storm events, as a probabilistic parameter representation of the possible rainfall variation. As available rainfall records are very often point measurements at a discrete time step (hourly, daily, monthly,…), they contain uncertainty due to a latent lack of spatial and temporal variability. The influence of the latter variability can also be different for hydrological models with different spatial and temporal scale. Therefore, we perform the sensitivity analyses on a semi-distributed model (SWAT) and a lumped model (NAM). The assessment and comparison of the importance of the rainfall uncertainty and the model parameters is achieved by considering different scenarios for the included parameters and the state of the models.
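    The sketch below shows the general pattern of such an analysis using the SALib package, with a rainfall multiplier treated as one more uncertain factor alongside model parameters; the toy discharge function, parameter names, and bounds are assumptions for illustration, not SWAT or NAM.

```python
# Sketch of the general idea: include a rainfall multiplier among the uncertain
# factors and compare Sobol' indices. The toy model below is a stand-in.
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

problem = {
    "num_vars": 3,
    "names": ["storage_coeff", "runoff_coeff", "rain_multiplier"],
    "bounds": [[0.1, 0.9], [0.1, 0.9], [0.7, 1.3]],
}

def toy_discharge(params, rain=50.0):
    storage, runoff, mult = params
    return runoff * (mult * rain) * (1.0 - storage)   # placeholder response

X = saltelli.sample(problem, 1024)          # Saltelli sampling design
Y = np.array([toy_discharge(x) for x in X])
Si = sobol.analyze(problem, Y)
for name, s1, st in zip(problem["names"], Si["S1"], Si["ST"]):
    print(f"{name}: first-order={s1:.2f}, total={st:.2f}")
```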

  19. A Global Repository for Planet-Sized Experiments and Observations

    NASA Technical Reports Server (NTRS)

    Williams, Dean; Balaji, V.; Cinquini, Luca; Denvil, Sebastien; Duffy, Daniel; Evans, Ben; Ferraro, Robert D.; Hansen, Rose; Lautenschlager, Michael; Trenham, Claire

    2016-01-01

    Working across U.S. federal agencies, international agencies, and multiple worldwide data centers, and spanning seven international network organizations, the Earth System Grid Federation (ESGF) allows users to access, analyze, and visualize data using a globally federated collection of networks, computers, and software. Its architecture employs a system of geographically distributed peer nodes that are independently administered yet united by common federation protocols and application programming interfaces (APIs). The full ESGF infrastructure has now been adopted by multiple Earth science projects and allows access to petabytes of geophysical data, including the Coupled Model Intercomparison Project (CMIP) output used by the Intergovernmental Panel on Climate Change assessment reports. Data served by ESGF not only include model output (i.e., CMIP simulation runs) but also include observational data from satellites and instruments, reanalyses, and generated images. Metadata summarize basic information about the data for fast and easy data discovery.

  20. SHERMAN, a shape-based thermophysical model. I. Model description and validation

    NASA Astrophysics Data System (ADS)

    Magri, Christopher; Howell, Ellen S.; Vervack, Ronald J.; Nolan, Michael C.; Fernández, Yanga R.; Marshall, Sean E.; Crowell, Jenna L.

    2018-03-01

    SHERMAN, a new thermophysical modeling package designed for analyzing near-infrared spectra of asteroids and other solid bodies, is presented. The model's features, the methods it uses to solve for surface and subsurface temperatures, and the synthetic data it outputs are described. A set of validation tests demonstrates that SHERMAN produces accurate output in a variety of special cases for which correct results can be derived from theory. These cases include a family of solutions to the heat equation for which thermal inertia can have any value and thermophysical properties can vary with depth and with temperature. An appendix describes a new approximation method for estimating surface temperatures within spherical-section craters, more suitable for modeling infrared beaming at short wavelengths than the standard method.

  1. Climate Science's Globally Distributed Infrastructure

    NASA Astrophysics Data System (ADS)

    Williams, D. N.

    2016-12-01

    The Earth System Grid Federation (ESGF) is primarily funded by the Department of Energy's (DOE's) Office of Science (the Office of Biological and Environmental Research [BER] Climate Data Informatics Program and the Office of Advanced Scientific Computing Research Next Generation Network for Science Program), the National Oceanic and Atmospheric Administration (NOAA), the National Aeronautics and Space Administration (NASA), the National Science Foundation (NSF), the European Infrastructure for the European Network for Earth System Modeling (IS-ENES), and the Australian National University (ANU). Support also comes from other U.S. federal and international agencies. The federation works across multiple worldwide data centers and spans seven international network organizations to provide users with the ability to access, analyze, and visualize data using a globally federated collection of networks, computers, and software. Its architecture employs a series of geographically distributed peer nodes that are independently administered and united by common federation protocols and application programming interfaces (APIs). The full ESGF infrastructure has now been adopted by multiple Earth science projects and allows access to petabytes of geophysical data, including the Coupled Model Intercomparison Project (CMIP; output used by the Intergovernmental Panel on Climate Change assessment reports), multiple model intercomparison projects (MIPs; endorsed by the World Climate Research Programme [WCRP]), and the Accelerated Climate Modeling for Energy (ACME; ESGF is included in the overarching ACME workflow process to store model output). ESGF is a successful example of integration of disparate open-source technologies into a cohesive functional system that serves the needs of the global climate science community. Data served by ESGF includes not only model output but also observational data from satellites and instruments, reanalysis, and generated images.

  2. The first of a series of high efficiency, high bmep, turbocharged two-stroke cycle diesel engines; the general motors EMD 645FB engine

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kotlin, J.J.; Dunteman, N.R.; Scott, D.I.

    1983-01-01

    The current Electro-Motive Division 645 Series turbocharged engines are the Model FB and EC. The FB engine combines the highest thermal efficiency with the highest specific output of any EMD engine to date. The FB Series incorporates 16:1 compression ratio with a fire ring piston and an improved turbocharger design. Engine components included in the FB engine provide very high output levels with exceptional reliability. This paper also describes the performance of the lower rated Model EC engine series which feature high thermal efficiency and utilize many engine components well proven in service and basic to the Model FB Series.

  3. Evaluating Multi-Input/Multi-Output Digital Control Systems

    NASA Technical Reports Server (NTRS)

    Pototzky, Anthony S.; Wieseman, Carol D.; Hoadley, Sherwood T.; Mukhopadhyay, Vivek

    1994-01-01

    Controller-performance-evaluation (CPE) methodology for multi-input/multi-output (MIMO) digital control systems developed. Procedures identify potentially destabilizing controllers and confirm satisfactory performance of stabilizing ones. Methodology generic and used in many types of multi-loop digital-controller applications, including digital flight-control systems, digitally controlled spacecraft structures, and actively controlled wind-tunnel models. Also applicable to other complex, highly dynamic digital controllers, such as those in high-performance robot systems.

  4. Limitations of JEDI Models | Jobs and Economic Development Impact Models |

    Science.gov Websites

    The Jobs and Economic Development Impact (JEDI) models are input-output based models for assessing economic impacts and jobs (see Chapter 5, pp. 136-142). Their results are estimates rather than a precise forecast and do not reflect many other economic impacts that could affect real-world impacts on jobs from the project.

  5. A global sensitivity analysis approach for morphogenesis models.

    PubMed

    Boas, Sonja E M; Navarro Jimenez, Maria I; Merks, Roeland M H; Blom, Joke G

    2015-11-21

    Morphogenesis is a developmental process in which cells organize into shapes and patterns. Complex, non-linear and multi-factorial models with images as output are commonly used to study morphogenesis. It is difficult to understand the relation between the uncertainty in the input and the output of such 'black-box' models, giving rise to the need for sensitivity analysis tools. In this paper, we introduce a workflow for a global sensitivity analysis approach to study the impact of single parameters and the interactions between them on the output of morphogenesis models. To demonstrate the workflow, we used a published, well-studied model of vascular morphogenesis. The parameters of this cellular Potts model (CPM) represent cell properties and behaviors that drive the mechanisms of angiogenic sprouting. The global sensitivity analysis correctly identified the dominant parameters in the model, consistent with previous studies. Additionally, the analysis provided information on the relative impact of single parameters and of interactions between them. This is very relevant because interactions of parameters impede the experimental verification of the predicted effect of single parameters. The parameter interactions, although of low impact, also provided new insights into the mechanisms of in silico sprouting. Finally, the analysis indicated that the model could be reduced by one parameter. We propose global sensitivity analysis as an alternative approach to study the mechanisms of morphogenesis. Comparison of the ranking of the impact of the model parameters to knowledge derived from experimental data and from manipulation experiments can help to falsify models and to find the operand mechanisms in morphogenesis. The workflow is applicable to all 'black-box' models, including high-throughput in vitro models in which output measures are affected by a set of experimental perturbations.

  6. Quantifying uncertainty in high-resolution coupled hydrodynamic-ecosystem models

    NASA Astrophysics Data System (ADS)

    Allen, J. I.; Somerfield, P. J.; Gilbert, F. J.

    2007-01-01

    Marine ecosystem models are becoming increasingly complex and sophisticated, and are being used to estimate the effects of future changes in the earth system with a view to informing important policy decisions. Despite their potential importance, far too little attention has been, and is generally, paid to model errors and the extent to which model outputs actually relate to real-world processes. With the increasing complexity of the models themselves comes an increasing complexity among model results. If we are to develop useful modelling tools for the marine environment we need to be able to understand and quantify the uncertainties inherent in the simulations. Analysing errors within highly multivariate model outputs, and relating them to even more complex and multivariate observational data, are not trivial tasks. Here we describe the application of a series of techniques, including a 2-stage self-organising map (SOM), non-parametric multivariate analysis, and error statistics, to a complex spatio-temporal model run for the period 1988-1989 in the Southern North Sea, coinciding with the North Sea Project which collected a wealth of observational data. We use model output, large spatio-temporally resolved data sets and a combination of methodologies (SOM, MDS, uncertainty metrics) to simplify the problem and to provide tractable information on model performance. The use of a SOM as a clustering tool allows us to simplify the dimensions of the problem while the use of MDS on independent data grouped according to the SOM classification allows us to validate the SOM. The combination of classification and uncertainty metrics allows us to pinpoint the variables and associated processes which require attention in each region. We recommend the use of this combination of techniques for simplifying complex comparisons of model outputs with real data, and analysis of error distributions.
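    As a minimal illustration of SOM-based clustering of multivariate model output before comparison with observations (not the paper's 2-stage SOM configuration), the sketch below uses the MiniSom package on randomly generated, standardised data; the grid size, variables, and training settings are assumed values.

```python
# Minimal illustration of clustering multivariate model output with a
# self-organising map. MiniSom is used only as an example package.
import numpy as np
from minisom import MiniSom

rng = np.random.default_rng(0)
# Hypothetical model output: rows are grid cells/time steps, columns are
# standardised variables (e.g. chlorophyll, nitrate, temperature).
model_output = rng.normal(size=(500, 3))

som = MiniSom(x=4, y=4, input_len=3, sigma=1.0, learning_rate=0.5, random_seed=0)
som.random_weights_init(model_output)
som.train_random(model_output, num_iteration=2000)

# Assign each sample to its best-matching SOM node (a cluster label).
labels = [som.winner(sample) for sample in model_output]
print("first five cluster assignments:", labels[:5])
```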

  7. Mars Global Reference Atmospheric Model 2001 Version (Mars-GRAM 2001): Users Guide

    NASA Technical Reports Server (NTRS)

    Justus, C. G.; Johnson, D. L.

    2001-01-01

    This document presents Mars Global Reference Atmospheric Model 2001 Version (Mars-GRAM 2001) and its new features. As with the previous version (Mars-GRAM 2000), all parameterizations for temperature, pressure, density, and winds versus height, latitude, longitude, time of day, and season (Ls) use input data tables from the NASA Ames Mars General Circulation Model (MGCM) for the surface through 80-km altitude and the University of Arizona Mars Thermospheric General Circulation Model (MTGCM) for 80 to 70 km. Mars-GRAM 2001 is based on topography from the Mars Orbiter Laser Altimeter (MOLA) and includes new MGCM data at the topographic surface. A new auxiliary program allows Mars-GRAM output to be used to compute shortwave (solar) and longwave (thermal) radiation at the surface and top of atmosphere. This memorandum includes instructions for obtaining Mars-GRAM source code and data files and for running the program. It also provides sample input and output and an example for incorporating Mars-GRAM as an atmospheric subroutine in a trajectory code.

  8. Math Model for Naval Ship Handling Trainer.

    ERIC Educational Resources Information Center

    Golovcsenko, Igor V.

    The report describes the math model for an experimental ship handling trainer. The training task is that of a replenishment operation at sea. The model includes equations for ship dynamics of a destroyer, propeller-engine response times, ship separation, interaction effects between supply ship and destroyer, and outputs to a visual display system.…

  9. An enhanced archive facilitating climate impacts analysis

    USGS Publications Warehouse

    Maurer, E.P.; Brekke, L.; Pruitt, T.; Thrasher, B.; Long, J.; Duffy, P.; Dettinger, M.; Cayan, D.; Arnold, J.

    2014-01-01

    We describe the expansion of a publicly available archive of downscaled climate and hydrology projections for the United States. Those studying or planning to adapt to future climate impacts demand downscaled climate model output for local or regional use. The archive we describe attempts to fulfill this need by providing data in several formats, selectable to meet user needs. Our archive has served as a resource for climate impacts modelers, water managers, educators, and others. Over 1,400 individuals have transferred more than 50 TB of data from the archive. In response to user demands, the archive has expanded from monthly downscaled data to include daily data to facilitate investigations of phenomena sensitive to daily to monthly temperature and precipitation, including extremes in these quantities. New developments include downscaled output from the new Coupled Model Intercomparison Project phase 5 (CMIP5) climate model simulations at both the monthly and daily time scales, as well as simulations of surface hydrological variables. The web interface allows the extraction of individual projections or ensemble statistics for user-defined regions, promoting the rapid assessment of model consensus and uncertainty for future projections of precipitation, temperature, and hydrology. The archive is accessible online (http://gdo-dcp.ucllnl.org/downscaled_cmip_projections).

  10. A warm-season comparison of WRF coupled to the CLM4.0, Noah-MP, and Bucket hydrology land surface schemes over the central USA

    NASA Astrophysics Data System (ADS)

    Van Den Broeke, Matthew S.; Kalin, Andrew; Alavez, Jose Abraham Torres; Oglesby, Robert; Hu, Qi

    2017-11-01

    In climate modeling studies, there is a need to choose a suitable land surface model (LSM) while adhering to available resources. In this study, the viability of three LSM options (Community Land Model version 4.0 [CLM4.0], Noah-MP, and the five-layer thermal diffusion [Bucket] scheme) in the Weather Research and Forecasting model version 3.6 (WRF3.6) was examined for the warm season in a domain centered on the central USA. Model output was compared to Parameter-elevation Relationships on Independent Slopes Model (PRISM) data, a gridded observational dataset including mean monthly temperature and total monthly precipitation. Model output temperature, precipitation, latent heat (LH) flux, sensible heat (SH) flux, and soil water content (SWC) were compared to observations from sites in the Central and Southern Great Plains region. An overall warm bias was found in CLM4.0 and Noah-MP, with a cool bias of larger magnitude in the Bucket model. These three LSMs produced similar patterns of wet and dry biases. Model output of SWC and LH/SH fluxes was compared to observations and did not show a consistent bias. Both sophisticated LSMs appear to be viable options for simulating the effects of land use change in the central USA.

  11. A diagnostic interface for the ICOsahedral Non-hydrostatic (ICON) modelling framework based on the Modular Earth Submodel System (MESSy v2.50)

    NASA Astrophysics Data System (ADS)

    Kern, Bastian; Jöckel, Patrick

    2016-10-01

    Numerical climate and weather models have advanced to finer scales, accompanied by large amounts of output data. The model systems hit the input and output (I/O) bottleneck of modern high-performance computing (HPC) systems. We aim to apply diagnostic methods online during the model simulation instead of applying them as a post-processing step to written output data, to reduce the amount of I/O. To include diagnostic tools into the model system, we implemented a standardised, easy-to-use interface based on the Modular Earth Submodel System (MESSy) into the ICOsahedral Non-hydrostatic (ICON) modelling framework. The integration of the diagnostic interface into the model system is briefly described. Furthermore, we present a prototype implementation of an advanced online diagnostic tool for the aggregation of model data onto a user-defined regular coarse grid. This diagnostic tool will be used to reduce the amount of model output in future simulations. Performance tests of the interface and of two different diagnostic tools show that the interface itself introduces no overhead in the form of additional runtime to the model system. The diagnostic tools, however, have a significant impact on the model system's runtime. This overhead strongly depends on the characteristics and implementation of the diagnostic tool. A diagnostic tool with high inter-process communication introduces large overhead, whereas the additional runtime of a diagnostic tool without inter-process communication is low. We briefly describe our efforts to reduce the additional runtime from the diagnostic tools, and present a brief analysis of memory consumption. Future work will focus on optimisation of the memory footprint and the I/O operations of the diagnostic interface.
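    The aggregation idea itself, averaging fine-grid fields onto a user-defined regular coarse grid, can be illustrated offline in a few lines. The sketch below uses xarray's coarsen on a synthetic regular latitude-longitude field; the actual submodel operates online on ICON's unstructured grid, which is not shown here.

    ```python
    # Hedged offline illustration of coarse-grid aggregation by block averaging.
    import numpy as np
    import xarray as xr

    lat = np.arange(-89.875, 90, 0.25)          # 720 cells at 0.25 degrees
    lon = np.arange(0.125, 360, 0.25)           # 1440 cells at 0.25 degrees
    field = xr.DataArray(
        np.random.rand(lat.size, lon.size),
        coords={"lat": lat, "lon": lon},
        dims=("lat", "lon"),
        name="temperature",
    )

    # Aggregate 0.25 degree cells onto a 2 degree grid (8 x 8 blocks) by averaging.
    coarse = field.coarsen(lat=8, lon=8, boundary="trim").mean()
    print(field.shape, "->", coarse.shape)
    ```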

  12. Forecast of long term coal supply and mining conditions: Model documentation and results

    NASA Technical Reports Server (NTRS)

    1980-01-01

    A coal industry model was developed to support the Jet Propulsion Laboratory in its investigation of advanced underground coal extraction systems. The model documentation includes the programming for the coal mining cost models and an accompanying users' manual, and a guide to reading model output. The methodology used in assembling the transportation, demand, and coal reserve components of the model is also described. Results, presented for 1986 and 2000, include projections of coal production patterns and marginal prices, differentiated by coal sulfur content.

  13. Bayesian model calibration of computational models in velocimetry diagnosed dynamic compression experiments.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown, Justin; Hund, Lauren

    2017-02-01

    Dynamic compression experiments are being performed on complicated materials using increasingly complex drivers. The data produced in these experiments are beginning to reach a regime where traditional analysis techniques break down, requiring the solution of an inverse problem. A common measurement in dynamic experiments is an interface velocity as a function of time, and often this functional output can be simulated using a hydrodynamics code. Bayesian model calibration is a statistical framework to estimate inputs into a computational model in the presence of multiple uncertainties, making it well suited to measurements of this type. In this article, we apply Bayesian model calibration to high pressure (250 GPa) ramp compression measurements in tantalum. We address several issues specific to this calibration, including the functional nature of the output as well as parameter and model discrepancy identifiability. Specifically, we propose scaling the likelihood function by an effective sample size rather than modeling the autocorrelation function to accommodate the functional output, and propose sensitivity analyses using the notion of 'modularization' to assess the impact of experiment-specific nuisance input parameters on estimates of material properties. We conclude that the proposed Bayesian model calibration procedure results in simple, fast, and valid inferences on the equation of state parameters for tantalum.
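    To illustrate the likelihood-scaling idea in isolation, the sketch below computes a Gaussian log-likelihood for a functional residual and multiplies it by the ratio of a crude effective sample size to the number of points. The ESS estimate (from the lag-1 autocorrelation) and the toy arrays are assumptions for illustration, not the statistical model used in the paper.

    ```python
    # Hedged sketch: effective-sample-size scaling of a Gaussian log-likelihood
    # for an autocorrelated (functional) residual such as a velocity trace.
    import numpy as np

    def effective_sample_size(residual):
        """Crude ESS estimate from the lag-1 autocorrelation of the residual."""
        r = residual - residual.mean()
        rho1 = np.corrcoef(r[:-1], r[1:])[0, 1]
        return r.size * (1.0 - rho1) / (1.0 + rho1)

    def scaled_loglike(v_obs, v_sim, sigma):
        """Gaussian log-likelihood multiplied by ESS / n."""
        resid = v_obs - v_sim
        n = resid.size
        ess = max(effective_sample_size(resid), 1.0)
        loglike = -0.5 * np.sum((resid / sigma) ** 2) - n * np.log(sigma)
        return (ess / n) * loglike

    # Toy demonstration with a strongly autocorrelated residual.
    rng = np.random.default_rng(0)
    v_sim = np.zeros(400)
    v_obs = np.convolve(rng.normal(size=420), np.ones(20) / 20, mode="valid")[:400]
    print(scaled_loglike(v_obs, v_sim, sigma=0.1))
    ```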

  14. USEEIO: a New and Transparent United States ...

    EPA Pesticide Factsheets

    National-scope environmental life cycle models of goods and services may be used for many purposes, including but not limited to quantifying impacts of production and consumption of nations, assessing organization-wide impacts, identifying purchasing hot spots, analyzing environmental impacts of policies, and performing streamlined life cycle assessment. USEEIO is a new environmentally extended input-output model of the United States fit for such purposes and other sustainable materials management applications. USEEIO melds data on economic transactions between 389 industry sectors with environmental data for these sectors covering land, water, energy and mineral usage and emissions of greenhouse gases, criteria air pollutants, nutrients and toxics, to build a life cycle model of 385 US goods and services. In comparison with existing US input-output models, USEEIO is more current with most data representing year 2013, more extensive in its coverage of resources and emissions, more deliberate and detailed in its interpretation and combination of data sources, and includes formal data quality evaluation and description. USEEIO was assembled with a new Python module called the IO Model Builder capable of assembling and calculating results of user-defined input-output models and exporting the models into LCA software. The model and data quality evaluation capabilities are demonstrated with an analysis of the environmental performance of an average hospital in the US. All USEEIO f
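    The core arithmetic of an environmentally extended input-output model of this kind is the Leontief calculation, total impacts = B (I - A)^-1 y. The toy Python sketch below shows it for three made-up sectors; the numbers are placeholders and bear no relation to the USEEIO data.

    ```python
    # Hedged toy illustration of environmentally extended input-output arithmetic.
    import numpy as np

    A = np.array([[0.10, 0.05, 0.02],     # direct requirements (per $ of sector output)
                  [0.20, 0.10, 0.05],
                  [0.05, 0.10, 0.15]])
    B = np.array([[0.5, 1.2, 0.3]])       # kg CO2e emitted per $ of sector output
    y = np.array([1.0e6, 5.0e5, 2.0e5])   # final demand ($)

    L = np.linalg.inv(np.eye(3) - A)      # Leontief total requirements matrix
    total_output = L @ y                  # direct plus indirect production
    impacts = B @ total_output            # supply-chain-wide emissions
    print(f"total CO2e: {impacts[0]:,.0f} kg")
    ```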

  15. Aeroelastic Modeling of X-56A Stiff-Wing Configuration Flight Test Data

    NASA Technical Reports Server (NTRS)

    Grauer, Jared A.; Boucher, Matthew J.

    2017-01-01

    Aeroelastic stability and control derivatives for the X-56A Multi-Utility Technology Testbed (MUTT), in the stiff-wing configuration, were estimated from flight test data using the output-error method. Practical aspects of the analysis are discussed. The orthogonal phase-optimized multisine inputs provided excellent data information for aeroelastic modeling. Consistent parameter estimates were determined using output error in both the frequency and time domains. The frequency domain analysis converged faster and was less sensitive to starting values for the model parameters, which was useful for determining the aeroelastic model structure and obtaining starting values for the time domain analysis. Including a modal description of the structure from a finite element model reduced the complexity of the estimation problem and improved the modeling results. Effects of reducing the model order on the short period stability and control derivatives were investigated.

  16. A Deterministic Interfacial Cyclic Oxidation Spalling Model. Part 1; Model Development and Parametric Response

    NASA Technical Reports Server (NTRS)

    Smialek, James L.

    2002-01-01

    An equation has been developed to model the iterative scale growth and spalling process that occurs during cyclic oxidation of high temperature materials. Parabolic scale growth and spalling of a constant surface area fraction have been assumed. Interfacial spallation of only the thickest segments was also postulated. This simplicity allowed for representation by a simple deterministic summation series. Inputs are the parabolic growth rate constant, the spall area fraction, oxide stoichiometry, and cycle duration. Outputs include the net weight change behavior, as well as the total amount of oxygen and metal consumed, the total amount of oxide spalled, and the mass fraction of oxide spalled. The outputs all follow typical well-behaved trends with the inputs and are in good agreement with previous interfacial models.
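    A simplified numerical version of that grow-and-spall bookkeeping is easy to write down. The Python sketch below tracks equal-area surface segments, grows the oxide parabolically each cycle, spalls a fixed area fraction of the thickest segments at the interface, and accumulates a net specific weight change. The constants and the exact bookkeeping are illustrative choices, not the paper's formulation.

    ```python
    # Hedged, simplified grow-and-spall loop for cyclic oxidation weight change.
    import numpy as np

    kp = 0.01        # parabolic rate constant, (mg/cm^2)^2 per hour (illustrative)
    dt = 1.0         # cycle duration, hours
    f_spall = 0.02   # area fraction spalled per cycle (thickest segments only)
    oxygen_frac = 0.47   # oxygen mass fraction in an Al2O3-like oxide

    n_seg = 1000                      # equal-area surface segments
    age = np.zeros(n_seg)             # cumulative oxidation time per segment
    spalled_oxide = 0.0               # area-averaged oxide mass lost so far, mg/cm^2
    net_gain = []

    for cycle in range(500):
        age += dt                                      # every segment grows this cycle
        oxide = np.sqrt(kp * age)                      # parabolic oxide mass per segment
        thickest = np.argsort(oxide)[-int(f_spall * n_seg):]
        spalled_oxide += oxide[thickest].sum() / n_seg # area-weighted spall loss
        age[thickest] = 0.0                            # spalled area re-oxidizes from bare metal
        retained = np.sqrt(kp * age).sum() / n_seg     # area-averaged retained oxide
        # net specific weight change = oxygen gained in retained scale
        #                              minus metal lost in the spalled oxide
        net_gain.append(retained * oxygen_frac - spalled_oxide * (1 - oxygen_frac))

    print(f"net weight change after 500 cycles: {net_gain[-1]:+.3f} mg/cm^2")
    ```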

  17. Fallon, Nevada FORGE Thermal-Hydrological-Mechanical Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Blankenship, Doug; Sonnenthal, Eric

    Archive contains thermal-mechanical simulation input/output files. Included are files which fall into the following categories: (1) spreadsheets with various input parameter calculations; (2) final simulation inputs; (3) native-state thermal-hydrological model input file folders; (4) native-state thermal-hydrological-mechanical model input files; (5) THM model stimulation cases. See the 'File Descriptions.xlsx' resource below for additional information on individual files.

  18. Accessing National Water Model Output for Research and Application: An R package

    NASA Astrophysics Data System (ADS)

    Johnson, M.; Coll, J.

    2017-12-01

    With the National Water Model becoming operational in August of 2016, the need for an open-source way to translate a huge amount of data into actionable intelligence and innovative research is apparent. The first step in doing this is to provide a package for accessing, managing, and writing data in a way that is interpretable, portable, and useful to the end user both in the R environment and in other applications. This can be as simple as subsetting the outputs and writing them to a CSV, but can also include converting discharge output to more meaningful statistics and measurements, and methods to visualize data in ways that are meaningful to a wider audience. The NWM R package presented here aims to serve this need through a suite of functions fit for researchers, first responders, and average citizens. A vignette showing how this package can be applied to real-time flood mapping will be demonstrated.

  19. Integrated controls design optimization

    DOEpatents

    Lou, Xinsheng; Neuschaefer, Carl H.

    2015-09-01

    A control system (207) for optimizing a chemical looping process of a power plant includes an optimizer (420), an income algorithm (230), a cost algorithm (225), and chemical looping process models. The process models are used to predict the process outputs from process input variables. Some of the process input and output variables are related to the income of the plant, and others are related to the cost of the plant operations. The income algorithm (230) provides an income input to the optimizer (420) based on a plurality of input parameters (215) of the power plant. The cost algorithm (225) provides a cost input to the optimizer (420) based on a plurality of output parameters (220) of the power plant. The optimizer (420) determines an optimized operating parameter solution based on at least one of the income input and the cost input, and supplies the optimized operating parameter solution to the power plant.

  20. The temporal representation of speech in a nonlinear model of the guinea pig cochlea

    NASA Astrophysics Data System (ADS)

    Holmes, Stephen D.; Sumner, Christian J.; O'Mard, Lowel P.; Meddis, Ray

    2004-12-01

    The temporal representation of speechlike stimuli in the auditory-nerve output of a guinea pig cochlea model is described. The model consists of a bank of dual resonance nonlinear filters that simulate the vibratory response of the basilar membrane followed by a model of the inner hair cell/auditory nerve complex. The model is evaluated by comparing its output with published physiological auditory nerve data in response to single and double vowels. The evaluation includes analyses of individual fibers, as well as ensemble responses over a wide range of best frequencies. In all cases the model response closely follows the patterns in the physiological data, particularly the tendency for the temporal firing pattern of each fiber to represent the frequency of a nearby formant of the speech sound. In the model this behavior is largely a consequence of filter shapes; nonlinear filtering has only a small contribution at low frequencies. The guinea pig cochlear model produces a useful simulation of the measured physiological response to simple speech sounds and is therefore suitable for use in more advanced applications, including attempts to generalize these principles to the response of the human auditory system, both normal and impaired.

  1. Sensitivity Analysis of the Land Surface Model NOAH-MP for Different Model Fluxes

    NASA Astrophysics Data System (ADS)

    Mai, Juliane; Thober, Stephan; Samaniego, Luis; Branch, Oliver; Wulfmeyer, Volker; Clark, Martyn; Attinger, Sabine; Kumar, Rohini; Cuntz, Matthias

    2015-04-01

    Land Surface Models (LSMs) use a plenitude of process descriptions to represent the carbon, energy and water cycles. They are highly complex and computationally expensive. Practitioners, however, are often only interested in specific outputs of the model such as latent heat or surface runoff. In model applications like parameter estimation, the most important parameters are then chosen by experience or expert knowledge. Hydrologists interested in surface runoff therefore choose mostly soil parameters, while biogeochemists interested in carbon fluxes focus on vegetation parameters. However, this might lead to the omission of parameters that are important, for example, through strong interactions with the parameters chosen. It also happens during model development that some process descriptions contain fixed values, which are supposedly unimportant parameters. However, these hidden parameters normally remain undetected although they might be highly relevant during model calibration. Sensitivity analyses are used to identify informative model parameters for a specific model output. Standard methods for sensitivity analysis such as Sobol indexes require large numbers of model evaluations, specifically in the case of many model parameters. We hence propose to first use a recently developed inexpensive sequential screening method based on Elementary Effects that has proven to identify the relevant informative parameters. This reduces the number of parameters and therefore model evaluations for subsequent analyses such as sensitivity analysis or model calibration. In this study, we quantify parametric sensitivities of the land surface model NOAH-MP, a state-of-the-art LSM used at the regional scale as the land surface scheme of the atmospheric Weather Research and Forecasting Model (WRF). NOAH-MP contains multiple process parameterizations yielding a considerable number of parameters (~100). Sensitivities for the three model outputs (a) surface runoff, (b) soil drainage and (c) latent heat are calculated on twelve Model Parameter Estimation Experiment (MOPEX) catchments ranging in size from 1020 to 4421 km2. This allows investigation of parametric sensitivities for distinct hydro-climatic characteristics, emphasizing different land-surface processes. The sequential screening identifies the most informative parameters of NOAH-MP for different model output variables. The number of parameters is reduced substantially for all three model outputs, to approximately 25. The subsequent Sobol method quantifies the sensitivities of these informative parameters. The study demonstrates the existence of sensitive, important parameters in almost all parts of the model irrespective of the considered output. Soil parameters, for example, are informative for all three output variables, whereas plant parameters are informative not only for latent heat but also for soil drainage because soil drainage is strongly coupled to transpiration through the soil water balance. These results contrast with the choice of only soil parameters in hydrological studies and only plant parameters in biogeochemical ones. The sequential screening identified several important hidden parameters that carry large sensitivities and hence have to be included during model calibration.
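    A scaled-down version of this two-step strategy can be written with the SALib package: an Elementary Effects (Morris) screening to discard uninformative parameters, followed by Sobol indices for the survivors. The sketch below uses a cheap toy function in place of NOAH-MP, which obviously cannot be run here, and the screening threshold is an arbitrary choice; the sequential screening used in the study is a related but more elaborate procedure.

    ```python
    # Hedged sketch: Elementary Effects screening followed by Sobol indices (SALib).
    import numpy as np
    from SALib.sample.morris import sample as morris_sample
    from SALib.analyze.morris import analyze as morris_analyze
    from SALib.sample import saltelli
    from SALib.analyze import sobol

    def toy_model(x):                       # cheap stand-in for one model output
        return x[:, 0] ** 2 + 0.5 * x[:, 1] + 0.01 * x[:, 2] + 0.2 * x[:, 0] * x[:, 1]

    problem = {"num_vars": 3,
               "names": ["p1", "p2", "p3"],
               "bounds": [[0, 1], [0, 1], [0, 1]]}

    # Step 1: Elementary Effects screening to find the informative parameters.
    X = morris_sample(problem, N=100, num_levels=4)
    mu_star = morris_analyze(problem, X, toy_model(X), num_levels=4)["mu_star"]
    kept_idx = [i for i, m in enumerate(mu_star) if m > 0.05]   # arbitrary cut-off
    print("informative parameters:", [problem["names"][i] for i in kept_idx])

    # Step 2: Sobol indices for the screened (informative) parameters only.
    screened = {"num_vars": len(kept_idx),
                "names": [problem["names"][i] for i in kept_idx],
                "bounds": [problem["bounds"][i] for i in kept_idx]}
    Xs = saltelli.sample(screened, 1024)

    def padded(xs):
        """Fix screened-out parameters at a nominal value of 0.5."""
        full = np.full((xs.shape[0], problem["num_vars"]), 0.5)
        full[:, kept_idx] = xs
        return full

    Si = sobol.analyze(screened, toy_model(padded(Xs)))
    print(dict(zip(screened["names"], Si["S1"])))
    ```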

  2. Identification of Low Order Equivalent System Models From Flight Test Data

    NASA Technical Reports Server (NTRS)

    Morelli, Eugene A.

    2000-01-01

    Identification of low order equivalent system dynamic models from flight test data was studied. Inputs were pilot control deflections, and outputs were aircraft responses, so the models characterized the total aircraft response including bare airframe and flight control system. Theoretical investigations were conducted and related to results found in the literature. Low order equivalent system modeling techniques using output error and equation error parameter estimation in the frequency domain were developed and validated on simulation data. It was found that some common difficulties encountered in identifying closed loop low order equivalent system models from flight test data could be overcome using the developed techniques. Implications for data requirements and experiment design were discussed. The developed methods were demonstrated using realistic simulation cases, then applied to closed loop flight test data from the NASA F-18 High Alpha Research Vehicle.

  3. Analytically tractable climate-carbon cycle feedbacks under 21st century anthropogenic forcing

    NASA Astrophysics Data System (ADS)

    Lade, Steven J.; Donges, Jonathan F.; Fetzer, Ingo; Anderies, John M.; Beer, Christian; Cornell, Sarah E.; Gasser, Thomas; Norberg, Jon; Richardson, Katherine; Rockström, Johan; Steffen, Will

    2018-05-01

    Changes to climate-carbon cycle feedbacks may significantly affect the Earth system's response to greenhouse gas emissions. These feedbacks are usually analysed from numerical output of complex and arguably opaque Earth system models. Here, we construct a stylised global climate-carbon cycle model, test its output against comprehensive Earth system models, and investigate the strengths of its climate-carbon cycle feedbacks analytically. The analytical expressions we obtain aid understanding of carbon cycle feedbacks and the operation of the carbon cycle. Specific results include that different feedback formalisms measure fundamentally the same climate-carbon cycle processes; temperature dependence of the solubility pump, biological pump, and CO2 solubility all contribute approximately equally to the ocean climate-carbon feedback; and concentration-carbon feedbacks may be more sensitive to future climate change than climate-carbon feedbacks. Simple models such as that developed here also provide workbenches for simple but mechanistically based explorations of Earth system processes, such as interactions and feedbacks between the planetary boundaries, that are currently too uncertain to be included in comprehensive Earth system models.

  4. Power converter using near-load output capacitance, direct inductor contact, and/or remote current sense

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Coteus, Paul W.; Ferencz, Andrew; Hall, Shawn A.

    An apparatus includes a first circuit board including first components including a load, and a second circuit board including second components including switching power devices and an output inductor. Ground and output voltage contacts between the circuit boards are made through soldered or connectorized interfaces. Certain components on the first circuit board and certain components, including the output inductor, on the second circuit board act as a DC-DC voltage converter for the load. An output capacitance for the conversion is on the first circuit board with no board-to-board interface between the output capacitance and the load. The inductance of the board-to-board interface functions as part of the output inductor's inductance and not as a parasitic inductance. Sense components for sensing current through the output inductor are located on the first circuit board. Parasitic inductance of the board-to-board interface has less effect on a sense signal provided to a controller.

  5. Venus Global Reference Atmospheric Model

    NASA Technical Reports Server (NTRS)

    Justh, Hilary L.

    2017-01-01

    Venus Global Reference Atmospheric Model (Venus-GRAM) is an engineering-level atmospheric model developed by MSFC that is widely used for diverse mission applications, including systems design, performance analysis, and operations planning for aerobraking, Entry, Descent and Landing, and aerocapture. It is not a forecast model. Outputs include density, temperature, pressure, wind components, and chemical composition. The model provides dispersions of thermodynamic parameters, winds, and density, and accepts optional trajectory and auxiliary profile input files. Venus-GRAM has been used in multiple studies and proposals, including the NASA Engineering and Safety Center (NESC) Autonomous Aerobraking study and various Discovery proposals. It was released in 2005 and is available at: https://software.nasa.gov/software/MFS-32314-1.

  6. User's Guide To CHEAP0 II-Economic Analysis of Stand Prognosis Model Outputs

    Treesearch

    Joseph E. Horn; E. Lee Medema; Ervin G. Schuster

    1986-01-01

    CHEAP0 II provides supplemental economic analysis capability for users of version 5.1 of the Stand Prognosis Model, including recent regeneration and insect outbreak extensions. Although patterned after the old CHEAP0 model, CHEAP0 II has more features and analytic capabilities, especially for analysis of existing and uneven-aged stands....

  7. Climate Model Ensemble Methodology: Rationale and Challenges

    NASA Astrophysics Data System (ADS)

    Vezer, M. A.; Myrvold, W.

    2012-12-01

    A tractable model of the Earth's atmosphere, or, indeed, any large, complex system, is inevitably unrealistic in a variety of ways. This will have an effect on the model's output. Nonetheless, we want to be able to rely on certain features of the model's output in studies aiming to detect, attribute, and project climate change. For this, we need assurance that these features reflect the target system, and are not artifacts of the unrealistic assumptions that go into the model. One technique for overcoming these limitations is to study ensembles of models which employ different simplifying assumptions and different methods of modelling. One then either takes as reliable certain outputs on which models in the ensemble agree, or takes the average of these outputs as the best estimate. Since the Intergovernmental Panel on Climate Change's Fourth Assessment Report (IPCC AR4) modellers have aimed to improve ensemble analysis by developing techniques to account for dependencies among models, and to ascribe unequal weights to models according to their performance. The goal of this paper is to present as clearly and cogently as possible the rationale for climate model ensemble methodology, the motivation of modellers to account for model dependencies, and their efforts to ascribe unequal weights to models. The method of our analysis is as follows. We will consider a simpler, well-understood case of taking the mean of a number of measurements of some quantity. Contrary to what is sometimes said, it is not a requirement of this practice that the errors of the component measurements be independent; one must, however, compensate for any lack of independence. We will also extend the usual accounts to include cases of unknown systematic error. We draw parallels between this simpler illustration and the more complex example of climate model ensembles, detailing how ensembles can provide more useful information than any of their constituent models. This account emphasizes the epistemic importance of considering degrees of model dependence, and the practice of ascribing unequal weights to models of unequal skill.
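    To make the measurement analogy concrete, the short Python sketch below combines four estimates with unequal, skill-based weights and inflates the variance of the weighted mean when the errors are assumed to be equicorrelated rather than independent. All numbers (values, weights, standard deviations, the correlation) are illustrative placeholders, not drawn from any ensemble.

    ```python
    # Hedged sketch: weighted mean of dependent estimates, compensating for
    # the lack of error independence via an equicorrelated covariance matrix.
    import numpy as np

    x = np.array([14.2, 14.8, 15.1, 13.9])        # individual estimates
    w = np.array([0.4, 0.3, 0.2, 0.1])            # skill-based weights (sum to 1)
    sigma = np.array([0.5, 0.6, 0.7, 0.8])        # individual error standard deviations
    rho = 0.3                                     # assumed common pairwise error correlation

    mean = np.sum(w * x)
    cov = rho * np.outer(sigma, sigma)            # off-diagonal covariances
    np.fill_diagonal(cov, sigma ** 2)             # variances on the diagonal
    var = w @ cov @ w                             # variance of the weighted mean
    print(f"weighted mean = {mean:.2f} +/- {np.sqrt(var):.2f}")
    ```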

  8. Numerical simulation of a battlefield Nd:YAG laser

    NASA Astrophysics Data System (ADS)

    Henriksson, Markus; Sjoqvist, Lars; Uhrwing, Thomas

    2005-11-01

    A numerical model has been developed to identify the critical components and parameters in improving the output beam quality of a flashlamp-pumped Q-switched Nd:YAG laser with a folded Porro-prism resonator and polarization output coupling. The heating of the laser material and the accompanying thermo-optical effects are calculated using the finite element partial differential equations package FEMLAB, allowing arbitrary geometries and time distributions. The laser gain and the cavity are modeled with the physical optics simulation code GLAD, including effects such as the gain profile, thermal lensing and stress-induced birefringence, the Pockels cell rise-time, and component aberrations. The model is intended to optimize the pumping process of an OPO providing radiation to be used for ranging, imaging or optical countermeasures.

  9. Solid rocket booster thermal radiation model. Volume 2: User's manual

    NASA Technical Reports Server (NTRS)

    Lee, A. L.

    1976-01-01

    A user's manual was prepared for the computer program of a solid rocket booster (SRB) thermal radiation model. The following information was included: (1) structure of the program, (2) input information required, (3) examples of input cards and output printout, (4) program characteristics, and (5) program listing.

  10. Stimulation at Desert Peak -modeling with the coupled THM code FEHM

    DOE Data Explorer

    kelkar, sharad

    2013-04-30

    Numerical modeling of the 2011 shear stimulation at the Desert Peak well 27-15. This submission contains the FEHM executable code for a 64-bit PC Windows-7 machine, and the input and output files for the results presented in the included paper from the ARMA-213 meeting.

  11. Robust adaptive controller design for a class of uncertain nonlinear systems using online T-S fuzzy-neural modeling approach.

    PubMed

    Chien, Yi-Hsing; Wang, Wei-Yen; Leu, Yih-Guang; Lee, Tsu-Tian

    2011-04-01

    This paper proposes a novel method of online modeling and control via the Takagi-Sugeno (T-S) fuzzy-neural model for a class of uncertain nonlinear systems with some kinds of outputs. Although studies about adaptive T-S fuzzy-neural controllers have been made on some nonaffine nonlinear systems, little is known about the more complicated uncertain nonlinear systems. Because the nonlinear functions of the systems are uncertain, traditional T-S fuzzy control methods can model and control them only with great difficulty, if at all. Instead of modeling these uncertain functions directly, we propose that a T-S fuzzy-neural model approximates a so-called virtual linearized system (VLS) of the system, which includes modeling errors and external disturbances. We also propose an online identification algorithm for the VLS and put significant emphasis on robust tracking controller design using an adaptive scheme for the uncertain systems. Moreover, the stability of the closed-loop systems is proven by using strictly positive real Lyapunov theory. The proposed overall scheme guarantees that the outputs of the closed-loop systems asymptotically track the desired output trajectories. To illustrate the effectiveness and applicability of the proposed method, simulation results are given in this paper.

  12. The application of Global Sensitivity Analysis to quantify the dominant input factors for hydraulic model simulations

    NASA Astrophysics Data System (ADS)

    Savage, James; Pianosi, Francesca; Bates, Paul; Freer, Jim; Wagener, Thorsten

    2015-04-01

    Predicting flood inundation extents using hydraulic models is subject to a number of critical uncertainties. For a specific event, these uncertainties are known to have a large influence on model outputs and any subsequent analyses made by risk managers. Hydraulic modellers often approach such problems by applying uncertainty analysis techniques such as the Generalised Likelihood Uncertainty Estimation (GLUE) methodology. However, these methods do not allow one to attribute which source of uncertainty has the most influence on the various model outputs that inform flood risk decision making. Another issue facing modellers is the amount of computational resource that is available to spend on modelling flood inundations that are 'fit for purpose' to the modelling objectives. Therefore a balance needs to be struck between computation time, realism and spatial resolution, and effectively characterising the uncertainty spread of predictions (for example from boundary conditions and model parameterisations). However, it is not fully understood how much of an impact each factor has on model performance, for example how much influence changing the spatial resolution of a model has on inundation predictions in comparison to other uncertainties inherent in the modelling process. Furthermore, when resampling fine scale topographic data in the form of a Digital Elevation Model (DEM) to coarser resolutions, there are a number of possible coarser DEMs that can be produced. Deciding which DEM is then chosen to represent the surface elevations in the model could also influence model performance. In this study we model a flood event using the hydraulic model LISFLOOD-FP and apply Sobol' Sensitivity Analysis to estimate which input factor, among the uncertainty in model boundary conditions, uncertain model parameters, the spatial resolution of the DEM and the choice of resampled DEM, have the most influence on a range of model outputs. These outputs include whole domain maximum inundation indicators and flood wave travel time in addition to temporally and spatially variable indicators. This enables us to assess whether the sensitivity of the model to various input factors is stationary in both time and space. Furthermore, competing models are assessed against observations of water depths from a historical flood event. Consequently we are able to determine which of the input factors has the most influence on model performance. Initial findings suggest the sensitivity of the model to different input factors varies depending on the type of model output assessed and at what stage during the flood hydrograph the model output is assessed. We have also found that initial decisions regarding the characterisation of the input factors, for example defining the upper and lower bounds of the parameter sample space, can be significant in influencing the implied sensitivities.

  13. Trade Agreements: Impact on the U.S. Economy

    DTIC Science & Technology

    2009-11-10

    model is consistent with the Ricardian and Heckscher-Ohlin models. An important drawback of the model is that it can estimate only the aggregate... Now known as the Michigan Brown-Deardorff-Stern Model, the Michigan Model of World Production and Trade includes data on 29... economy in the model. Input-output accounts trace the flow of input commodities into the production processes of industries, the flow of intermediate

  14. Agreement Between Institutional Measurements and Treatment Planning System Calculations for Basic Dosimetric Parameters as Measured by the Imaging and Radiation Oncology Core-Houston

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kerns, James R.; Followill, David S.; Imaging and Radiation Oncology Core-Houston, The University of Texas Health Science Center-Houston, Houston, Texas

    Purpose: To compare radiation machine measurement data collected by the Imaging and Radiation Oncology Core at Houston (IROC-H) with institutional treatment planning system (TPS) values, to identify parameters with large differences in agreement; the findings will help institutions focus their efforts to improve the accuracy of their TPS models. Methods and Materials: Between 2000 and 2014, IROC-H visited more than 250 institutions and conducted independent measurements of machine dosimetric data points, including percentage depth dose, output factors, off-axis factors, multileaf collimator small fields, and wedge data. We compared these data with the institutional TPS values for the same points by energy, class, and parameter to identify differences and similarities using criteria involving both the medians and standard deviations for Varian linear accelerators. Distributions of differences between machine measurements and institutional TPS values were generated for basic dosimetric parameters. Results: On average, intensity modulated radiation therapy–style and stereotactic body radiation therapy–style output factors and upper physical wedge output factors were the most problematic. Percentage depth dose, jaw output factors, and enhanced dynamic wedge output factors agreed best between the IROC-H measurements and the TPS values. Although small differences were shown between 2 common TPS systems, neither was superior to the other. Parameter agreement was constant over time from 2000 to 2014. Conclusions: Differences in basic dosimetric parameters between machine measurements and TPS values vary widely depending on the parameter, although agreement does not seem to vary by TPS and has not changed over time. Intensity modulated radiation therapy–style output factors, stereotactic body radiation therapy–style output factors, and upper physical wedge output factors had the largest disagreement and should be carefully modeled to ensure accuracy.

  15. Using model order tests to determine sensory inputs in a motion study

    NASA Technical Reports Server (NTRS)

    Repperger, D. W.; Junker, A. M.

    1977-01-01

    In the study of motion effects on tracking performance, a problem of interest is the determination of what sensory inputs a human uses in controlling his tracking task. In the approach presented here, a simple canonical model (PID, or proportional, integral, derivative structure) is used to model the human's input-output time series. A study of significant changes in the reduction of the output error loss functional is conducted as different permutations of parameters are considered. Since this canonical model includes parameters which are related to inputs to the human (such as the error signal, its derivatives and integration), the study of model order is equivalent to the study of which sensory inputs are being used by the tracker. The parameters which have the greatest effect on significantly reducing the loss function are thereby obtained. In this manner the identification procedure converts the problem of testing for model order into the problem of determining sensory inputs.
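    A compact way to see the idea is to fit candidate models built from different subsets of the error signal, its integral, and its derivative, and then compare the residual loss. In the hedged Python sketch below the synthetic "operator" uses only the error and its derivative, so adding the integral term produces no meaningful loss reduction; all signals and gains are invented for illustration and are not from the motion study.

    ```python
    # Hedged sketch: model-order test by comparing loss reduction across PID-term subsets.
    import numpy as np

    rng = np.random.default_rng(2)
    dt = 0.02
    e = np.cumsum(rng.normal(scale=0.1, size=2000))          # synthetic error signal
    e_int = np.cumsum(e) * dt                                 # integrated error
    e_dot = np.gradient(e, dt)                                # error rate
    # synthetic operator output: uses error and error rate only, by construction
    output = 0.8 * e + 0.3 * e_dot + rng.normal(scale=0.05, size=e.size)

    candidates = {
        "P":   [e],
        "PI":  [e, e_int],
        "PD":  [e, e_dot],
        "PID": [e, e_int, e_dot],
    }
    for name, cols in candidates.items():
        X = np.column_stack(cols)
        coef, *_ = np.linalg.lstsq(X, output, rcond=None)
        loss = np.sum((output - X @ coef) ** 2)
        print(f"{name:4s} loss = {loss:10.3f}")
    # The PD and PID losses are nearly identical, indicating the integral term
    # (one candidate sensory input) is not being used by this synthetic tracker.
    ```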

  16. Projecting climate change impacts on hydrology: the potential role of daily GCM output

    NASA Astrophysics Data System (ADS)

    Maurer, E. P.; Hidalgo, H. G.; Das, T.; Dettinger, M. D.; Cayan, D.

    2008-12-01

    A primary challenge facing resource managers in accommodating climate change is determining the range and uncertainty in regional and local climate projections. This is especially important for assessing changes in extreme events, which will drive many of the more severe impacts of a changed climate. Since global climate models (GCMs) produce output at a spatial scale incompatible with local impact assessment, different techniques have evolved to downscale GCM output so locally important climate features are expressed in the projections. We compared skill and hydrologic projections using two statistical downscaling methods and a distributed hydrology model. The downscaling methods are the constructed analogues (CA) and the bias correction and spatial downscaling (BCSD). CA uses daily GCM output, and can thus capture GCM projections for changing extreme event occurrence, while BCSD uses monthly output and statistically generates historical daily sequences. We evaluate the hydrologic impacts projected using downscaled climate (from the NCEP/NCAR reanalysis as a surrogate GCM) for the late 20th century with both methods, comparing skill in projecting soil moisture, snow pack, and streamflow at key locations in the Western United States. We include an assessment of a new method for correcting for GCM biases in a hybrid method combining the most important characteristics of both methods.
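    As a minimal sketch of the empirical quantile-mapping step that bias-correction methods of the BCSD type rely on, the Python snippet below maps each GCM value onto the observed climatology through the two empirical distributions. The gamma-distributed arrays are synthetic stand-ins for observed and simulated precipitation, and the function is a bare-bones illustration, not the operational BCSD code.

    ```python
    # Hedged sketch: empirical quantile mapping for bias correction.
    import numpy as np

    rng = np.random.default_rng(3)
    obs = rng.gamma(shape=2.0, scale=3.0, size=5000)       # "observed" monthly precip
    gcm_hist = rng.gamma(shape=2.0, scale=4.0, size=5000)  # biased GCM historical run
    gcm_fut = rng.gamma(shape=2.2, scale=4.0, size=5000)   # GCM future projection

    def quantile_map(x, model_ref, obs_ref):
        """Replace each value in x by the observed value at the same quantile."""
        q = np.searchsorted(np.sort(model_ref), x) / model_ref.size
        q = np.clip(q, 0.0, 1.0)
        return np.quantile(obs_ref, q)

    corrected = quantile_map(gcm_fut, gcm_hist, obs)
    print(f"raw future mean {gcm_fut.mean():.2f}, corrected {corrected.mean():.2f}, obs {obs.mean():.2f}")
    ```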

  17. NASA AVOSS Fast-Time Models for Aircraft Wake Prediction: User's Guide (APA3.8 and TDP2.1)

    NASA Technical Reports Server (NTRS)

    Ahmad, Nash'at N.; VanValkenburg, Randal L.; Pruis, Matthew J.; Limon Duparcmeur, Fanny M.

    2016-01-01

    NASA's current distribution of fast-time wake vortex decay and transport models includes APA (Version 3.8) and TDP (Version 2.1). This User's Guide provides detailed information on the model inputs, file formats, and model outputs. A brief description of the Memphis 1995, Dallas/Fort Worth 1997, and the Denver 2003 wake vortex datasets is given along with the evaluation of models. A detailed bibliography is provided which includes publications on model development, wake field experiment descriptions, and applications of the fast-time wake vortex models.

  18. Scientific Benefits of Space Science Models Archiving at Community Coordinated Modeling Center

    NASA Technical Reports Server (NTRS)

    Kuznetsova, Maria M.; Berrios, David; Chulaki, Anna; Hesse, Michael; MacNeice, Peter J.; Maddox, Marlo M.; Pulkkinen, Antti; Rastaetter, Lutz; Taktakishvili, Aleksandre

    2009-01-01

    The Community Coordinated Modeling Center (CCMC) hosts a set of state-of-the-art space science models ranging from the solar atmosphere to the Earth's upper atmosphere. CCMC provides a web-based Run-on-Request system, by which interested scientists can request simulations for a broad range of space science problems. To allow the models to be driven by data relevant to particular events, CCMC developed a tool that automatically downloads data from data archives and transforms them to the required formats. CCMC also provides a tailored web-based visualization interface for the model output, as well as the capability to download the simulation output in a portable format. CCMC offers a variety of visualization and output analysis tools to aid scientists in the interpretation of simulation results. In the eight years since the Run-on-Request system became available, CCMC has archived the results of almost 3000 runs covering significant space weather events and time intervals of interest identified by the community. The simulation results archived at CCMC also include a library of general purpose runs with modeled conditions that are used for education and research. Archiving the results of simulations performed in support of several Modeling Challenges helps to evaluate the progress in space weather modeling over time. We will highlight the scientific benefits of the CCMC space science model archive and discuss plans for further development of advanced methods to interact with simulation results.

  19. Web-based emergency response exercise management systems and methods thereof

    DOEpatents

    Goforth, John W.; Mercer, Michael B.; Heath, Zach; Yang, Lynn I.

    2014-09-09

    According to one embodiment, a method for simulating portions of an emergency response exercise includes generating situational awareness outputs associated with a simulated emergency and sending the situational awareness outputs to a plurality of output devices. Also, the method includes outputting to a user device a plurality of decisions associated with the situational awareness outputs at a decision point, receiving a selection of one of the decisions from the user device, generating new situational awareness outputs based on the selected decision, and repeating the sending, outputting and receiving steps based on the new situational awareness outputs. Other methods, systems, and computer program products are included according to other embodiments of the invention.

  20. BOREAS RSS-8 BIOME-BGC Model Simulations at Tower Flux Sites in 1994

    NASA Technical Reports Server (NTRS)

    Hall, Forrest G. (Editor); Nickeson, Jaime (Editor); Kimball, John

    2000-01-01

    BIOME-BGC is a general ecosystem process model designed to simulate biogeochemical and hydrologic processes across multiple scales (Running and Hunt, 1993). In this investigation, BIOME-BGC was used to estimate daily water and carbon budgets for the BOREAS tower flux sites for 1994. Carbon variables estimated by the model include gross primary production (i.e., net photosynthesis), maintenance and heterotrophic respiration, net primary production, and net ecosystem carbon exchange. Hydrologic variables estimated by the model include snowcover, evaporation, transpiration, evapotranspiration, soil moisture, and outflow. The information provided by the investigation includes input initialization and model output files for various sites in tabular ASCII format.

  1. A mixed-unit input-output model for environmental life-cycle assessment and material flow analysis.

    PubMed

    Hawkins, Troy; Hendrickson, Chris; Higgins, Cortney; Matthews, H Scott; Suh, Sangwon

    2007-02-01

    Materials flow analysis models have traditionally been used to track the production, use, and consumption of materials. Economic input-output modeling has been used for environmental systems analysis, with a primary benefit being the capability to estimate direct and indirect economic and environmental impacts across the entire supply chain of production in an economy. We combine these two types of models to create a mixed-unit input-output model that is able to better track economic transactions and material flows throughout the economy associated with changes in production. A 13 by 13 economic input-output direct requirements matrix developed by the U.S. Bureau of Economic Analysis is augmented with material flow data derived from those published by the U.S. Geological Survey in the formulation of illustrative mixed-unit input-output models for lead and cadmium. The resulting model provides the capabilities of both material flow and input-output models, with detailed material tracking through entire supply chains in response to any monetary or material demand. Examples of these models are provided along with a discussion of uncertainty and extensions to these models.

  2. Modeling of a multileaf collimator

    NASA Astrophysics Data System (ADS)

    Kim, Siyong

    A comprehensive physics model of a multileaf collimator (MLC) field for treatment planning was developed. Specifically, an MLC user interface module that includes a geometric optimization tool and a general method of in-air output factor calculation were developed. An automatic tool for optimization of MLC conformation is needed to realize the potential benefits of MLC. It is also necessary that a radiation therapy treatment planning (RTTP) system is capable of modeling the MLC completely. An MLC geometric optimization and user interface module was developed. The planning time has been reduced significantly by incorporating the MLC module into the main RTTP system, Radiation Oncology Computer System (ROCS). The dosimetric parameter that has the most profound effect on the accuracy of the dose delivered with an MLC is the change in the in-air output factor that occurs with field shaping. It has been reported that the conventional method of calculating an in-air output factor cannot be used accurately for MLC-shaped fields. Therefore, it is necessary to develop algorithms that allow accurate calculation of the in-air output factor. A generalized solution for in-air output factor calculation was developed. Three major contributors of scatter to the in-air output (flattening filter, wedge, and tertiary collimator) were considered separately. By virtue of a field mapping method, in which a source plane field determined by the detector's eye view is mapped into a detector plane field, no additional dosimetric data acquisition other than the standard data set for a range of square fields is required for the calculation of head scatter. Comparisons of in-air output factors between calculated and measured values show good agreement for both open and wedge fields. For rectangular fields, a simple equivalent square formula was derived based on the configuration of a linear accelerator treatment head. This method predicts in-air output to within 1% accuracy. A two-effective-source algorithm was developed to account for the effect of source-to-detector distance on in-air output for wedge fields. Two effective sources, one for head scatter and the other for wedge scatter, were dealt with independently. Calculations provided less than 1% difference in in-air output factors from measurements. This approach offers the best comprehensive accuracy in radiation delivery with field shapes defined using the MLC. This generalized model works equally well with fields shaped by any type of tertiary collimator and has the necessary framework to extend its application to intensity modulated radiation therapy.

  3. Rocketdyne/Westinghouse nuclear thermal rocket engine modeling

    NASA Technical Reports Server (NTRS)

    Glass, James F.

    1993-01-01

    The topics are presented in viewgraph form and include the following: systems approach needed for nuclear thermal rocket (NTR) design optimization; generic NTR engine power balance codes; rocketdyne nuclear thermal system code; software capabilities; steady state model; NTR engine optimizer code-logic; reactor power calculation logic; sample multi-component configuration; NTR design code output; generic NTR code at Rocketdyne; Rocketdyne NTR model; and nuclear thermal rocket modeling directions.

  4. A note on scrap in the 1992 U.S. input-output tables

    USGS Publications Warehouse

    Swisko, George M.

    2000-01-01

    Introduction A key concern of industrial ecology and life cycle analysis is the disposal and recycling of scrap. One might conclude that the U.S. input-output tables are appropriate tools for analyzing scrap flows. Duchin, for instance, has suggested using input-output analysis for industrial ecology, indicating that input-output economics can trace the stocks and flows of energy and other materials from extraction through production and consumption to recycling or disposal. Lave and others use input-output tables to design life cycle assessment models for studying product design, materials use, and recycling strategies, even with the knowledge that these tables suffer from a lack of comprehensive and detailed data that may never be resolved. Although input-output tables can offer general guidance about the interdependence of economic and environmental processes, data reporting by industry and the economic concepts underlying these tables pose problems for rigorous material flow examinations. This is especially true for analyzing the output of scrap and scrap flows in the United States and estimating the amount of scrap that can be recycled. To show how data reporting has affected the values of scrap in recent input-output tables, this paper focuses on metal scrap generated in manufacturing. The paper also briefly discusses scrap that is not included in the input-output tables and some economic concepts that limit the analysis of scrap flows.

  5. Evaluation of input output efficiency of oil field considering undesirable output —A case study of sandstone reservoir in Xinjiang oilfield

    NASA Astrophysics Data System (ADS)

    Zhang, Shuying; Wu, Xuquan; Li, Deshan; Xu, Yadong; Song, Shulin

    2017-06-01

    Based on input and output data for a sandstone reservoir in the Xinjiang oilfield, the SBM-Undesirable model is used to study the technical efficiency of each block. The results show that using the SBM-Undesirable model to evaluate efficiency avoids the defects caused by the radial and angular assumptions of the traditional DEA model and improves the accuracy of the efficiency evaluation. By analyzing the projection of the oil blocks, we find that each block exhibits the negative external effects of input redundancy, insufficient desirable output, and undesirable output, and that production efficiency differs considerably among blocks. The way to improve the input-output efficiency of the oilfield is to optimize the allocation of resources, reduce the undesirable output, and increase the expected output.

  6. When causality does not imply correlation: more spadework at the foundations of scientific psychology.

    PubMed

    Marken, Richard S; Horth, Brittany

    2011-06-01

    Experimental research in psychology is based on an open-loop causal model which assumes that sensory input causes behavioral output. This model was tested in a tracking experiment where participants were asked to control a cursor, keeping it aligned with a target by moving a mouse to compensate for disturbances of differing difficulty. Since cursor movements (inputs) are the only observable cause of mouse movements (outputs), the open-loop model predicts that there will be a correlation between input and output that increases as tracking performance improves. In fact, the correlation between sensory input and motor output is very low regardless of the quality of tracking performance; causality, in terms of the effect of input on output, does not seem to imply correlation in this situation. This surprising result can be explained by a closed-loop model which assumes that input is causing output while output is causing input.
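    The closed-loop point is easy to reproduce numerically. The Python sketch below simulates a proportional controller keeping a cursor on target against a random-walk disturbance; the gain, noise level, and time step are arbitrary choices, not parameters from the experiment. The correlation between the cursor (sensory input) and the mouse position (motor output) comes out much weaker than the correlation between the disturbance and the output, even though tracking is good.

    ```python
    # Hedged sketch: low input-output correlation in a well-performing closed loop.
    import numpy as np

    rng = np.random.default_rng(4)
    n, dt, gain = 5000, 0.01, 8.0
    disturbance = np.cumsum(rng.normal(scale=0.05, size=n))  # slowly drifting random walk
    cursor = np.zeros(n)      # what the "human" sees: cursor minus target (target = 0)
    mouse = np.zeros(n)       # what the "human" does: mouse (handle) position

    for t in range(1, n):
        cursor[t] = disturbance[t] + mouse[t - 1]            # cursor = disturbance + handle
        mouse[t] = mouse[t - 1] - gain * cursor[t] * dt      # proportional correction

    corr_io = np.corrcoef(cursor, mouse)[0, 1]               # input vs output
    corr_dm = np.corrcoef(disturbance, mouse)[0, 1]          # disturbance vs output
    rmse = np.sqrt(np.mean(cursor ** 2))
    print(f"tracking RMSE = {rmse:.3f}")
    print(f"corr(cursor, mouse) = {corr_io:+.2f}   corr(disturbance, mouse) = {corr_dm:+.2f}")
    ```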

  7. Two-Speed Gearbox Dynamic Simulation Predictions and Test Validation

    NASA Technical Reports Server (NTRS)

    Lewicki, David G.; DeSmidt, Hans; Smith, Edward C.; Bauman, Steven W.

    2010-01-01

    Dynamic simulations and experimental validation tests were performed on a two-stage, two-speed gearbox as part of the drive system research activities of the NASA Fundamental Aeronautics Subsonics Rotary Wing Project. The gearbox was driven by two electromagnetic motors and had two electromagnetic, multi-disk clutches to control output speed. A dynamic model of the system was created which included a direct current electric motor with proportional-integral-derivative (PID) speed control, a two-speed gearbox with dual electromagnetically actuated clutches, and an eddy current dynamometer. A six degree-of-freedom model of the gearbox accounted for the system torsional dynamics and included gear, clutch, shaft, and load inertias as well as shaft flexibilities and a dry clutch stick-slip friction model. Experimental validation tests were performed on the gearbox in the NASA Glenn gear noise test facility. Gearbox output speed and torque as well as drive motor speed and current were compared to those from the analytical predictions. The experiments correlate very well with the predictions, thus validating the dynamic simulation methodologies.

  8. Multi-model analysis of terrestrial carbon cycles in Japan: limitations and implications of model calibration using eddy flux observations

    NASA Astrophysics Data System (ADS)

    Ichii, K.; Suzuki, T.; Kato, T.; Ito, A.; Hajima, T.; Ueyama, M.; Sasai, T.; Hirata, R.; Saigusa, N.; Ohtani, Y.; Takagi, K.

    2010-07-01

    Terrestrial biosphere models show large differences when simulating carbon and water cycles, and reducing these differences is a priority for developing more accurate estimates of the condition of terrestrial ecosystems and future climate change. To reduce uncertainties and improve the understanding of their carbon budgets, we investigated the utility of the eddy flux datasets to improve model simulations and reduce variabilities among multi-model outputs of terrestrial biosphere models in Japan. Using 9 terrestrial biosphere models (Support Vector Machine - based regressions, TOPS, CASA, VISIT, Biome-BGC, DAYCENT, SEIB, LPJ, and TRIFFID), we conducted two simulations: (1) point simulations at four eddy flux sites in Japan and (2) spatial simulations for Japan with a default model (based on original settings) and a modified model (based on model parameter tuning using eddy flux data). Generally, models using default model settings showed large deviations in model outputs from observation with large model-by-model variability. However, after we calibrated the model parameters using eddy flux data (GPP, RE and NEP), most models successfully simulated seasonal variations in the carbon cycle, with less variability among models. We also found that interannual variations in the carbon cycle are mostly consistent among models and observations. Spatial analysis also showed a large reduction in the variability among model outputs. This study demonstrated that careful validation and calibration of models with available eddy flux data reduced model-by-model differences. Yet, site history, analysis of model structure changes, and more objective procedure of model calibration should be included in the further analysis.

  9. Determining A Purely Symbolic Transfer Function from Symbol Streams: Theory and Algorithms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Griffin, Christopher H

    Transfer function modeling is a standard technique in classical Linear Time Invariant and Statistical Process Control. The work of Box and Jenkins was seminal in developing methods for identifying parameters associated with classical (r, s, k) transfer functions. Discrete event systems are often used for modeling hybrid control structures and high-level decision problems. Examples include discrete time, discrete strategy repeated games. For these games, a discrete transfer function in the form of an accurate hidden Markov model of input-output relations could be used to derive optimal response strategies. In this paper, we develop an algorithm for creating probabilistic Mealy machines that act as transfer function models for discrete event dynamic systems (DEDS). Our models are defined by three parameters, (l1, l2, k), just as the Box-Jenkins transfer function models. Here l1 is the maximal input history length to consider, l2 is the maximal output history length to consider, and k is the response lag. Using related results, we show that our Mealy machine transfer functions are optimal in the sense that they maximize the mutual information between the current known state of the DEDS and the next observed input/output pair.
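    To make the roles of (l1, l2, k) concrete, the sketch below uses a naive frequency-counting estimate of the conditional output distribution given the recent input and output histories of a symbol stream. This is only an illustrative stand-in, not the paper's algorithm or its optimality construction.

    ```python
    # Hedged sketch: estimating a probabilistic symbolic input/output map by counting.
    from collections import Counter, defaultdict

    def fit_symbolic_tf(inputs, outputs, l1=2, l2=1, k=1):
        """Estimate P(outputs[t] | last l1 inputs ending at t-k, last l2 outputs)."""
        counts = defaultdict(Counter)
        start = max(l1 + k - 1, l2)
        for t in range(start, len(outputs)):
            in_hist = tuple(inputs[t - k - l1 + 1: t - k + 1])
            out_hist = tuple(outputs[t - l2: t])
            counts[(in_hist, out_hist)][outputs[t]] += 1
        return {ctx: {sym: n / sum(c.values()) for sym, n in c.items()}
                for ctx, c in counts.items()}

    # Toy stream: the output simply echoes the input one step later.
    inputs = list("abbabababbababab")
    outputs = ["?"] + inputs[:-1]
    model = fit_symbolic_tf(inputs, outputs, l1=1, l2=0, k=1)
    for ctx, dist in model.items():
        print(ctx, dist)
    ```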

  10. Wind Farm Flow Modeling using an Input-Output Reduced-Order Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Annoni, Jennifer; Gebraad, Pieter; Seiler, Peter

    Wind turbines in a wind farm operate individually to maximize their own power regardless of the impact of aerodynamic interactions on neighboring turbines. There is the potential to increase power and reduce overall structural loads by properly coordinating turbines. To perform control design and analysis, a model needs to be of low computational cost but retain the necessary dynamics seen in high-fidelity models. The objective of this work is to obtain a reduced-order model that represents the full-order flow computed using a high-fidelity model. A variety of methods, including proper orthogonal decomposition and dynamic mode decomposition, can be used to extract the dominant flow structures and obtain a reduced-order model. In this paper, we combine proper orthogonal decomposition with a system identification technique to produce an input-output reduced-order model. This technique is used to construct a reduced-order model of the flow within a two-turbine array computed using a large-eddy simulation.
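    A sketch of the first half of such an approach is shown below: extracting dominant flow structures from snapshot data with POD (via the SVD) and projecting the flow onto a handful of modes. A subsequent system-identification step would then fit a low-order input-output model to the modal coefficients; only the POD step is shown here, on synthetic snapshots rather than LES data.

    ```python
    # Hedged sketch: proper orthogonal decomposition of a snapshot matrix via the SVD.
    import numpy as np

    rng = np.random.default_rng(5)
    n_points, n_snaps = 2000, 200
    # Synthetic snapshot matrix: a few coherent structures plus small-amplitude noise.
    modes_true = rng.normal(size=(n_points, 3))
    amps = rng.normal(size=(3, n_snaps))
    snapshots = modes_true @ amps + 0.1 * rng.normal(size=(n_points, n_snaps))

    mean_flow = snapshots.mean(axis=1, keepdims=True)
    U, s, Vt = np.linalg.svd(snapshots - mean_flow, full_matrices=False)

    energy = np.cumsum(s ** 2) / np.sum(s ** 2)
    r = int(np.searchsorted(energy, 0.99)) + 1      # modes capturing 99% of the variance
    pod_modes = U[:, :r]                            # dominant spatial structures
    coeffs = pod_modes.T @ (snapshots - mean_flow)  # reduced-order state trajectories
    print(f"retained {r} POD modes; coefficient matrix shape {coeffs.shape}")
    ```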

  11. Land Surface Process and Air Quality Research and Applications at MSFC

    NASA Technical Reports Server (NTRS)

    Quattrochi, Dale; Khan, Maudood

    2007-01-01

    This viewgraph presentation provides an overview of land surface process and air quality research at MSFC, including atmospheric modeling and ongoing research whose objective is to undertake a comprehensive spatiotemporal analysis of the effects of accurate land surface characterization on atmospheric modeling results, and public health applications. Land use maps; plots of 10-meter air temperature, surface wind, PBL mean difference heights, NOx, ozone, and O3+NO2; and spatial growth model outputs are included. Emissions and general air quality modeling are also discussed.

  12. RESULTS FROM KINEROS STREAM CHANNEL ELEMENTS MODEL OUTPUT THROUGH AGWA DIFFERENCING 1973 AND 1997 NALC LANDCOVER DATA

    EPA Science Inventory

    Results from differencing KINEROS model output through AGWA for Sierra Vista subwatershed. Percent change between 1973 and 1997 is presented for all KINEROS output values (and some derived from the KINEROS output by AGWA) for the stream channels.

  13. MARSTHERM: A Web-based System Providing Thermophysical Analysis Tools for Mars Research

    NASA Astrophysics Data System (ADS)

    Putzig, N. E.; Barratt, E. M.; Mellon, M. T.; Michaels, T. I.

    2013-12-01

    We introduce MARSTHERM, a web-based system that will allow researchers access to a standard numerical thermal model of the Martian near-surface and atmosphere. In addition, the system will provide tools for the derivation, mapping, and analysis of apparent thermal inertia from temperature observations by the Mars Global Surveyor Thermal Emission Spectrometer (TES) and the Mars Odyssey Thermal Emission Imaging System (THEMIS). Adjustable parameters for the thermal model include thermal inertia, albedo, surface pressure, surface emissivity, atmospheric dust opacity, latitude, surface slope angle and azimuth, season (solar longitude), and time steps for calculations and output. The model computes diurnal surface and brightness temperatures for either a single day or a full Mars year. Output options include text files and plots of seasonal and diurnal surface, brightness, and atmospheric temperatures. The tools for the derivation and mapping of apparent thermal inertia from spacecraft data are project-based, wherein the user provides an area of interest (AOI) by specifying latitude and longitude ranges. The system will then extract results within the AOI from prior global mapping of elevation (from the Mars Orbiter Laser Altimeter, for calculating surface pressure), TES annual albedo, and TES seasonal and annual-mean 2AM and 2PM apparent thermal inertia (Putzig and Mellon, 2007, Icarus 191, 68-94). In addition, a history of TES dust opacity within the AOI is computed. For each project, users may then provide a list of THEMIS images to process for apparent thermal inertia, optionally overriding the TES-derived dust opacity with a fixed value. Output from the THEMIS derivation process includes thumbnail and context images, GeoTIFF raster data, and HDF5 files containing arrays of input and output data (radiance, brightness temperature, apparent thermal inertia, elevation, quality flag, latitude, and longitude) and ancillary information. As a demonstration of capabilities, we will present results from a thermophysical study of Gale Crater (Barratt and Putzig, 2013, EPSC abstract 613), for which TES and THEMIS mapping has been carried out during system development. Public access to the MARSTHERM system will be provided in conjunction with the 2013 AGU Fall Meeting and will feature the numerical thermal model and thermal-inertia derivation algorithm developed by Mellon et al. (2000, Icarus 148, 437-455) as modified by Putzig and Mellon (2007, Icarus 191, 68-94). Updates to the thermal model and derivation algorithm that include a more sophisticated representation of the atmosphere and a layered subsurface are presently in development, and these will be incorporated into the system when they are available. Other planned enhancements include tools for modeling temperatures from horizontal mixtures of materials and slope facets, for comparing heterogeneity modeling results to TES and THEMIS results, and for mosaicking THEMIS images.

  14. Predicting the Magnetic Properties of ICMEs: A Pragmatic View

    NASA Astrophysics Data System (ADS)

    Riley, P.; Linker, J.; Ben-Nun, M.; Torok, T.; Ulrich, R. K.; Russell, C. T.; Lai, H.; de Koning, C. A.; Pizzo, V. J.; Liu, Y.; Hoeksema, J. T.

    2017-12-01

    The southward component of the interplanetary magnetic field plays a crucial role in being able to successfully predict space weather phenomena. Yet, thus far, it has proven extremely difficult to forecast with any degree of accuracy. In this presentation, we describe an empirically-based modeling framework for estimating Bz values during the passage of interplanetary coronal mass ejections (ICMEs). The model includes: (1) an empirically-based estimate of the magnetic properties of the flux rope in the low corona (including helicity and field strength); (2) an empirically-based estimate of the dynamic properties of the flux rope in the high corona (including direction, speed, and mass); and (3) a physics-based estimate of the evolution of the flux rope during its passage to 1 AU driven by the output from (1) and (2). We compare model output with observations for a selection of events to estimate the accuracy of this approach. Importantly, we pay specific attention to the uncertainties introduced by the components within the framework, separating intrinsic limitations from those that can be improved upon, either by better observations or more sophisticated modeling. Our analysis suggests that current observations/modeling are insufficient for this empirically-based framework to provide reliable and actionable prediction of the magnetic properties of ICMEs. We suggest several paths that may lead to better forecasts.

  15. A dynamic model using monitoring data and watershed characteristics to project fish tissue mercury concentrations in stream systems.

    PubMed

    Chan, Caroline; Heinbokel, John F; Myers, John A; Jacobs, Robert R

    2012-10-01

    A complex interplay of factors determines the degree of bioaccumulation of Hg in fish in any particular basin. Although certain watershed characteristics have been associated with higher or lower bioaccumulation rates, the relationships between these characteristics are poorly understood. To add to this understanding, a dynamic model was built to examine these relationships in stream systems. The model follows Hg from the water column, through microbial conversion and subsequent concentration, through the food web to piscivorous fish. The model was calibrated to 7 basins in Kentucky and further evaluated by comparing output to 7 sites in, or proximal to, the Ohio River Valley, an underrepresented region in the bioaccumulation literature. Water quality and basin characteristics were inputs into the model, with Hg tissue concentrations in generic trophic level 3, 3.5, and 4 fish as the output. Regulatory and monitoring data were used to calibrate and evaluate the model. Mean average prediction error for Kentucky sites was 26%, whereas mean error for evaluation sites was 51%. Variability within natural systems can be substantial and was quantified for fish tissue by analysis of the US Geological Survey National Fish Database. This analysis pointed to the need for more systematic sampling of fish tissue. Analysis of model output indicated that parameters that had the greatest impact on bioaccumulation influenced the system at several points. These parameters included forested and wetlands coverage and nutrient levels. Factors that were less sensitive modified the system at only 1 point and included the unfiltered total Hg input and the portion of the basin that is developed. Copyright © 2012 SETAC.

  16. TWINTAN: A program for transonic wall interference assessment in two-dimensional wind tunnels

    NASA Technical Reports Server (NTRS)

    Kemp, W. B., Jr.

    1980-01-01

    A method for assessing the wall interference in transonic two-dimensional wind tunnel tests was developed and implemented in a computer program. The method involves three successive solutions of the transonic small disturbance potential equation to define the wind tunnel flow, the perturbation attributable to the model, and the equivalent free-air flow around the model. Input includes pressure distributions on the model and along the top and bottom tunnel walls, which are used as boundary conditions for the wind tunnel flow. The wall-induced perturbation field is determined as the difference between the perturbation in the tunnel flow solution and the perturbation attributable to the model. The methodology used in the program is described, and detailed descriptions of the computer program input and output are presented. Input and output for a sample case are given.

  17. Modeling the Afferent Dynamics of the Baroreflex Control System

    PubMed Central

    Mahdi, Adam; Sturdy, Jacob; Ottesen, Johnny T.; Olufsen, Mette S.

    2013-01-01

    In this study we develop a modeling framework for predicting baroreceptor firing rate as a function of blood pressure. We test models within this framework both quantitatively and qualitatively using data from rats. The models describe three components: arterial wall deformation, stimulation of mechanoreceptors located in the BR nerve-endings, and modulation of the action potential frequency. The three sub-systems are modeled individually following well-established biological principles. The first submodel, predicting arterial wall deformation, uses blood pressure as an input and outputs circumferential strain. The mechanoreceptor stimulation model, uses circumferential strain as an input, predicting receptor deformation as an output. Finally, the neural model takes receptor deformation as an input predicting the BR firing rate as an output. Our results show that nonlinear dependence of firing rate on pressure can be accounted for by taking into account the nonlinear elastic properties of the artery wall. This was observed when testing the models using multiple experiments with a single set of parameters. We find that to model the response to a square pressure stimulus, giving rise to post-excitatory depression, it is necessary to include an integrate-and-fire model, which allows the firing rate to cease when the stimulus falls below a given threshold. We show that our modeling framework in combination with sensitivity analysis and parameter estimation can be used to test and compare models. Finally, we demonstrate that our preferred model can exhibit all known dynamics and that it is advantageous to combine qualitative and quantitative analysis methods. PMID:24348231
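    The role of the integrate-and-fire stage can be conveyed with a minimal leaky integrate-and-fire sketch in which firing ceases once the (stand-in) receptor deformation falls below the level needed to reach threshold; all parameter values are hypothetical, and the wall-deformation and mechanoreceptor submodels are not represented.

```python
import numpy as np

# Minimal leaky integrate-and-fire sketch: firing stops when the stimulus is too
# weak to drive the membrane-like variable to threshold. All parameters hypothetical.
def integrate_and_fire(stimulus, dt=1e-3, tau=0.02, threshold=1.0, gain=120.0):
    v, spikes = 0.0, []
    for i, s in enumerate(stimulus):
        v += dt * (-v / tau + gain * s)   # leaky integration of receptor deformation
        if v >= threshold:
            spikes.append(i * dt)
            v = 0.0                       # reset after a spike
    return spikes

t = np.arange(0.0, 2.0, 1e-3)
stim = np.where((t > 0.5) & (t < 1.2), 0.5, 0.05)   # square "pressure" stimulus
spikes = integrate_and_fire(stim)
print(f"{len(spikes)} spikes, last at {spikes[-1]:.3f} s (firing ceases after the step-down)")
```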

  18. Forecasting the Effects of Higher Education Appropriations on Local Economies. AIR 1986 Annual Forum Paper.

    ERIC Educational Resources Information Center

    Prewitt, Sidney A.; And Others

    An economic model of the effects of colleges on their communities was developed. The Texas Input-Output Model was modified into a higher education budgetary model. Included were the positive benefits of tax savings and estimates of the net effect on various communities in which state-supported colleges and universities are located. The output…

  19. A New Formulation of the Filter-Error Method for Aerodynamic Parameter Estimation in Turbulence

    NASA Technical Reports Server (NTRS)

    Grauer, Jared A.; Morelli, Eugene A.

    2015-01-01

    A new formulation of the filter-error method for estimating aerodynamic parameters in nonlinear aircraft dynamic models during turbulence was developed and demonstrated. The approach uses an estimate of the measurement noise covariance to identify the model parameters, their uncertainties, and the process noise covariance, in a relaxation method analogous to the output-error method. Prior information on the model parameters and uncertainties can be supplied, and a post-estimation correction to the uncertainty was included to account for colored residuals not considered in the theory. No tuning parameters, needing adjustment by the analyst, are used in the estimation. The method was demonstrated in simulation using the NASA Generic Transport Model, then applied to flight data from the subscale T-2 jet-engine transport aircraft. Modeling results in different levels of turbulence were compared with results from time-domain output error and frequency-domain equation error methods to demonstrate the effectiveness of the approach.

  20. Macroeconomics and oil-supply disruptions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hubbard, R.G.; Fry, R.C. Jr.

    1981-04-01

    Energy-economy interactions and domestic linkages have been used in a system of models. Domestic economic aggregates are linked with a model of the world oil market by a core macroeconomic model with real and financial sectors. The model can be used to examine the policy ramifications of various short-run scenarios. Demand factors are not taken as exogenous to the world oil market, nor are oil prices taken as exogenous to the US economy. Simulations of the model have generated endogenous cycles in the world oil market, which then affect the US economy primarily through output and inflation channels. Policy simulation was centered around the short-run imposition of a disruption tariff. The disruption tariff exhibited at least some of the desirable features noted by its proponents, though it did not function as a shield against the short-run output loss forced by the disruption. One might also simulate the rebate of tariff revenues as a reduction in the social security payroll tax. Other possible simulations include the use of any of the fiscal and monetary instruments included in the model. The effectiveness of these other policy instruments will be examined in a later paper.

  1. Rainfall or parameter uncertainty? The power of sensitivity analysis on grouped factors

    NASA Astrophysics Data System (ADS)

    Nossent, Jiri; Pereira, Fernando; Bauwens, Willy

    2017-04-01

    Hydrological models are typically used to study and represent (a part of) the hydrological cycle. In general, the output of these models mostly depends on their input rainfall and parameter values. Both model parameters and input precipitation, however, are characterized by uncertainties and therefore lead to uncertainty on the model output. Sensitivity analysis (SA) makes it possible to assess and compare the importance of the different factors for this output uncertainty. To this end, the rainfall uncertainty can be incorporated in the SA by representing it as a probabilistic multiplier. Such a multiplier can be defined for the entire time series, or several of these factors can be determined for every recorded rainfall pulse or for hydrologically independent storm events. As a consequence, the number of parameters included in the SA related to the rainfall uncertainty can be (much) lower or (much) higher than the number of model parameters. Although such analyses can yield interesting results, it remains challenging to determine which type of uncertainty will affect the model output most, due to the different weight both types will have within the SA. In this study, we apply the variance-based Sobol' sensitivity analysis method to two different hydrological simulators (NAM and HyMod) for four diverse watersheds. Besides the different number of model parameters (NAM: 11 parameters; HyMod: 5 parameters), the setup of our combined sensitivity and uncertainty analysis is also varied by defining a variety of scenarios including diverse numbers of rainfall multipliers. To overcome the issue of the different number of factors and, thus, the different weights of the two types of uncertainty, we build on one of the advantageous properties of the Sobol' SA, i.e. treating grouped parameters as a single parameter. The latter results in a setup with a single factor for each uncertainty type and allows for a straightforward comparison of their importance. In general, the results show a clear influence of the weights in the different SA scenarios. However, working with grouped factors resolves this issue and leads to clear importance results.
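    A toy pick-freeze estimate of first-order Sobol' indices for grouped factors is sketched below, with an analytic stand-in for the hydrological model (not NAM or HyMod) and a single rainfall-multiplier group versus a parameter group; it only illustrates how grouping yields one index per uncertainty type.

```python
import numpy as np

# Toy pick-freeze (Saltelli-style) estimate of first-order Sobol' indices for GROUPED
# factors: one group for a rainfall multiplier, one for the model parameters.
rng = np.random.default_rng(42)

def model(x):
    mult, p1, p2 = x[:, 0], x[:, 1], x[:, 2]
    return mult * (2.0 * p1 + 0.5 * p2)          # hypothetical runoff response

N = 100_000
lo_b, hi_b = [0.5, 0.0, 0.0], [1.5, 1.0, 1.0]    # multiplier and parameter ranges
A = rng.uniform(lo_b, hi_b, size=(N, 3))
B = rng.uniform(lo_b, hi_b, size=(N, 3))
yA, yB = model(A), model(B)
var_y = np.var(np.concatenate([yA, yB]))

groups = {"rainfall multiplier": [0], "model parameters": [1, 2]}
for name, cols in groups.items():
    AB = A.copy()
    AB[:, cols] = B[:, cols]                     # resample only this group's factors
    S = np.mean(yB * (model(AB) - yA)) / var_y   # Saltelli (2010) first-order estimator
    print(f"first-order index of group '{name}': {S:.3f}")
```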

  2. Multi-Fidelity Uncertainty Propagation for Cardiovascular Modeling

    NASA Astrophysics Data System (ADS)

    Fleeter, Casey; Geraci, Gianluca; Schiavazzi, Daniele; Kahn, Andrew; Marsden, Alison

    2017-11-01

    Hemodynamic models are successfully employed in the diagnosis and treatment of cardiovascular disease with increasing frequency. However, their widespread adoption is hindered by our inability to account for uncertainty stemming from multiple sources, including boundary conditions, vessel material properties, and model geometry. In this study, we propose a stochastic framework which leverages three cardiovascular model fidelities: 3D, 1D and 0D models. 3D models are generated from patient-specific medical imaging (CT and MRI) of aortic and coronary anatomies using the SimVascular open-source platform, with fluid structure interaction simulations and Windkessel boundary conditions. 1D models consist of a simplified geometry automatically extracted from the 3D model, while 0D models are obtained from equivalent circuit representations of blood flow in deformable vessels. Multi-level and multi-fidelity estimators from Sandia's open-source DAKOTA toolkit are leveraged to reduce the variance in our estimated output quantities of interest while maintaining a reasonable computational cost. The performance of these estimators in terms of computational cost reductions is investigated for a variety of output quantities of interest, including global and local hemodynamic indicators. Sandia National Labs is a multimission laboratory managed and operated by NTESS, LLC, for the U.S. DOE under contract DE-NA0003525. Funding for this project provided by NIH-NIBIB R01 EB018302.
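    The flavor of such multifidelity estimators can be conveyed with a simple two-fidelity control-variate Monte Carlo sketch, in which a cheap low-fidelity model corrects a small number of expensive high-fidelity evaluations; the analytic models below are placeholders, and this is not the DAKOTA implementation.

```python
import numpy as np

# Two-fidelity control-variate Monte Carlo sketch: few expensive "3D-like" runs,
# many cheap "0D-like" runs; analytic stand-ins play the role of the solvers.
rng = np.random.default_rng(1)

def high_fidelity(x):   # expensive model (placeholder)
    return np.sin(x) + 0.1 * x**2

def low_fidelity(x):    # cheap, correlated surrogate (placeholder)
    return np.sin(x)

x_hf = rng.normal(size=50)       # inputs where both fidelities are evaluated
x_lf = rng.normal(size=5000)     # extra inputs for the low-fidelity model only

y_hf, y_lf_on_hf = high_fidelity(x_hf), low_fidelity(x_hf)
cov = np.cov(y_hf, y_lf_on_hf)
alpha = cov[0, 1] / cov[1, 1]    # control-variate weight from the paired samples
estimate = y_hf.mean() + alpha * (low_fidelity(x_lf).mean() - y_lf_on_hf.mean())
print(f"multifidelity estimate of E[QoI]: {estimate:.4f}")
```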

  3. U1108 performance model

    NASA Technical Reports Server (NTRS)

    Trachta, G.

    1976-01-01

    A model of Univac 1108 work flow has been developed to assist in performance evaluation studies and configuration planning. Workload profiles and system configurations are parameterized for ease of experimental modification. Outputs include capacity estimates and performance evaluation functions. The U1108 system is conceptualized as a service network; classical queueing theory is used to evaluate network dynamics.

  4. Correlation Between Hierarchical Bayesian and Aerosol Optical Depth PM2.5 Data and Respiratory-Cardiovascular Chronic Diseases

    EPA Science Inventory

    Tools to estimate PM2.5 mass have expanded in recent years, and now include: 1) stationary monitor readings, 2) Community Multi-Scale Air Quality (CMAQ) model estimates, 3) Hierarchical Bayesian (HB) estimates from combined stationary monitor readings and CMAQ model output; and, ...

  5. High-resolution dynamic downscaling of CMIP5 output over the Tropical Andes

    NASA Astrophysics Data System (ADS)

    Reichler, Thomas; Andrade, Marcos; Ohara, Noriaki

    2015-04-01

    Our project is targeted towards making robust predictions of future changes in climate over the tropical part of the South American Andes. This goal is challenging, since tropical lowlands, steep mountains, and snow covered subarctic surfaces meet over relatively short distances, leading to distinct climate regimes within the same domain and pronounced spatial gradients in virtually every climate quantity. We use an innovative approach to solve this problem, including several quadruple nested versions of WRF, a systematic validation strategy to find the version of WRF that best fits our study region, spatial resolutions at the kilometer scale, 20-year-long simulation periods, and bias-corrected output from various CMIP5 simulations that also include the multi-model mean of all CMIP5 models. We show that the simulated changes in climate are consistent with the results from the global climate models and also consistent with two different versions of WRF. We also discuss the expected changes in snow and ice, derived from off-line coupling the regional simulations to a carefully calibrated snow and ice model.

  6. A numerical model on thermodynamic analysis of free piston Stirling engines

    NASA Astrophysics Data System (ADS)

    Mou, Jian; Hong, Guotong

    2017-02-01

    In this paper, a new numerical thermodynamic model based on the energy conservation law has been used to analyze the free piston Stirling engine. In the model, all data were taken from a real free piston Stirling engine that has been built in our laboratory. The energy conservation equations have been applied to the expansion space and compression space of the engine. The equations include internal energy, input power, output power, enthalpy and the heat losses. The heat losses include regenerative heat conduction loss, shuttle heat loss, seal leakage loss and the cavity wall heat conduction loss. The numerical results show that the temperatures of the expansion space and the compression space vary with time. The higher the regeneration effectiveness, the higher the efficiency and the larger the output work. It is also found that under different initial pressures, the heat source temperature, phase angle and engine working frequency have different effects on the engine's efficiency and power. As a result, the model is expected to be a useful tool for simulation, design and optimization of Stirling engines.

  7. Ensemble Solar Forecasting Statistical Quantification and Sensitivity Analysis: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cheung, WanYin; Zhang, Jie; Florita, Anthony

    2015-12-08

    Uncertainties associated with solar forecasts present challenges to maintain grid reliability, especially at high solar penetrations. This study aims to quantify the errors associated with the day-ahead solar forecast parameters and the theoretical solar power output for a 51-kW solar power plant in a utility area in the state of Vermont, U.S. Forecasts were generated by three numerical weather prediction (NWP) models, including the Rapid Refresh, the High Resolution Rapid Refresh, and the North American Model, and a machine-learning ensemble model. A photovoltaic (PV) performance model was adopted to calculate theoretical solar power generation using the forecast parameters (e.g., irradiance, cell temperature, and wind speed). Errors of the power outputs were quantified using statistical moments and a suite of metrics, such as the normalized root mean squared error (NRMSE). In addition, the PV model's sensitivity to different forecast parameters was quantified and analyzed. Results showed that the ensemble model yielded forecasts in all parameters with the smallest NRMSE. The NRMSE of solar irradiance forecasts of the ensemble NWP model was reduced by 28.10% compared to the best of the three NWP models. Further, the sensitivity analysis indicated that the errors of the forecasted cell temperature contributed only approximately 0.12% to the NRMSE of the power output, as opposed to 7.44% from the forecasted solar irradiance.
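    For reference, a minimal NRMSE computation is sketched below; normalizing by the 51-kW plant capacity is an assumption, since NRMSE can also be normalized by the mean or range of the observations.

```python
import numpy as np

# Illustrative NRMSE of forecast vs. measured power; capacity normalization assumed.
def nrmse(forecast, observed, capacity_kw=51.0):
    rmse = np.sqrt(np.mean((np.asarray(forecast) - np.asarray(observed)) ** 2))
    return 100.0 * rmse / capacity_kw   # percent of plant capacity

print(nrmse([40.2, 35.0, 12.5], [38.0, 30.1, 15.0]))
```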

  8. Estimating decades-long trends in petroleum field energy return on investment (EROI) with an engineering-based model.

    PubMed

    Tripathi, Vinay S; Brandt, Adam R

    2017-01-01

    This paper estimates changes in the energy return on investment (EROI) for five large petroleum fields over time using the Oil Production Greenhouse Gas Emissions Estimator (OPGEE). The modeled fields include Cantarell (Mexico), Forties (U.K.), Midway-Sunset (U.S.), Prudhoe Bay (U.S.), and Wilmington (U.S.). Data on field properties and production/processing parameters were obtained from a combination of government and technical literature sources. Key areas of uncertainty include details of the oil and gas surface processing schemes. We aim to explore how long-term trends in depletion at major petroleum fields change the effective energetic productivity of petroleum extraction. Four EROI ratios are estimated for each field as follows: The net energy ratio (NER) and external energy ratio (EER) are calculated, each using two measures of energy outputs, (1) oil-only and (2) all energy outputs. In all cases, engineering estimates of inputs are used rather than expenditure-based estimates (including off-site indirect energy use and embodied energy). All fields display significant declines in NER over the modeling period driven by a combination of (1) reduced petroleum production and (2) increased energy expenditures on recovery methods such as the injection of water, steam, or gas. The fields studied had NER reductions ranging from 46% to 88% over the modeling periods (accounting for all energy outputs). The reasons for declines in EROI differ by field. Midway-Sunset experienced a 5-fold increase in steam injected per barrel of oil produced. In contrast, Prudhoe Bay has experienced nearly a 30-fold increase in amount of gas processed and reinjected per unit of oil produced. In contrast, EER estimates are subject to greater variability and uncertainty due to the relatively small magnitude of external energy investments in most cases.
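    One plausible reading of the two ratios, with made-up field-level numbers rather than OPGEE outputs, is sketched below: the NER divides energy outputs by all energy inputs, while the EER divides them by external inputs only.

```python
# Illustrative energy-return ratios with made-up annual field totals (PJ); not OPGEE.
def energy_return_ratios(outputs_pj, onsite_inputs_pj, external_inputs_pj):
    ner = outputs_pj / (onsite_inputs_pj + external_inputs_pj)  # net energy ratio
    eer = outputs_pj / external_inputs_pj                       # external energy ratio
    return ner, eer

ner, eer = energy_return_ratios(outputs_pj=120.0, onsite_inputs_pj=10.0, external_inputs_pj=2.0)
print(f"NER = {ner:.1f}, EER = {eer:.1f}")
```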

  9. Estimating decades-long trends in petroleum field energy return on investment (EROI) with an engineering-based model

    PubMed Central

    Tripathi, Vinay S.

    2017-01-01

    This paper estimates changes in the energy return on investment (EROI) for five large petroleum fields over time using the Oil Production Greenhouse Gas Emissions Estimator (OPGEE). The modeled fields include Cantarell (Mexico), Forties (U.K.), Midway-Sunset (U.S.), Prudhoe Bay (U.S.), and Wilmington (U.S.). Data on field properties and production/processing parameters were obtained from a combination of government and technical literature sources. Key areas of uncertainty include details of the oil and gas surface processing schemes. We aim to explore how long-term trends in depletion at major petroleum fields change the effective energetic productivity of petroleum extraction. Four EROI ratios are estimated for each field as follows: The net energy ratio (NER) and external energy ratio (EER) are calculated, each using two measures of energy outputs, (1) oil-only and (2) all energy outputs. In all cases, engineering estimates of inputs are used rather than expenditure-based estimates (including off-site indirect energy use and embodied energy). All fields display significant declines in NER over the modeling period driven by a combination of (1) reduced petroleum production and (2) increased energy expenditures on recovery methods such as the injection of water, steam, or gas. The fields studied had NER reductions ranging from 46% to 88% over the modeling periods (accounting for all energy outputs). The reasons for declines in EROI differ by field. Midway-Sunset experienced a 5-fold increase in steam injected per barrel of oil produced. In contrast, Prudhoe Bay has experienced nearly a 30-fold increase in amount of gas processed and reinjected per unit of oil produced. In contrast, EER estimates are subject to greater variability and uncertainty due to the relatively small magnitude of external energy investments in most cases. PMID:28178318

  10. Algorithms for output feedback, multiple-model, and decentralized control problems

    NASA Technical Reports Server (NTRS)

    Halyo, N.; Broussard, J. R.

    1984-01-01

    The optimal stochastic output feedback, multiple-model, and decentralized control problems with dynamic compensation are formulated and discussed. Algorithms for each problem are presented, and their relationship to a basic output feedback algorithm is discussed. An aircraft control design problem is posed as a combined decentralized, multiple-model, output feedback problem. A control design is obtained using the combined algorithm. An analysis of the design is presented.

  11. The life-cycle research productivity of mathematicians and scientists.

    PubMed

    Diamond, A M

    1986-07-01

    Declining research productivity with age is implied by economic models of life-cycle human capital investment but is denied by some recent empirical studies. The purpose of the present study is to provide new evidence on whether a scientist's output generally declines with advancing age. A longitudinal data set has been compiled for scientists and mathematicians at six major departments, including data on age, salaries, annual citations (stock of human capital), citations to current output (flow of human capital), and quantity of current output measured both in number of articles and in number of pages. Analysis of the data indicates that salaries peak from the early to mid-60s, whereas annual citations appear to peak from age 39 to 89 for different departments with a mean age of 59 for the 6 departments. The quantity and quality of current research output appear to decline continuously with age.

  12. Assessment of the Value, Impact, and Validity of the Jobs and Economic Development Impacts (JEDI) Suite of Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Billman, L.; Keyser, D.

    The Jobs and Economic Development Impacts (JEDI) models, developed by the National Renewable Energy Laboratory (NREL) for the U.S. Department of Energy (DOE) Office of Energy Efficiency and Renewable Energy (EERE), use input-output methodology to estimate gross (not net) jobs and economic impacts of building and operating selected types of renewable electricity generation and fuel plants. This analysis provides the DOE with an assessment of the value, impact, and validity of the JEDI suite of models. While the models produce estimates of jobs, earnings, and economic output, this analysis focuses only on jobs estimates. This validation report includes an introduction to the JEDI models, an analysis of the value and impact of the JEDI models, and an analysis of the validity of job estimates generated by the JEDI models through comparison to other modeled estimates and comparison to empirical, observed jobs data as reported or estimated for a commercial project, a state, or a region.

  13. Emergent central pattern generator behavior in chemical coupled two-compartment models with time delay

    NASA Astrophysics Data System (ADS)

    Li, Shanshan; Zhang, Guoshan; Wang, Jiang; Chen, Yingyuan; Deng, Bin

    2018-02-01

    This paper proposes that a modified two-compartment Pinsky-Rinzel (PR) neural model can be used to develop a simple form of central pattern generator (CPG). The CPG, called a 'half-central oscillator', is constructed from two inhibitory chemically coupled PR neurons with time delay. Some key properties of the PR neural model related to CPGs are studied and shown to meet the requirements of a CPG. Using this simple CPG network, we first study the relationship between the rhythmical output and key factors, including ambient noise, sensory feedback signals, the morphological character of a single neuron, and the coupling delay time. We demonstrate that noise of appropriate intensity can enhance synchronization between the two coupled neurons. Different output rhythms of the CPG network can be entrained by sensory feedback signals. We also show that the morphology of a single neuron has a strong effect on the output rhythm. The phase synchronization indices decrease as the difference in the morphology parameter increases. By adjusting the coupling delay time, we can obtain complete phase synchronization and antiphase states of the CPG. These simulation results show the feasibility of the PR neural model as a valid CPG, as well as the emergent behaviors of this particular CPG.

  14. The user's guide to STEMS (Stand and Tree Evaluation and Modeling System).

    Treesearch

    David M. Belcher

    1981-01-01

    Presents the structure of STEMS, a computer program for projecting growth of individual trees within the Lake States Region, and discusses its input, processing, major subsystems, and output. Includes an example projection.

  15. An on-line equivalent system identification scheme for adaptive control. Ph.D. Thesis - Stanford Univ.

    NASA Technical Reports Server (NTRS)

    Sliwa, S. M.

    1984-01-01

    A prime obstacle to the widespread use of adaptive control is the degradation of performance and possible instability resulting from the presence of unmodeled dynamics. The approach taken is to explicitly include the unstructured model uncertainty in the output error identification algorithm. The order of the compensator is successively increased by including identified modes. During this model building stage, heuristic rules are used to test for convergence prior to designing compensators. Additionally, the recursive identification algorithm was extended to multi-input, multi-output systems. Enhancements were also made to reduce the computational burden of an algorithm for obtaining minimal state space realizations from the inexact, multivariate transfer functions which result from the identification process. A number of potential adaptive control applications for this approach are illustrated using computer simulations. Results indicated that when speed of adaptation and plant stability are not critical, the proposed schemes converge to enhance system performance.

  16. Gaussian functional regression for output prediction: Model assimilation and experimental design

    NASA Astrophysics Data System (ADS)

    Nguyen, N. C.; Peraire, J.

    2016-03-01

    In this paper, we introduce a Gaussian functional regression (GFR) technique that integrates multi-fidelity models with model reduction to efficiently predict the input-output relationship of a high-fidelity model. The GFR method combines the high-fidelity model with a low-fidelity model to provide an estimate of the output of the high-fidelity model in the form of a posterior distribution that can characterize uncertainty in the prediction. A reduced basis approximation is constructed upon the low-fidelity model and incorporated into the GFR method to yield an inexpensive posterior distribution of the output estimate. As this posterior distribution depends crucially on a set of training inputs at which the high-fidelity models are simulated, we develop a greedy sampling algorithm to select the training inputs. Our approach results in an output prediction model that inherits the fidelity of the high-fidelity model and has the computational complexity of the reduced basis approximation. Numerical results are presented to demonstrate the proposed approach.
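    The multi-fidelity correction idea can be illustrated with a simple discrepancy-based Gaussian-process sketch, in which a GP is fitted to the difference between a high- and a low-fidelity model so that the prediction inherits the cheap model's trend plus a learned correction; the RBF kernel and analytic stand-in models are assumptions, and this is not the paper's GFR formulation or its reduced basis construction.

```python
import numpy as np

# GP correction of a low-fidelity model using a few high-fidelity evaluations.
def rbf(a, b, ell=0.3, sigma=1.0):
    return sigma**2 * np.exp(-0.5 * (a[:, None] - b[None, :])**2 / ell**2)

def high_fidelity(x):  return np.sin(2 * np.pi * x) + 0.3 * x   # expensive (placeholder)
def low_fidelity(x):   return np.sin(2 * np.pi * x)             # cheap (placeholder)

x_train = np.linspace(0.0, 1.0, 8)                          # a few expensive evaluations
d_train = high_fidelity(x_train) - low_fidelity(x_train)    # observed discrepancy

x_test = np.linspace(0.0, 1.0, 101)
K = rbf(x_train, x_train) + 1e-8 * np.eye(len(x_train))     # jitter for stability
Ks = rbf(x_test, x_train)
d_mean = Ks @ np.linalg.solve(K, d_train)                   # GP posterior mean of discrepancy
prediction = low_fidelity(x_test) + d_mean                  # corrected output estimate
print(float(np.max(np.abs(prediction - high_fidelity(x_test)))))
```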

  17. Validation of a new modal performance measure for flexible controllers design

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Simo, J.B.; Tahan, S.A.; Kamwa, I.

    1996-05-01

    A new modal performance measure for power system stabilizer (PSS) optimization is proposed in this paper. The new method is based on modifying the square envelopes of oscillating modes, in order to take into account their damping ratios while minimizing the performance index. This criterion is applied to the optimal design of flexible controllers on a multi-input-multi-output (MIMO) reduced-order model of a prototype power system. The multivariable model includes four generators, each having one input and one output. Linear time-response simulation and transient stability analysis with a nonlinear package confirm the superiority of the proposed criterion and illustrate its effectiveness in decentralized control.

  18. Rainfall Data Simulation

    Treesearch

    T.L. Rogerson

    1980-01-01

    A simple simulation model to predict rainfall for individual storms in central Arkansas is described. Output includes frequency distribution tables for days between storms and for storm size classes; a storm summary by day number (January 1 = 1 and December 31 = 365) and rainfall amount; and an annual storm summary that includes monthly values for rainfall and number...

  19. Hydrologic Implications of Dynamical and Statistical Approaches to Downscaling Climate Model Outputs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wood, Andrew W; Leung, Lai R; Sridhar, V

    Six approaches for downscaling climate model outputs for use in hydrologic simulation were evaluated, with particular emphasis on each method's ability to produce precipitation and other variables used to drive a macroscale hydrology model applied at much higher spatial resolution than the climate model. Comparisons were made on the basis of a twenty-year retrospective (1975–1995) climate simulation produced by the NCAR-DOE Parallel Climate Model (PCM), and the implications of the comparison for a future (2040–2060) PCM climate scenario were also explored. The six approaches were made up of three relatively simple statistical downscaling methods – linear interpolation (LI), spatial disaggregation (SD), and bias-correction and spatial disaggregation (BCSD) – each applied to both PCM output directly (at T42 spatial resolution), and after dynamical downscaling via a Regional Climate Model (RCM – at ½-degree spatial resolution), for downscaling the climate model outputs to the 1/8-degree spatial resolution of the hydrological model. For the retrospective climate simulation, results were compared to an observed gridded climatology of temperature and precipitation, and gridded hydrologic variables resulting from forcing the hydrologic model with observations. The most significant findings are that the BCSD method was successful in reproducing the main features of the observed hydrometeorology from the retrospective climate simulation, when applied to both PCM and RCM outputs. Linear interpolation produced better results using RCM output than PCM output, but both methods (PCM-LI and RCM-LI) lead to unacceptably biased hydrologic simulations. Spatial disaggregation of the PCM output produced results similar to those achieved with the RCM interpolated output; nonetheless, neither PCM nor RCM output was useful for hydrologic simulation purposes without a bias-correction step. For the future climate scenario, only the BCSD method (using PCM or RCM) was able to produce hydrologically plausible results. With the BCSD method, the RCM-derived hydrology was more sensitive to climate change than the PCM-derived hydrology.
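    The bias-correction step at the heart of BCSD can be illustrated with a minimal empirical quantile-mapping sketch on synthetic precipitation data; the actual BCSD procedure operates on monthly quantiles per grid cell and is followed by spatial disaggregation, which is not shown.

```python
import numpy as np

# Empirical quantile mapping: map each simulated value to the observed value at the
# same quantile of the retrospective period. Data below are synthetic.
def quantile_map(simulated, sim_reference, obs_reference):
    # Percentile of each simulated value within the model's own climatology...
    ranks = np.searchsorted(np.sort(sim_reference), simulated) / len(sim_reference)
    ranks = np.clip(ranks, 0.0, 1.0)
    # ...mapped onto the observed climatology at the same percentile.
    return np.quantile(obs_reference, ranks)

rng = np.random.default_rng(7)
obs_ref = rng.gamma(shape=2.0, scale=3.0, size=7300)   # observed daily precip (mm)
sim_ref = rng.gamma(shape=2.0, scale=2.0, size=7300)   # biased (too dry) model precip
future_sim = rng.gamma(shape=2.0, scale=2.2, size=365)
print(quantile_map(future_sim, sim_ref, obs_ref)[:5])
```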

  20. Developing Emergency Room Key Performance Indicators: What to Measure and Why Should We Measure It?

    PubMed

    Khalifa, Mohamed; Zabani, Ibrahim

    2016-01-01

    Emergency Room (ER) performance has been a timely topic for both healthcare practitioners and researchers. King Faisal Specialist Hospital and Research Center, Saudi Arabia, worked on developing a comprehensive set of KPIs to monitor, evaluate and improve the performance of the ER. A combined approach using quantitative and qualitative methods was used to collect and analyze the data. 34 KPIs were developed and sorted into the three components of the ER patient flow model: input, throughput and output. Input indicators included number and acuity of ER patients, patients leaving without being seen and revisit rates. Throughput indicators included number of active ER beds, ratio of ER patients to ER staff and the length of stay including waiting time and treatment time. The turnaround time of supportive services, such as lab, radiology and medications, was also included. Output indicators included boarding time and available hospital beds, ICU beds and patients waiting for admission.

  1. The NASA Marshall engineering thermosphere model

    NASA Technical Reports Server (NTRS)

    Hickey, Michael Philip

    1988-01-01

    Described is the NASA Marshall Engineering Thermosphere (MET) Model, which is a modified version of the MSFC/J70 Orbital Atmospheric Density Model as currently used in the J70MM program at MSFC. The modifications to the MSFC/J70 model required for the MET model are described, graphical and numerical examples of the models are included, as is a listing of the MET model computer program. Major differences between the numerical output from the MET model and the MSFC/J70 model are discussed.

  2. GMLC Extreme Event Modeling -- Slow-Dynamics Models for Renewable Energy Resources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Korkali, M.; Min, L.

    The need for slow-dynamics models of renewable resources in cascade modeling essentially arises from the challenges associated with the increased use of solar and wind electric power. Indeed, the main challenge is that the power produced by wind and sunlight is not consistent; thus, renewable energy resources tend to have variable output power on many different timescales, including the timescales over which a cascade unfolds.

  3. Assessment of reservoir system variable forecasts

    NASA Astrophysics Data System (ADS)

    Kistenmacher, Martin; Georgakakos, Aris P.

    2015-05-01

    Forecast ensembles are a convenient means to model water resources uncertainties and to inform planning and management processes. For multipurpose reservoir systems, forecast types include (i) forecasts of upcoming inflows and (ii) forecasts of system variables and outputs such as reservoir levels, releases, flood damage risks, hydropower production, water supply withdrawals, water quality conditions, navigation opportunities, and environmental flows, among others. Forecasts of system variables and outputs are conditional on forecasted inflows as well as on specific management policies and can provide useful information for decision-making processes. Unlike inflow forecasts (in ensemble or other forms), which have been the subject of many previous studies, reservoir system variable and output forecasts are not formally assessed in water resources management theory or practice. This article addresses this gap and develops methods to rectify potential reservoir system forecast inconsistencies and improve the quality of management-relevant information provided to stakeholders and managers. The overarching conclusion is that system variable and output forecast consistency is critical for robust reservoir management and needs to be routinely assessed for any management model used to inform planning and management processes. The above are demonstrated through an application from the Sacramento-American-San Joaquin reservoir system in northern California.

  4. Dry-bean production under climate change conditions in the north of Argentina: Risk assessment and economic implications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Feijoo, M.; Mestre, F.; Castagnaro, A.

    This study evaluates the potential effect of climate change on dry-bean production in Argentina, combining climate models, a crop productivity model, and a yield response model that estimates the effect of climate variables on crop yields. The study was carried out in the northern agricultural regions of Jujuy, Salta, Santiago del Estero and Tucuman, which include the largest areas of Argentina where dry beans are grown as a high-input crop. The paper combines the output from a crop model with different techniques of analysis. The scenarios used in this study were generated from the output of two General Circulation Models (GCMs): the Goddard Institute for Space Studies model (GISS) and the Canadian Climate Change Model (CCCM). The study also includes a preliminary evaluation of the potential changes in monetary returns taking into account the possible variability of yields and prices, using mean-Gini stochastic dominance (MGSD). The results suggest that large climate change may have a negative impact on the Argentine agriculture sector, due to the high relevance of this product in the export sector. The magnitude of the negative effect depends on the dry bean variety and on the General Circulation Model scenario considered for doubled levels of atmospheric carbon dioxide.

  5. Top-down methodology for human factors research

    NASA Technical Reports Server (NTRS)

    Sibert, J.

    1983-01-01

    User-computer interaction as a conversation is discussed. The design of user interfaces which depends on viewing communications between a user and the computer as a conversation is presented. This conversation includes inputs to the computer (outputs from the user), outputs from the computer (inputs to the user), and the sequencing in both time and space of those outputs and inputs. The conversation is viewed from the user's side of the conversation. Two languages are modeled: the one with which the user communicates with the computer and the language where communication flows from the computer to the user. Both languages exist on three levels: the semantic, syntactic and lexical. It is suggested that natural languages can also be considered in these terms.

  6. Robust H∞ output-feedback control for path following of autonomous ground vehicles

    NASA Astrophysics Data System (ADS)

    Hu, Chuan; Jing, Hui; Wang, Rongrong; Yan, Fengjun; Chadli, Mohammed

    2016-03-01

    This paper presents a robust H∞ output-feedback control strategy for the path following of autonomous ground vehicles (AGVs). Considering that the vehicle lateral velocity is usually hard to measure with low-cost sensors, a robust H∞ static output-feedback controller based on the mixed genetic algorithm (GA)/linear matrix inequality (LMI) approach is proposed to realize path following without information on the lateral velocity. The proposed controller is robust to parametric uncertainties and external disturbances, with the parameters including the tire cornering stiffness, vehicle longitudinal velocity, yaw rate and road curvature. Simulation results based on the CarSim-Simulink joint platform using a high-fidelity full-car model have verified the effectiveness of the proposed control approach.

  7. SU-F-T-143: Implementation of a Correction-Based Output Model for a Compact Passively Scattered Proton Therapy System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ferguson, S; Ahmad, S; Chen, Y

    2016-06-15

    Purpose: To commission and investigate the accuracy of an output (cGy/MU) prediction model for a compact passively scattered proton therapy system. Methods: A previously published output prediction model (Sahoo et al, Med Phys, 35, 5088–5097, 2008) was commissioned for our Mevion S250 proton therapy system. This model is a correction-based model that multiplies correction factors (d/MU_wnc = ROF × SOBPF × RSF × SOBPOCF × OCR × FSF × ISF). These factors accounted for changes in output due to options (12 large, 5 deep, and 7 small), modulation width M, range R, off-center position, off-axis position, field size, and off-isocenter position. In this study, the model was modified to ROF × SOBPF × RSF × OCR × FSF × ISF-OCF × GACF by merging SOBPOCF and ISF for simplicity and introducing a gantry angle correction factor (GACF). To commission the model, outputs at over 1,000 data points were taken at the time of the system commissioning. The output was predicted by interpolation (1D for SOBPF, FSF, and GACF; 2D for RSF and OCR) with an inverse-square calculation (ISF-OCF). The outputs of 273 combinations of R and M covering a total of 24 options were measured to test the model. To minimize fluence perturbation, scattered dose from the range compensator and patient was not considered. The percent differences between the predicted (P) and measured (M) outputs were calculated to test the prediction accuracy ([P-M]/M×100%). Results: A GACF was required because of up to 3.5% output variation with gantry angle. A 2D interpolation was required for OCR because the dose distribution was not radially symmetric, especially for the deep options. The average percent difference was −0.03±0.98% (mean±SD), and the differences for all measurements fell within ±3%. Conclusion: It is concluded that the model can be used clinically for the compact passively scattered proton therapy system. However, great care should be taken when the field size is less than 5×5 cm², where a direct output measurement is required due to substantial output changes caused by irregular block shapes.
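    A hedged sketch of the correction-factor idea is given below: the predicted output is a product of factors interpolated from commissioning tables, plus an inverse-square term. The table values and factor subset are placeholders, not the commissioned Mevion S250 data.

```python
import numpy as np

# Correction-factor output model sketch: output (cGy/MU) as a product of interpolated
# factors and an inverse-square term. Lookup tables below are placeholders only.
ROF = 1.02                                              # reference output factor (placeholder)
sobp_width = np.array([2.0, 5.0, 10.0, 15.0])           # modulation width M (cm)
sobp_factor = np.array([1.10, 1.04, 1.00, 0.97])        # SOBPF table (placeholder)
gantry_angle = np.array([0.0, 90.0, 180.0, 270.0])
gantry_factor = np.array([1.000, 1.015, 1.035, 1.015])  # GACF table (placeholder)

def predicted_output(M, angle, ssd_cm, ref_ssd_cm=190.0):
    sobpf = np.interp(M, sobp_width, sobp_factor)
    gacf = np.interp(angle % 360.0, gantry_angle, gantry_factor)
    isf = (ref_ssd_cm / ssd_cm) ** 2                    # inverse-square factor
    return ROF * sobpf * gacf * isf                     # other factors (RSF, OCR, FSF) omitted

print(f"{predicted_output(M=8.0, angle=90.0, ssd_cm=195.0):.4f} cGy/MU")
```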

  8. Making Sense of Complexity with FRE, a Scientific Workflow System for Climate Modeling (Invited)

    NASA Astrophysics Data System (ADS)

    Langenhorst, A. R.; Balaji, V.; Yakovlev, A.

    2010-12-01

    A workflow is a description of a sequence of activities that is both precise and comprehensive. Capturing the workflow of climate experiments provides a record which can be queried or compared, and allows reproducibility of the experiments - sometimes even to the bit level of the model output. This reproducibility helps to verify the integrity of the output data, and enables easy perturbation experiments. GFDL's Flexible Modeling System Runtime Environment (FRE) is a production-level software project which defines and implements building blocks of the workflow as command line tools. The scientific, numerical and technical input needed to complete the workflow of an experiment is recorded in an experiment description file in XML format. Several key features add convenience and automation to the FRE workflow: ● Experiment inheritance makes it possible to define a new experiment with only a reference to the parent experiment and the parameters to override. ● Testing is a basic element of the FRE workflow: experiments define short test runs which are verified before the main experiment is run, and a set of standard experiments are verified with new code releases. ● FRE is flexible enough to support short runs with mere megabytes of data, to high-resolution experiments that run on thousands of processors for months, producing terabytes of output data. Experiments run in segments of model time; after each segment, the state is saved and the model can be checkpointed at that level. Segment length is defined by the user, but the number of segments per system job is calculated to fit optimally in the batch scheduler requirements. FRE provides job control across multiple segments, and tools to monitor and alter the state of long-running experiments. ● Experiments are entered into a Curator Database, which stores query-able metadata about the experiment and the experiment's output. ● FRE includes a set of standardized post-processing functions as well as the ability to incorporate user-level functions. FRE post-processing can take us all the way to the preparing of graphical output for a scientific audience, and publication of data on a public portal. ● Recent FRE development includes incorporating a distributed workflow to support remote computing.

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kang, D. I.; Han, S. H.

    A PSA analyst has traditionally determined fire-induced component failure modes manually and modeled them into the PSA logic. These can be difficult and time-consuming tasks, as they require much information and many events to be modeled. KAERI has been developing the IPRO-ZONE (interface program for constructing zone effect table) to facilitate fire PSA work for identifying and modeling fire-induced component failure modes, and to construct a one-top fire event PSA model. With the output of the IPRO-ZONE, the AIMS-PSA, and the internal-event one-top PSA model, a one-top fire event PSA model is automatically constructed. The outputs of the IPRO-ZONE include information on fire zones/fire scenarios, fire propagation areas, equipment failure modes affected by a fire, internal PSA basic events corresponding to fire-induced equipment failure modes, and fire events to be modeled. This paper introduces the IPRO-ZONE and its application to the fire PSA of Ulchin Unit 3 and SMART (System-integrated Modular Advanced Reactor). (authors)

  10. Does player unavailability affect football teams' match physical outputs? A two-season study of the UEFA champions league.

    PubMed

    Windt, Johann; Ekstrand, Jan; Khan, Karim M; McCall, Alan; Zumbo, Bruno D

    2018-05-01

    Player unavailability negatively affects team performance in elite football. However, whether player unavailability and its concomitant performance decrement is mediated by any changes in teams' match physical outputs is unknown. We examined whether the number of players injured (i.e. unavailable for match selection) was associated with any changes in teams' physical outputs. Prospective cohort study. Between-team variation was calculated by correlating average team availability with average physical outputs. Within-team variation was quantified using linear mixed modelling, using physical outputs - total distance, sprint count (efforts over 20km/h), and percent of distance covered at high speeds (>14km/h) - as outcome variables, and player unavailability as the independent variable of interest. To control for other factors that may influence match physical outputs, stage (group stage/knockout), venue (home/away), score differential, ball possession (%), team ranking (UEFA Club Coefficient), and average team age were all included as covariates. Teams' average player unavailability was positively associated with the average number of sprints they performed in matches across two seasons. Multilevel models similarly demonstrated that having 4 unavailable players was associated with 20.8 more sprints during matches in 2015/2016, and with an estimated 0.60-0.77% increase in the proportion of total distance run above 14km/h in both seasons. Player unavailability had a possibly positive and likely positive association with total match distances in the two respective seasons. Having more players injured and unavailable for match selection was associated with an increase in teams' match physical outputs. Copyright © 2017 Sports Medicine Australia. Published by Elsevier Ltd. All rights reserved.
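    A simplified version of the modelling approach, with synthetic data and only a subset of the covariates, might be fitted as a random-intercept linear mixed model as sketched below; the variable names and effect sizes are hypothetical.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Random-intercept linear mixed model with team as the grouping factor; synthetic data,
# only two covariates (the study also controls for stage, venue, score, ranking, age).
rng = np.random.default_rng(3)
n_teams, n_matches = 8, 12
teams = np.repeat([f"team_{i}" for i in range(n_teams)], n_matches)
unavailable = rng.integers(0, 7, size=teams.size)
possession = rng.normal(50.0, 8.0, size=teams.size)
sprints = 420 + 5 * unavailable + 0.8 * possession + rng.normal(0, 25, size=teams.size)

df = pd.DataFrame({"team": teams, "unavailable": unavailable,
                   "possession": possession, "sprints": sprints})
fit = smf.mixedlm("sprints ~ unavailable + possession", df, groups=df["team"]).fit()
print(fit.params["unavailable"])   # estimated extra sprints per unavailable player
```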

  11. Java-based Graphical User Interface for MAVERIC-II

    NASA Technical Reports Server (NTRS)

    Seo, Suk Jai

    2005-01-01

    A computer program entitled "Marshall Aerospace Vehicle Representation in C II (MAVERIC-II)" is a vehicle flight simulation program written primarily in the C programming language. It was written by James W. McCarter at NASA/Marshall Space Flight Center. The goal of the MAVERIC-II development effort is to provide a simulation tool that facilitates the rapid development of high-fidelity flight simulations for launch, orbital, and reentry vehicles of any user-defined configuration for all phases of flight. MAVERIC-II has been found invaluable in performing flight simulations for various Space Transportation Systems. The flexibility provided by MAVERIC-II has allowed several different launch vehicles, including the Saturn V, a Space Launch Initiative Two-Stage-to-Orbit concept and a Shuttle-derived launch vehicle, to be simulated during ascent and portions of on-orbit flight in an extremely efficient manner. It was found that MAVERIC-II provided the high-fidelity vehicle and flight environment models as well as the program modularity to allow efficient integration, modification and testing of advanced guidance and control algorithms. In addition to serving as an analysis tool for technology development, many researchers have found MAVERIC-II to be an efficient, powerful analysis tool that evaluates guidance, navigation, and control designs, vehicle robustness, and requirements. MAVERIC-II is currently designed to execute in a UNIX environment. The input to the program is composed of three segments: 1) the vehicle models such as propulsion, aerodynamics, and guidance, navigation, and control; 2) the environment models such as atmosphere and gravity; and 3) a simulation framework which is responsible for executing the vehicle and environment models, propagating the vehicle's states forward in time, and handling user input/output. MAVERIC users prepare data files for the above models and run the simulation program. They can see the output on screen and/or store it in files and examine the output data later. Users can also view the output stored in output files by calling a plotting program such as gnuplot. A typical scenario of the use of MAVERIC consists of three steps: editing existing input data files, running MAVERIC, and plotting output results.

  12. Low-thrust solar electric propulsion navigation simulation program

    NASA Technical Reports Server (NTRS)

    Hagar, H. J.; Eller, T. J.

    1973-01-01

    An interplanetary low-thrust, solar electric propulsion mission simulation program suitable for navigation studies is presented. The mathematical models for trajectory simulation, error compensation, and tracking motion are described. The languages, input-output procedures, and subroutines are included.

  13. Model reference adaptive control of flexible robots in the presence of sudden load changes

    NASA Technical Reports Server (NTRS)

    Steinvorth, Rodrigo; Kaufman, Howard; Neat, Gregory

    1991-01-01

    Direct command generator tracker based model reference adaptive control (MRAC) algorithms are applied to the dynamics of a flexible-joint arm in the presence of sudden load changes. Because of the need to satisfy a positive real condition, such MRAC procedures are designed so that a feedforward augmented output follows the reference model output, resulting in an ultimately bounded rather than zero output error. Modifications are therefore suggested and tested that: (1) incorporate feedforward into the reference model's output as well as the plant's output, and (2) incorporate a derivative term into only the process feedforward loop. The results of these simulations give a response with zero steady-state model-following error, and thus encourage further use of MRAC for more complex flexible robotic systems.

  14. Pandemic recovery analysis using the dynamic inoperability input-output model.

    PubMed

    Santos, Joost R; Orsi, Mark J; Bond, Erik J

    2009-12-01

    Economists have long conceptualized and modeled the inherent interdependent relationships among different sectors of the economy. This concept paved the way for input-output modeling, a methodology that accounts for sector interdependencies governing the magnitude and extent of ripple effects due to changes in the economic structure of a region or nation. Recent extensions to input-output modeling have enhanced the model's capabilities to account for the impact of an economic perturbation; two such examples are the inoperability input-output model(1,2) and the dynamic inoperability input-output model (DIIM).(3) These models introduced sector inoperability, or the inability to satisfy as-planned production levels, into input-output modeling. While these models provide insights for understanding the impacts of inoperability, there are several aspects of the current formulation that do not account for complexities associated with certain disasters, such as a pandemic. This article proposes further enhancements to the DIIM to account for economic productivity losses resulting primarily from workforce disruptions. A pandemic is a unique disaster because the majority of its direct impacts are workforce related. The article develops a modeling framework to account for workforce inoperability and recovery factors. The proposed workforce-explicit enhancements to the DIIM are demonstrated in a case study to simulate a pandemic scenario in the Commonwealth of Virginia.
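    The DIIM recursion described above is usually written q(t+1) = q(t) + K[A*q(t) + c*(t) - q(t)], with q the sector inoperability vector, A* the interdependency matrix, c* the demand-side perturbation, and K a resilience matrix. The sketch below iterates that recursion for a hypothetical three-sector workforce disruption; the matrices and decay rate are illustrative, not taken from the article.

```python
import numpy as np

def diim_recovery(a_star, k, c_star, q0, steps):
    """Iterate the dynamic inoperability recursion
    q(t+1) = q(t) + K (A* q(t) + c*(t) - q(t))."""
    q = np.array(q0, dtype=float)
    history = [q.copy()]
    for t in range(steps):
        q = q + k @ (a_star @ q + c_star(t) - q)
        history.append(q.copy())
    return np.array(history)

# Hypothetical 3-sector example with a workforce disruption that decays over time.
a_star = np.array([[0.0, 0.2, 0.1],
                   [0.1, 0.0, 0.3],
                   [0.2, 0.1, 0.0]])
k = np.diag([0.3, 0.5, 0.4])                       # sector resilience coefficients
c_star = lambda t: np.array([0.10, 0.05, 0.02]) * np.exp(-0.1 * t)
q_path = diim_recovery(a_star, k, c_star, q0=[0.0, 0.0, 0.0], steps=60)
print(q_path[-1])                                   # inoperability after 60 time steps
```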

  15. Automatic detection of echolocation clicks based on a Gabor model of their waveform.

    PubMed

    Madhusudhana, Shyam; Gavrilov, Alexander; Erbe, Christine

    2015-06-01

    Prior research has shown that echolocation clicks of several species of terrestrial and marine fauna can be modelled as Gabor-like functions. Here, a system is proposed for the automatic detection of a variety of such signals. By means of mathematical formulation, it is shown that the output of the Teager-Kaiser Energy Operator (TKEO) applied to Gabor-like signals can be approximated by a Gaussian function. Based on the inferences, a detection algorithm involving the post-processing of the TKEO outputs is presented. The ratio of the outputs of two moving-average filters, a Gaussian and a rectangular filter, is shown to be an effective detection parameter. Detector performance is assessed using synthetic and real recordings (taken from the MobySound database). The detection method is shown to work readily with a variety of echolocation clicks and in various recording scenarios. The system exhibits low computational complexity and operates several times faster than real-time. Performance comparisons are made to other publicly available detectors including PAMGuard.
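    A rough sketch of the detection statistic described above: the TKEO is applied to the waveform and the ratio of a Gaussian-weighted to a rectangular moving average of its output is used as the detection parameter. The window length and Gaussian width below are illustrative choices, not the values used by the authors.

```python
import numpy as np
from scipy.signal import windows, fftconvolve

def tkeo(x):
    """Teager-Kaiser Energy Operator: psi[n] = x[n]^2 - x[n-1]*x[n+1]."""
    x = np.asarray(x, dtype=float)
    psi = np.zeros_like(x)
    psi[1:-1] = x[1:-1] ** 2 - x[:-2] * x[2:]
    return psi

def click_detection_statistic(x, win_len=101, sigma=15.0):
    """Ratio of a Gaussian-weighted to a rectangular moving average of the
    TKEO output; it peaks where the energy burst is Gaussian (Gabor-like) in
    shape.  Window length and sigma are assumptions for illustration."""
    psi = np.abs(tkeo(x))
    g = windows.gaussian(win_len, std=sigma)
    g /= g.sum()
    r = np.ones(win_len) / win_len
    num = fftconvolve(psi, g, mode="same")
    den = fftconvolve(psi, r, mode="same") + 1e-12   # avoid divide-by-zero
    return num / den
```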

  16. A H-infinity Fault Detection and Diagnosis Scheme for Discrete Nonlinear System Using Output Probability Density Estimation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang Yumin; Lum, Kai-Yew; Wang Qingguo

    In this paper, an H-infinity fault detection and diagnosis (FDD) scheme for a class of discrete nonlinear system faults using output probability density estimation is presented. Unlike classical FDD problems, the measured output of the system is viewed as a stochastic process and its square-root probability density function (PDF) is modeled with B-spline functions, which leads to a deterministic space-time dynamic model including nonlinearities and uncertainties. A weighting mean value is given as an integral function of the square-root PDF along the space direction, which yields a function of time only and can be used to construct the residual signal. Thus, the classical nonlinear filter approach can be used to detect and diagnose faults in the system. A feasible detection criterion is obtained first, and a new H-infinity adaptive fault diagnosis algorithm is further investigated to estimate the fault. A simulation example is given to demonstrate the effectiveness of the proposed approaches.

  17. A H-infinity Fault Detection and Diagnosis Scheme for Discrete Nonlinear System Using Output Probability Density Estimation

    NASA Astrophysics Data System (ADS)

    Zhang, Yumin; Wang, Qing-Guo; Lum, Kai-Yew

    2009-03-01

    In this paper, an H-infinity fault detection and diagnosis (FDD) scheme for a class of discrete nonlinear system faults using output probability density estimation is presented. Unlike classical FDD problems, the measured output of the system is viewed as a stochastic process and its square-root probability density function (PDF) is modeled with B-spline functions, which leads to a deterministic space-time dynamic model including nonlinearities and uncertainties. A weighting mean value is given as an integral function of the square-root PDF along the space direction, which yields a function of time only and can be used to construct the residual signal. Thus, the classical nonlinear filter approach can be used to detect and diagnose faults in the system. A feasible detection criterion is obtained first, and a new H-infinity adaptive fault diagnosis algorithm is further investigated to estimate the fault. A simulation example is given to demonstrate the effectiveness of the proposed approaches.

  18. Technical note: Simultaneous fully dynamic characterization of multiple input–output relationships in climate models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kravitz, Ben; MacMartin, Douglas G.; Rasch, Philip J.

    We introduce system identification techniques to climate science wherein multiple dynamic input–output relationships can be simultaneously characterized in a single simulation. This method, involving multiple small perturbations (in space and time) of an input field while monitoring output fields to quantify responses, allows for identification of different timescales of climate response to forcing without substantially pushing the climate far away from a steady state. We use this technique to determine the steady-state responses of low cloud fraction and latent heat flux to heating perturbations over 22 regions spanning Earth's oceans. We show that the response characteristics are similar to those of step-change simulations, but in this new method the responses for 22 regions can be characterized simultaneously. Moreover, we can estimate the timescale over which the steady-state response emerges. The proposed methodology could be useful for a wide variety of purposes in climate science, including characterization of teleconnections and uncertainty quantification to identify the effects of climate model tuning parameters.

  19. Technical note: Simultaneous fully dynamic characterization of multiple input–output relationships in climate models

    DOE PAGES

    Kravitz, Ben; MacMartin, Douglas G.; Rasch, Philip J.; ...

    2017-02-17

    We introduce system identification techniques to climate science wherein multiple dynamic input–output relationships can be simultaneously characterized in a single simulation. This method, involving multiple small perturbations (in space and time) of an input field while monitoring output fields to quantify responses, allows for identification of different timescales of climate response to forcing without substantially pushing the climate far away from a steady state. We use this technique to determine the steady-state responses of low cloud fraction and latent heat flux to heating perturbations over 22 regions spanning Earth's oceans. We show that the response characteristics are similar to those of step-change simulations, but in this new method the responses for 22 regions can be characterized simultaneously. Moreover, we can estimate the timescale over which the steady-state response emerges. The proposed methodology could be useful for a wide variety of purposes in climate science, including characterization of teleconnections and uncertainty quantification to identify the effects of climate model tuning parameters.

  20. Method of operating a thermoelectric generator

    DOEpatents

    Reynolds, Michael G; Cowgill, Joshua D

    2013-11-05

    A method for operating a thermoelectric generator supplying a variable-load component includes commanding the variable-load component to operate at a first output and determining a first load current and a first load voltage to the variable-load component while operating at the commanded first output. The method also includes commanding the variable-load component to operate at a second output and determining a second load current and a second load voltage to the variable-load component while operating at the commanded second output. The method includes calculating a maximum power output of the thermoelectric generator from the determined first load current and voltage and the determined second load current and voltage, and commanding the variable-load component to operate at a third output. The commanded third output is configured to draw the calculated maximum power output from the thermoelectric generator.
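    The two-operating-point procedure lends itself to the textbook linear (Thevenin) source derivation sketched below; the patent's actual implementation may differ, and the numbers in the example are made up.

```python
def teg_max_power(i1, v1, i2, v2):
    """Estimate the maximum power of a thermoelectric generator from two
    load operating points, assuming a linear (Thevenin) source model
    V = V_oc - R_int * I.  This is the standard derivation; the patented
    method may implement the calculation differently."""
    r_int = (v1 - v2) / (i2 - i1)        # internal resistance from the two points
    v_oc = v1 + r_int * i1               # open-circuit voltage
    p_max = v_oc ** 2 / (4.0 * r_int)    # matched-load maximum power
    i_mp = v_oc / (2.0 * r_int)          # current to command for maximum power
    return p_max, i_mp

# Example with made-up measurements: 2 A @ 3.6 V and 4 A @ 3.2 V.
print(teg_max_power(2.0, 3.6, 4.0, 3.2))   # -> approximately (20.0, 10.0)
```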

  1. Improved system integration for integrated gasification combined cycle (IGCC) systems.

    PubMed

    Frey, H Christopher; Zhu, Yunhua

    2006-03-01

    Integrated gasification combined cycle (IGCC) systems are a promising technology for power generation. They include an air separation unit (ASU), a gasification system, and a gas turbine combined cycle power block, and feature competitive efficiency and lower emissions compared to conventional power generation technology. IGCC systems are not yet in widespread commercial use and opportunities remain to improve system feasibility via improved process integration. A process simulation model was developed for IGCC systems with alternative types of ASU and gas turbine integration. The model is applied to evaluate integration schemes involving nitrogen injection, air extraction, and combinations of both, as well as different ASU pressure levels. The optimal nitrogen injection only case in combination with an elevated pressure ASU had the highest efficiency and power output and approximately the lowest emissions per unit output of all cases considered, and thus is a recommended design option. The optimal combination of air extraction coupled with nitrogen injection had slightly worse efficiency, power output, and emissions than the optimal nitrogen injection only case. Air extraction alone typically produced lower efficiency, lower power output, and higher emissions than all other cases. The recommended nitrogen injection only case is estimated to provide annualized cost savings compared to a nonintegrated design. Process simulation modeling is shown to be a useful tool for evaluation and screening of technology options.

  2. Linking the Weather Generator with Regional Climate Model

    NASA Astrophysics Data System (ADS)

    Dubrovsky, Martin; Farda, Ales; Skalak, Petr; Huth, Radan

    2013-04-01

    One of the downscaling approaches, which transform the raw outputs from the climate models (GCMs or RCMs) into data with more realistic structure, is based on linking the stochastic weather generator with the climate model output. The present contribution, in which the parametric daily surface weather generator (WG) M&Rfi is linked to the RCM output, follows two aims: (1) Validation of the new simulations of the present climate (1961-1990) made by the ALADIN-Climate Regional Climate Model at 25 km resolution. The WG parameters are derived from the RCM-simulated surface weather series and compared to those derived from weather series observed in 125 Czech meteorological stations. The set of WG parameters will include statistics of the surface temperature and precipitation series (including probability of wet day occurrence). (2) Presenting a methodology for linking the WG with RCM output. This methodology, which is based on merging information from observations and RCM, may be interpreted as a downscaling procedure, whose product is a gridded WG capable of producing realistic synthetic multivariate weather series for weather-ungauged locations. In this procedure, WG is calibrated with RCM-simulated multi-variate weather series in the first step, and the grid specific WG parameters are then de-biased by spatially interpolated correction factors based on comparison of WG parameters calibrated with gridded RCM weather series and spatially scarcer observations. The quality of the weather series produced by the resultant gridded WG will be assessed in terms of selected climatic characteristics (focusing on characteristics related to variability and extremes of surface temperature and precipitation). Acknowledgements: The present experiment is made within the frame of projects ALARO-Climate (project P209/11/2405 sponsored by the Czech Science Foundation), WG4VALUE (project LD12029 sponsored by the Ministry of Education, Youth and Sports of CR) and VALUE (COST ES 1102 action).

  3. Regional statistical assessment of WRF-Hydro and IFC Model stream Flow uncertainties over the State of Iowa

    NASA Astrophysics Data System (ADS)

    ElSaadani, M.; Quintero, F.; Goska, R.; Krajewski, W. F.; Lahmers, T.; Small, S.; Gochis, D. J.

    2015-12-01

    This study examines the performance of different hydrologic models in estimating peak flows over the state of Iowa. In this study I will compare the output of the Iowa Flood Center (IFC) hydrologic model and WRF-Hydro (NFIE configuration) to the observed flows at the USGS stream gauges. During the National Flood Interoperability Experiment I explored the performance of WRF-Hydro over the state of Iowa using different rainfall products, and the resulting hydrographs showed a "flashy" behavior of the model output due to lack of calibration and poor initial flows caused by a short model spin-up period. I would like to expand this study by including a second well-established hydrologic model and include more rain gauge vs. radar rainfall direct comparisons. The IFC model is expected to outperform WRF-Hydro's out-of-the-box results; however, I will test different calibration options for both the Noah-MP land surface model and RAPID, which is the routing component of the NFIE-Hydro configuration, to see if this will improve the model results. This study will explore the statistical structure of model output uncertainties across scales (as a function of drainage areas and/or stream orders). I will also evaluate the performance of different radar-based Quantitative Precipitation Estimation (QPE) products (e.g. Stage IV, MRMS, and IFC's NEXRAD-based radar rainfall product). Different basins will be evaluated in this study and they will be selected based on size, amount of rainfall received over the basin area, and location. Basin location will be an important factor in this study due to our prior knowledge of the performance of the different NEXRAD radars that cover the region; this will help observe the effect of rainfall biases on stream flows. Another possible addition to this study is to apply controlled spatial error fields to rainfall inputs and observe the propagation of these errors through the stream network.

  4. Advanced optical simulation of scintillation detectors in GATE V8.0: first implementation of a reflectance model based on measured data

    NASA Astrophysics Data System (ADS)

    Stockhoff, Mariele; Jan, Sebastien; Dubois, Albertine; Cherry, Simon R.; Roncali, Emilie

    2017-06-01

    Typical PET detectors are composed of a scintillator coupled to a photodetector that detects scintillation photons produced when high energy gamma photons interact with the crystal. A critical performance factor is the collection efficiency of these scintillation photons, which can be optimized through simulation. Accurate modelling of photon interactions with crystal surfaces is essential in optical simulations, but the existing UNIFIED model in GATE is often inaccurate, especially for rough surfaces. Previously a new approach for modelling surface reflections based on measured surfaces was validated using custom Monte Carlo code. In this work, the LUT Davis model is implemented and validated in GATE and GEANT4, and is made accessible for all users in the nuclear imaging research community. Look-up-tables (LUTs) from various crystal surfaces are calculated based on measured surfaces obtained by atomic force microscopy. The LUTs include photon reflection probabilities and directions depending on incidence angle. We provide LUTs for rough and polished surfaces with different reflectors and coupling media. Validation parameters include light output measured at different depths of interaction in the crystal and photon track lengths, as both parameters are strongly dependent on reflector characteristics and distinguish between models. Results from the GATE/GEANT4 beta version are compared to those from our custom code and experimental data, as well as the UNIFIED model. GATE simulations with the LUT Davis model show average variations in light output of <2% from the custom code and excellent agreement for track lengths with R² > 0.99. Experimental data agree within 9% for relative light output. The new model also simplifies surface definition, as no complex input parameters are needed. The LUT Davis model makes optical simulations for nuclear imaging detectors much more precise, especially for studies with rough crystal surfaces. It will be available in GATE V8.0.

  5. Evaluation and Application of Gridded Snow Water Equivalent Products for Improving Snowmelt Flood Predictions in the Red River Basin of the North

    NASA Astrophysics Data System (ADS)

    Schroeder, R.; Jacobs, J. M.; Vuyovich, C.; Cho, E.; Tuttle, S. E.

    2017-12-01

    Each spring the Red River basin (RRB) of the North, located between the states of Minnesota and North Dakota and southern Manitoba, is vulnerable to dangerous spring snowmelt floods. Flat terrain, low permeability soils and a lack of satisfactory ground observations of snow pack conditions make accurate predictions of the onset and magnitude of major spring flood events in the RRB very challenging. This study investigated the potential benefit of using gridded snow water equivalent (SWE) products from passive microwave satellite missions and model output simulations to improve snowmelt flood predictions in the RRB using NOAA's operational Community Hydrologic Prediction System (CHPS). Level-3 satellite SWE products from AMSR-E, AMSR2 and SSM/I, SWE computed from Level-2 brightness temperature (Tb) measurements, and model output simulations of SWE from SNODAS and GlobSnow-2 were chosen to support the snowmelt modeling exercises. SWE observations were aggregated spatially (i.e. to the NOAA North Central River Forecast Center forecast basins) and temporally (i.e. by obtaining daily screened and weekly unscreened maximum SWE composites) to assess the value of daily satellite SWE observations relative to weekly maximums. Data screening methods removed the impacts of snow melt and cloud contamination on SWE and consisted of diurnal SWE differences and a temperature-insensitive polarization difference ratio, respectively. We examined the ability of the satellite and model output simulations to capture peak SWE and investigated temporal accuracies of screened and unscreened satellite and model output SWE. The resulting SWE observations were employed to update the SNOW-17 snow accumulation and ablation model of CHPS to assess the benefit of using temporally and spatially consistent SWE observations for snow melt predictions in two test basins in the RRB.

  6. A quantum causal discovery algorithm

    NASA Astrophysics Data System (ADS)

    Giarmatzi, Christina; Costa, Fabio

    2018-03-01

    Finding a causal model for a set of classical variables is now a well-established task—but what about the quantum equivalent? Even the notion of a quantum causal model is controversial. Here, we present a causal discovery algorithm for quantum systems. The input to the algorithm is a process matrix describing correlations between quantum events. Its output consists of different levels of information about the underlying causal model. Our algorithm determines whether the process is causally ordered by grouping the events into causally ordered non-signaling sets. It detects if all relevant common causes are included in the process, which we label Markovian, or alternatively if some causal relations are mediated through some external memory. For a Markovian process, it outputs a causal model, namely the causal relations and the corresponding mechanisms, represented as quantum states and channels. Our algorithm opens the route to more general quantum causal discovery methods.

  7. The spatial structure of a nonlinear receptive field.

    PubMed

    Schwartz, Gregory W; Okawa, Haruhisa; Dunn, Felice A; Morgan, Josh L; Kerschensteiner, Daniel; Wong, Rachel O; Rieke, Fred

    2012-11-01

    Understanding a sensory system implies the ability to predict responses to a variety of inputs from a common model. In the retina, this includes predicting how the integration of signals across visual space shapes the outputs of retinal ganglion cells. Existing models of this process generalize poorly to predict responses to new stimuli. This failure arises in part from properties of the ganglion cell response that are not well captured by standard receptive-field mapping techniques: nonlinear spatial integration and fine-scale heterogeneities in spatial sampling. Here we characterize a ganglion cell's spatial receptive field using a mechanistic model based on measurements of the physiological properties and connectivity of only the primary excitatory circuitry of the retina. The resulting simplified circuit model successfully predicts ganglion-cell responses to a variety of spatial patterns and thus provides a direct correspondence between circuit connectivity and retinal output.

  8. Analysis performance of proton exchange membrane fuel cell (PEMFC)

    NASA Astrophysics Data System (ADS)

    Mubin, A. N. A.; Bahrom, M. H.; Azri, M.; Ibrahim, Z.; Rahim, N. A.; Raihan, S. R. S.

    2017-06-01

    Recently, the proton exchange membrane fuel cell (PEMFC) has gained much attention in renewable energy technology because it is a mechanically ideal, zero-emission power source. PEMFC performance depends on operating conditions such as temperature and pressure. This paper presents an analysis of the performance of the PEMFC by developing a mathematical thermodynamic model in Matlab/Simulink. The differential equations of the thermodynamic model of the PEMFC are used to explain the contribution of heat to the output-voltage performance of the PEMFC. In addition, the partial-pressure equation for hydrogen is included in the PEMFC mathematical model to study the PEMFC voltage behaviour with respect to the hydrogen input pressure. The efficiency of the model is 33.8%, calculated by applying the energy-conversion device equations to the thermal efficiency. The PEMFC's output voltage increases with increasing hydrogen input pressure and temperature.
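    The hydrogen partial-pressure dependence described above is commonly captured with a Nernst-type relation; the sketch below uses the generic textbook form, which is not necessarily the exact formulation of the cited Matlab/Simulink model, and the operating conditions are illustrative.

```python
import numpy as np

R = 8.314      # J/(mol K), universal gas constant
F = 96485.0    # C/mol, Faraday constant

def pem_nernst_voltage(t_kelvin, p_h2, p_o2, e0=1.229):
    """Generic Nernst relation often used in PEMFC models:
    E = E0 + (R*T / (2*F)) * ln(p_H2 * sqrt(p_O2)), pressures in atm.
    This is a textbook form, assumed here for illustration; the paper's
    Simulink model may use a different parameterisation."""
    return e0 + (R * t_kelvin / (2.0 * F)) * np.log(p_h2 * np.sqrt(p_o2))

# Cell voltage rises with hydrogen pressure, consistent with the reported trend.
print(pem_nernst_voltage(333.0, 1.5, 1.0))   # 60 degC, 1.5 atm H2, 1 atm O2
```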

  9. RM-CLEAN: RM spectra cleaner

    NASA Astrophysics Data System (ADS)

    Heald, George

    2017-08-01

    RM-CLEAN reads in dirty Q and U cubes, generates rmtf based on the frequencies given in an ASCII file, and cleans the RM spectra following the algorithm given by Brentjens (2007). The output cubes contain the clean model components and the CLEANed RM spectra. The input cubes must be reordered with mode=312, and the output cubes will have the same ordering and thus must be reordered after being written to disk. RM-CLEAN runs as a MIRIAD (ascl:1106.007) task and a Python wrapper is included with the code.

  10. UNRES server for physics-based coarse-grained simulations and prediction of protein structure, dynamics and thermodynamics.

    PubMed

    Czaplewski, Cezary; Karczynska, Agnieszka; Sieradzan, Adam K; Liwo, Adam

    2018-04-30

    A server implementation of the UNRES package (http://www.unres.pl) for coarse-grained simulations of protein structures with the physics-based UNRES model, named the UNRES server, is presented. In contrast to most protein coarse-grained models, owing to its physics-based origin, the UNRES force field can be used in simulations, including those aimed at protein-structure prediction, without ancillary information from structural databases; however, the implementation includes the possibility of using restraints. Local energy minimization, canonical molecular dynamics simulations, replica exchange and multiplexed replica exchange molecular dynamics simulations can be run with the current UNRES server; the latter are suitable for protein-structure prediction. The user-supplied input includes the protein sequence and, optionally, restraints from secondary-structure prediction or small-angle X-ray scattering data, as well as the simulation type and parameters, which are selected or typed in. Oligomeric proteins, as well as those containing D-amino-acid residues and disulfide links, can be treated. The output is displayed graphically (minimized structures, trajectories, final models, analysis of trajectories/ensembles); all output files can also be downloaded by the user. The UNRES server can be freely accessed at http://unres-server.chem.ug.edu.pl.

  11. A computer program for simulating geohydrologic systems in three dimensions

    USGS Publications Warehouse

    Posson, D.R.; Hearne, G.A.; Tracy, J.V.; Frenzel, P.F.

    1980-01-01

    This document is directed toward individuals who wish to use a computer program to simulate ground-water flow in three dimensions. The strongly implicit procedure (SIP) numerical method is used to solve the set of simultaneous equations. New data processing techniques and program input and output options are emphasized. The aquifer system to be modeled may be heterogeneous and anisotropic, and may include both artesian and water-table conditions. Systems which consist of well-defined alternating layers of highly permeable and poorly permeable material may be represented by a sequence of equations for two-dimensional flow in each of the highly permeable units. Boundaries where head or flux is user-specified may be irregularly shaped. The program also allows the user to represent streams as limited-source boundaries when the streamflow is small in relation to the hydraulic stress on the system. The data-processing techniques relating to 'cube' input and output, to swapping of layers, to restarting of simulation, to free-format NAMELIST input, to the details of each subroutine's logic, and to the overlay program structure are discussed. The program is capable of processing large models that might overflow computer memories with conventional programs. Detailed instructions for selecting program options, for initializing the data arrays, for defining 'cube' output lists and maps, and for plotting hydrographs of calculated and observed heads and/or drawdowns are provided. Output may be restricted to those nodes of particular interest, thereby reducing the volume of printout for modelers, which may be critical when working at remote terminals. 'Cube' input commands allow the modeler to set aquifer parameters and initialize the model with very few input records. Appendixes provide instructions to compile the program, definitions and cross-references for program variables, a summary of the FLECS structured FORTRAN programming language, listings of the FLECS and FORTRAN source code, and samples of input and output for example simulations. (USGS)

  12. [Ecological management model of agriculture-pasture ecotone based on the theory of energy and material flow--a case study in Houshan dryland area of Inner Mongolia].

    PubMed

    Fan, Jinlong; Pan, Zhihua; Zhao, Ju; Zheng, Dawei; Tuo, Debao; Zhao, Peiyi

    2004-04-01

    The degradation of the ecological environment in the agriculture-pasture ecotone of northern China has received increasing attention. Based on our many years of research and guided by the theory of energy and material flow, this paper puts forward an ecological management model, with a hill as the basic cell, according to the natural, social and economic characteristics of the Houshan dryland farming area inside the northern agriculture-pasture ecotone. The inputs and outputs of three models, i.e., the traditional along-slope-tillage model, the artificial grassland model and the ecological management model, were observed and recorded in detail in 1999. Energy and material flow analysis based on the field test showed that, compared with the traditional model, the ecological management model could increase solar use efficiency by 8.3%, energy output by 8.7%, energy conversion efficiency by 19.4%, N output by 26.5%, N conversion efficiency by 57.1%, P output by 12.1%, P conversion efficiency by 45.0%, and water use efficiency by 17.7%. Among the models, the artificial grassland model had the lowest solar use efficiency, energy output and energy conversion efficiency, while the ecological management model had the greatest outputs and benefits, was the best model with high economic effect, and increased economic benefits by 16.1% compared with the traditional model.

  13. A Spectral Method for Spatial Downscaling

    PubMed Central

    Reich, Brian J.; Chang, Howard H.; Foley, Kristen M.

    2014-01-01

    Summary: Complex computer models play a crucial role in air quality research. These models are used to evaluate potential regulatory impacts of emission control strategies and to estimate air quality in areas without monitoring data. For both of these purposes, it is important to calibrate model output with monitoring data to adjust for model biases and improve spatial prediction. In this article, we propose a new spectral method to study and exploit complex relationships between model output and monitoring data. Spectral methods allow us to estimate the relationship between model output and monitoring data separately at different spatial scales, and to use model output for prediction only at the appropriate scales. The proposed method is computationally efficient and can be implemented using standard software. We apply the method to compare Community Multiscale Air Quality (CMAQ) model output with ozone measurements in the United States in July 2005. We find that CMAQ captures large-scale spatial trends, but has low correlation with the monitoring data at small spatial scales. PMID:24965037

  14. Applying Monte Carlo Simulation to Launch Vehicle Design and Requirements Analysis

    NASA Technical Reports Server (NTRS)

    Hanson, J. M.; Beard, B. B.

    2010-01-01

    This Technical Publication (TP) is meant to address a number of topics related to the application of Monte Carlo simulation to launch vehicle design and requirements analysis. Although the focus is on a launch vehicle application, the methods may be applied to other complex systems as well. The TP is organized so that all the important topics are covered in the main text, and detailed derivations are in the appendices. The TP first introduces Monte Carlo simulation and the major topics to be discussed, including discussion of the input distributions for Monte Carlo runs, testing the simulation, how many runs are necessary for verification of requirements, what to do if results are desired for events that happen only rarely, and postprocessing, including analyzing any failed runs, examples of useful output products, and statistical information for generating desired results from the output data. Topics in the appendices include some tables for requirements verification, derivation of the number of runs required and generation of output probabilistic data with consumer risk included, derivation of launch vehicle models to include possible variations of assembled vehicles, minimization of a consumable to achieve a two-dimensional statistical result, recontact probability during staging, ensuring duplicated Monte Carlo random variations, and importance sampling.
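    For the question of how many runs are necessary, a commonly used starting point is the zero-failure binomial bound sketched below; the TP's own tables, which also account for consumer risk, may give different values.

```python
import math

def runs_required(p_success, confidence):
    """Minimum number of independent Monte Carlo runs with zero observed
    failures needed to claim P(success) >= p_success at the given confidence,
    from the binomial bound p_success**n <= 1 - confidence.  This is the
    standard zero-failure result, not necessarily the TP's exact derivation."""
    return math.ceil(math.log(1.0 - confidence) / math.log(p_success))

print(runs_required(0.997, 0.90))   # -> 767 runs with no failures allowed
```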

  15. The Lake Tahoe Basin Land Use Simulation Model

    USGS Publications Warehouse

    Forney, William M.; Oldham, I. Benson

    2011-01-01

    This U.S. Geological Survey Open-File Report describes the final modeling product for the Tahoe Decision Support System project for the Lake Tahoe Basin funded by the Southern Nevada Public Land Management Act and the U.S. Geological Survey's Geographic Analysis and Monitoring Program. This research was conducted by the U.S. Geological Survey Western Geographic Science Center. The purpose of this report is to describe the basic elements of the novel Lake Tahoe Basin Land Use Simulation Model, publish samples of the data inputs, basic outputs of the model, and the details of the Python code. The results of this report include a basic description of the Land Use Simulation Model, descriptions and summary statistics of model inputs, two figures showing the graphical user interface from the web-based tool, samples of the two input files, seven tables of basic output results from the web-based tool and descriptions of their parameters, and the fully functional Python code.

  16. PV_LIB Toolbox v. 1.3

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2015-12-09

    PV_LIB comprises a library of Matlab code for modeling photovoltaic (PV) systems. Included are functions to compute solar position and to estimate irradiance in the PV system's plane of array, cell temperature, PV module electrical output, and conversion from DC to AC power. Also included are functions that aid in determining parameters for module performance models from module characterization testing. PV_LIB is open source code primarily intended for research and academic purposes. All algorithms are documented in openly available literature with the appropriate references included in comments within the code.

  17. Use of regional climate model output for hydrologic simulations

    USGS Publications Warehouse

    Hay, L.E.; Clark, M.P.; Wilby, R.L.; Gutowski, W.J.; Leavesley, G.H.; Pan, Z.; Arritt, R.W.; Takle, E.S.

    2002-01-01

    Daily precipitation and maximum and minimum temperature time series from a regional climate model (RegCM2), configured using the continental United States as a domain and run at approximately 52-km spatial resolution, were used as input to a distributed hydrologic model for one rainfall-dominated basin (Alapaha River at Statenville, Georgia) and three snowmelt-dominated basins (Animas River at Durango, Colorado; east fork of the Carson River near Gardnerville, Nevada; and Cle Elum River near Roslyn, Washington). For comparison purposes, spatially averaged daily datasets of precipitation and maximum and minimum temperature were developed from measured data for each basin. These datasets included precipitation and temperature data for all stations (hereafter, All-Sta) located within the area of the RegCM2 output used for each basin, but excluded station data used to calibrate the hydrologic model. Both the RegCM2 output and All-Sta data capture the gross aspects of the seasonal cycles of precipitation and temperature. However, in all four basins, the RegCM2- and All-Sta-based simulations of runoff show little skill on a daily basis [Nash-Sutcliffe (NS) values range from 0.05 to 0.37 for RegCM2 and -0.08 to 0.65 for All-Sta]. When the precipitation and temperature biases are corrected in the RegCM2 output and All-Sta data (Bias-RegCM2 and Bias-All, respectively), the accuracy of the daily runoff simulations improves dramatically for the snowmelt-dominated basins (NS values range from 0.41 to 0.66 for RegCM2 and 0.60 to 0.76 for All-Sta). In the rainfall-dominated basin, runoff simulations based on the Bias-RegCM2 output show no skill (NS value of 0.09), whereas Bias-All simulated runoff improves (NS value improved from -0.08 to 0.72). These results indicate that measured data at the coarse resolution of the RegCM2 output can be made appropriate for basin-scale modeling through bias correction (essentially a magnitude correction). However, RegCM2 output, even when bias corrected, does not contain the day-to-day variability present in the All-Sta dataset that is necessary for basin-scale modeling. Future work is warranted to identify the causes of systematic biases in RegCM2 simulations, develop methods to remove the biases, and improve RegCM2 simulations of daily variability in local climate.
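    The skill scores and the "essentially a magnitude correction" mentioned above can be illustrated with the short sketch below: a Nash-Sutcliffe efficiency and a per-month ratio correction. The grouping by calendar month is an assumption for illustration; the paper's exact bias-correction procedure may differ.

```python
import numpy as np

def nash_sutcliffe(obs, sim):
    """Nash-Sutcliffe efficiency: 1 - SSE / variance of the observations."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def monthly_ratio_correction(sim, obs, months):
    """Magnitude (ratio) correction applied per calendar month, in the spirit
    of the bias correction described above.  `months` holds the calendar month
    (1-12) of each daily value."""
    sim, obs, months = map(np.asarray, (sim, obs, months))
    corrected = sim.astype(float).copy()
    for m in range(1, 13):
        mask = months == m
        if mask.any() and sim[mask].mean() > 0:
            corrected[mask] *= obs[mask].mean() / sim[mask].mean()
    return corrected
```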

  18. Assessing the Operational Robustness of the Homer Model for Marine Corps Use in Expeditionary Environments

    DTIC Science & Technology

    2014-06-01

    systems. It can model systems including both conventional, diesel powered generators and renewable power sources such as photovoltaic arrays and wind...conducted an experiment where he assessed the capabilities of the HOMER model in forecasting the power output of a solar panel at NPS [32]. In his ex...energy efficiency in expeditionary operations, the HOMER micropower optimization model provides potential to serve as a powerful tool for improving

  19. Mathematical Modeling of Physical and Cognitive Performance Decrement from Mechanical and Inhalation Insults

    DTIC Science & Technology

    2006-12-01

    on specific short term problems. 1.1.1 Dynamic Physiological Modeling The oxygenation of the blood by the lung through respiration is a critical...tests as apnea, reduced arterial saturation, and may even be linked to long term CNS deficits. Inhalation of toxic gases can dramatically affect the...of TGAS model the respiration, circulation, and metabolic processes and include models of the ventilation and cardiac output control due to 3

  20. The output voltage model and experiment of magnetostrictive displacement sensor based on Wiedemann effect

    NASA Astrophysics Data System (ADS)

    Wang, Bowen; Li, Yuanyuan; Xie, Xinliang; Huang, Wenmei; Weng, Ling; Zhang, Changgeng

    2018-05-01

    Based on the Wiedemann effect and the inverse magnetostrictive effect, the output voltage model of a magnetostrictive displacement sensor has been established. The output voltage of the magnetostrictive displacement sensor is calculated for different magnetic fields. It is found that the calculated result is in agreement with the experimental one. The theoretical and experimental results show that the output voltage of the displacement sensor is linearly related to the magnetostriction difference, (λl-λt), of the waveguide wires. The measured output voltages for the Fe-Ga and Fe-Ni wire sensors are 51.5 mV and 36.5 mV, respectively, and the output voltage of the Fe-Ga wire sensor is obviously higher than that of the Fe-Ni wire sensor under the same magnetic field. The model can be used to predict the output voltage of the sensor and to provide guidance for the optimization design of the sensor.

  1. Using the split Hopkinson pressure bar to validate material models.

    PubMed

    Church, Philip; Cornish, Rory; Cullis, Ian; Gould, Peter; Lewtas, Ian

    2014-08-28

    This paper gives a discussion of the use of the split-Hopkinson bar with particular reference to the requirements of materials modelling at QinetiQ. This is to deploy validated material models for numerical simulations that are physically based and have as little characterization overhead as possible. In order to have confidence that the models have a wide range of applicability, this means, at most, characterizing the models at low rate and then validating them at high rate. The split Hopkinson pressure bar (SHPB) is ideal for this purpose. It is also a very useful tool for analysing material behaviour under non-shock wave loading. This means understanding the output of the test and developing techniques for reliable comparison of simulations with SHPB data. For materials other than metals, comparison with an output stress versus strain curve is not sufficient, as the assumptions built into the classical analysis are generally violated. The method described in this paper compares the simulations with as much validation data as can be derived from deployed instrumentation, including the raw strain gauge data on the input and output bars, which avoids any assumptions about stress equilibrium. One has to take into account Pochhammer-Chree oscillations and their effect on the specimen and recognize that this is itself also a valuable validation test of the material model. © 2014 The Author(s) Published by the Royal Society. All rights reserved.
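    For reference, the "classical analysis" whose assumptions the authors note are violated for non-metals reduces the bar strain-gauge signals to specimen stress and strain roughly as sketched below (uniaxial stress, specimen in equilibrium). These are the textbook relations, not QinetiQ's validation procedure.

```python
import numpy as np

def shpb_classical(eps_r, eps_t, dt, bar_E, bar_area, spec_area, spec_len, c0):
    """Classical (stress-equilibrium) SHPB reduction from the reflected (eps_r)
    and transmitted (eps_t) bar strain signals:
      strain rate = -2 * c0 * eps_r / L_s
      stress      = E_bar * (A_bar / A_s) * eps_t
    The assumptions behind these formulas are exactly what breaks down for
    non-metallic specimens, as the abstract points out."""
    eps_r, eps_t = np.asarray(eps_r, float), np.asarray(eps_t, float)
    strain_rate = -2.0 * c0 * eps_r / spec_len
    strain = np.cumsum(strain_rate) * dt                 # simple time integration
    stress = bar_E * (bar_area / spec_area) * eps_t
    return strain, stress, strain_rate
```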

  2. Multi-level emulation of complex climate model responses to boundary forcing data

    NASA Astrophysics Data System (ADS)

    Tran, Giang T.; Oliver, Kevin I. C.; Holden, Philip B.; Edwards, Neil R.; Sóbester, András; Challenor, Peter

    2018-04-01

    Climate model components involve both high-dimensional input and output fields. It is desirable to efficiently generate spatio-temporal outputs of these models for applications in integrated assessment modelling or to assess the statistical relationship between such sets of inputs and outputs, for example, uncertainty analysis. However, the need for efficiency often compromises the fidelity of output through the use of low complexity models. Here, we develop a technique which combines statistical emulation with a dimensionality reduction technique to emulate a wide range of outputs from an atmospheric general circulation model, PLASIM, as functions of the boundary forcing prescribed by the ocean component of a lower complexity climate model, GENIE-1. Although accurate and detailed spatial information on atmospheric variables such as precipitation and wind speed is well beyond the capability of GENIE-1's energy-moisture balance model of the atmosphere, this study demonstrates that the output of this model is useful in predicting PLASIM's spatio-temporal fields through multi-level emulation. Meaningful information from the fast model, GENIE-1 was extracted by utilising the correlation between variables of the same type in the two models and between variables of different types in PLASIM. We present here the construction and validation of several PLASIM variable emulators and discuss their potential use in developing a hybrid model with statistical components.

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shepard, Kenneth L.; Sturcken, Noah Andrew

    Power controller includes an output terminal having an output voltage, at least one clock generator to generate a plurality of clock signals and a plurality of hardware phases. Each hardware phase is coupled to the at least one clock generator and the output terminal and includes a comparator. Each hardware phase is configured to receive a corresponding one of the plurality of clock signals and a reference voltage, combine the corresponding clock signal and the reference voltage to produce a reference input, generate a feedback voltage based on the output voltage, compare the reference input and the feedback voltage using the comparator and provide a comparator output to the output terminal, whereby the comparator output determines a duty cycle of the power controller. An integrated circuit including the power controller is also provided.

  4. Reduced order models for assessing CO2 impacts in shallow unconfined aquifers

    DOE PAGES

    Keating, Elizabeth H.; Harp, Dylan H.; Dai, Zhenxue; ...

    2016-01-28

    Risk assessment studies of potential CO2 sequestration projects consider many factors, including the possibility of brine and/or CO2 leakage from the storage reservoir. Detailed multiphase reactive transport simulations have been developed to predict the impact of such leaks on shallow groundwater quality; however, these simulations are computationally expensive and thus difficult to directly embed in a probabilistic risk assessment analysis. Here we present a process for developing computationally fast reduced-order models which emulate key features of the more detailed reactive transport simulations. A large ensemble of simulations that take into account uncertainty in aquifer characteristics and CO2/brine leakage scenarios were performed. Twelve simulation outputs of interest were used to develop response surfaces (RSs) using a MARS (multivariate adaptive regression splines) algorithm (Milborrow, 2015). A key part of this study is to compare different measures of ROM accuracy. We then show that for some computed outputs, MARS performs very well in matching the simulation data. The capability of the RS to predict simulation outputs for parameter combinations not used in RS development was tested using cross-validation. Again, for some outputs, these results were quite good. For other outputs, however, the method performs relatively poorly. Performance was best for predicting the volume of depressed-pH plumes, and was relatively poor for predicting organic and trace metal plume volumes. We believe several factors, including the non-linearity of the problem, complexity of the geochemistry, and granularity in the simulation results, contribute to this varied performance. The reduced order models were developed principally to be used in probabilistic performance analysis where a large range of scenarios are considered and ensemble performance is calculated. We demonstrate that they effectively predict the ensemble behavior. But the performance of the RSs is much less accurate when used to predict time-varying outputs from a single simulation. If an analysis requires only a small number of scenarios to be investigated, computationally expensive physics-based simulations would likely provide more reliable results. Finally, if the aggregate behavior of a large number of realizations is the focus, as will be the case in probabilistic quantitative risk assessment, the methodology presented here is relatively robust.
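    The emulation-plus-cross-validation workflow can be sketched with any regression in place of MARS; the quadratic ridge surface below is a generic stand-in used only to illustrate fitting a response surface to ensemble outputs and scoring it by leave-one-out error.

```python
import numpy as np

def fit_quadratic_rs(X, y, ridge=1e-6):
    """Fit a quadratic response surface by ridge-regularised least squares.
    A generic stand-in for the MARS response surfaces used in the study."""
    def design(X):
        n, d = X.shape
        cols = [np.ones(n)] + [X[:, i] for i in range(d)]
        cols += [X[:, i] * X[:, j] for i in range(d) for j in range(i, d)]
        return np.column_stack(cols)
    A = design(X)
    coef = np.linalg.solve(A.T @ A + ridge * np.eye(A.shape[1]), A.T @ y)
    return lambda Xn: design(np.atleast_2d(Xn)) @ coef

def loo_rmse(X, y, fit=fit_quadratic_rs):
    """Leave-one-out cross-validation error of the fitted response surface."""
    errs = []
    for i in range(len(y)):
        mask = np.arange(len(y)) != i
        model = fit(X[mask], y[mask])
        errs.append(float(model(X[i])[0]) - y[i])
    return float(np.sqrt(np.mean(np.square(errs))))
```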

  5. A simple, generalizable method for measuring individual research productivity and its use in the long-term analysis of departmental performance, including between-country comparisons

    PubMed Central

    2013-01-01

    Background: A simple, generalizable method for measuring research output would be useful in attempts to build research capacity, and in other contexts. Methods: A simple indicator of individual research output was developed, based on grant income, publications and numbers of PhD students supervised. The feasibility and utility of the indicator was examined by using it to calculate research output from two similarly-sized research groups in different countries. The same indicator can be used to assess the balance in the research "portfolio" of an individual researcher. Results: Research output scores of 41 staff in Research Department A had a wide range, from zero to 8; the distribution of these scores was highly skewed. Only about 20% of the researchers had well-balanced research outputs, with approximately equal contributions from grants, papers and supervision. Over a five-year period, Department A's total research output rose, while the number of research staff decreased slightly, in other words research productivity (output per head) rose. Total research output from Research Department B, of approximately the same size as A, was similar, but slightly higher than Department A. Conclusions: The proposed indicator is feasible. The output score is dimensionless and can be used for comparisons within and between countries. Modeling can be used to explore the effect on research output of changing the size and composition of a research department. A sensitivity analysis shows that small increases in individual productivity result in relatively greater increases in overall departmental research output. The indicator appears to be potentially useful for capacity building, once the initial step of research priority setting has been completed. PMID:23317431

  6. A simple, generalizable method for measuring individual research productivity and its use in the long-term analysis of departmental performance, including between-country comparisons.

    PubMed

    Wootton, Richard

    2013-01-14

    A simple, generalizable method for measuring research output would be useful in attempts to build research capacity, and in other contexts. A simple indicator of individual research output was developed, based on grant income, publications and numbers of PhD students supervised. The feasibility and utility of the indicator was examined by using it to calculate research output from two similarly-sized research groups in different countries. The same indicator can be used to assess the balance in the research "portfolio" of an individual researcher. Research output scores of 41 staff in Research Department A had a wide range, from zero to 8; the distribution of these scores was highly skewed. Only about 20% of the researchers had well-balanced research outputs, with approximately equal contributions from grants, papers and supervision. Over a five-year period, Department A's total research output rose, while the number of research staff decreased slightly, in other words research productivity (output per head) rose. Total research output from Research Department B, of approximately the same size as A, was similar, but slightly higher than Department A. The proposed indicator is feasible. The output score is dimensionless and can be used for comparisons within and between countries. Modeling can be used to explore the effect on research output of changing the size and composition of a research department. A sensitivity analysis shows that small increases in individual productivity result in relatively greater increases in overall departmental research output. The indicator appears to be potentially useful for capacity building, once the initial step of research priority setting has been completed.
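    A toy version of the indicator, combining the three components named above into a dimensionless score; the normalising units are hypothetical, since the paper defines its own scaling and weighting.

```python
def research_output_score(grant_income, papers, phd_students,
                          income_unit=100_000.0, papers_unit=3.0, phd_unit=2.0):
    """Dimensionless research-output score combining the three components the
    paper describes (grant income, publications, PhD supervision).  The
    normalising units here are made up for illustration only."""
    components = (grant_income / income_unit,
                  papers / papers_unit,
                  phd_students / phd_unit)
    return sum(components), components   # total score and its balance

score, balance = research_output_score(grant_income=250_000, papers=6, phd_students=1)
print(round(score, 2), [round(c, 2) for c in balance])   # -> 5.0 [2.5, 2.0, 0.5]
```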

  7. An EKV-based high voltage MOSFET model with improved mobility and drift model

    NASA Astrophysics Data System (ADS)

    Chauhan, Yogesh Singh; Gillon, Renaud; Bakeroot, Benoit; Krummenacher, Francois; Declercq, Michel; Ionescu, Adrian Mihai

    2007-11-01

    An EKV-based high voltage MOSFET model is presented. The intrinsic channel model is derived based on the charge-based EKV formalism. An improved mobility model is used for the modeling of the intrinsic channel to improve the DC characteristics. The model uses second-order dependence on the gate bias and an extra parameter for the smoothing of the saturation voltage of the intrinsic drain. An improved drift model [Chauhan YS, Anghel C, Krummenacher F, Ionescu AM, Declercq M, Gillon R, et al. A highly scalable high voltage MOSFET model. In: IEEE European solid-state device research conference (ESSDERC), September 2006. p. 270-3; Chauhan YS, Anghel C, Krummenacher F, Maier C, Gillon R, Bakeroot B, et al. Scalable general high voltage MOSFET model including quasi-saturation and self-heating effect. Solid State Electron 2006;50(11-12):1801-13] is used for the modeling of the drift region, which gives a smoother transition in the output characteristics and also models the quasi-saturation region of high voltage MOSFETs well. First, the model is validated on the numerical device simulation of the VDMOS transistor and then on the measured characteristics of the SOI-LDMOS transistor. The accuracy of the model is better than that of our previous model [Chauhan YS, Anghel C, Krummenacher F, Maier C, Gillon R, Bakeroot B, et al. Scalable general high voltage MOSFET model including quasi-saturation and self-heating effect. Solid State Electron 2006;50(11-12):1801-13], especially in the quasi-saturation region of output characteristics.

  8. Estimating Prediction Uncertainty from Geographical Information System Raster Processing: A User's Manual for the Raster Error Propagation Tool (REPTool)

    USGS Publications Warehouse

    Gurdak, Jason J.; Qi, Sharon L.; Geisler, Michael L.

    2009-01-01

    The U.S. Geological Survey Raster Error Propagation Tool (REPTool) is a custom tool for use with the Environmental System Research Institute (ESRI) ArcGIS Desktop application to estimate error propagation and prediction uncertainty in raster processing operations and geospatial modeling. REPTool is designed to introduce concepts of error and uncertainty in geospatial data and modeling and provide users of ArcGIS Desktop a geoprocessing tool and methodology to consider how error affects geospatial model output. Similar to other geoprocessing tools available in ArcGIS Desktop, REPTool can be run from a dialog window, from the ArcMap command line, or from a Python script. REPTool consists of public-domain, Python-based packages that implement Latin Hypercube Sampling within a probabilistic framework to track error propagation in geospatial models and quantitatively estimate the uncertainty of the model output. Users may specify error for each input raster or model coefficient represented in the geospatial model. The error for the input rasters may be specified as either spatially invariant or spatially variable across the spatial domain. Users may specify model output as a distribution of uncertainty for each raster cell. REPTool uses the Relative Variance Contribution method to quantify the relative error contribution from the two primary components in the geospatial model - errors in the model input data and coefficients of the model variables. REPTool is appropriate for many types of geospatial processing operations, modeling applications, and related research questions, including applications that consider spatially invariant or spatially variable error in geospatial data.
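    The core idea, Latin Hypercube Sampling of input errors followed by repeated evaluation of the raster model, can be sketched as below. REPTool itself is an ArcGIS geoprocessing tool; this standalone sketch only illustrates the sampling-and-propagation step, and the cell-level model and error magnitudes are invented.

```python
import numpy as np
from scipy.stats import qmc, norm

def lhs_propagate(model, means, sds, n_samples=1000, seed=0):
    """Propagate normally distributed input errors through a per-cell raster
    model with Latin Hypercube Sampling.  `means` and `sds` give per-input
    means and standard deviations; `model` maps an input vector to the model
    output for one cell."""
    d = len(means)
    sampler = qmc.LatinHypercube(d=d, seed=seed)
    u = sampler.random(n_samples)                         # stratified uniform design
    x = norm.ppf(u) * np.asarray(sds) + np.asarray(means) # map to normal inputs
    outputs = np.array([model(row) for row in x])
    return outputs.mean(), outputs.std(ddof=1)            # output value and uncertainty

# Hypothetical cell-level model: recharge = precip * coeff - et
mean_out, sd_out = lhs_propagate(lambda v: v[0] * v[1] - v[2],
                                 means=[900.0, 0.15, 60.0], sds=[90.0, 0.03, 10.0])
print(round(mean_out, 1), round(sd_out, 1))
```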

  9. Development and testing of controller performance evaluation methodology for multi-input/multi-output digital control systems

    NASA Technical Reports Server (NTRS)

    Pototzky, Anthony; Wieseman, Carol; Hoadley, Sherwood Tiffany; Mukhopadhyay, Vivek

    1991-01-01

    Described here is the development and implementation of on-line, near real time controller performance evaluation (CPE) methods capability. Briefly discussed are the structure of data flow, the signal processing methods used to process the data, and the software developed to generate the transfer functions. This methodology is generic in nature and can be used in any type of multi-input/multi-output (MIMO) digital controller application, including digital flight control systems, digitally controlled spacecraft structures, and actively controlled wind tunnel models. Results of applying the CPE methodology to evaluate (in near real time) MIMO digital flutter suppression systems being tested on the Rockwell Active Flexible Wing (AFW) wind tunnel model are presented to demonstrate the CPE capability.

  10. Real-Time Kennedy Space Center and Cape Canaveral Air Force Station High-Resolution Model Implementation and Verification

    NASA Technical Reports Server (NTRS)

    Shafer, Jaclyn; Watson, Leela R.

    2015-01-01

    NASA's Launch Services Program, Ground Systems Development and Operations, Space Launch System and other programs at Kennedy Space Center (KSC) and Cape Canaveral Air Force Station (CCAFS) use the daily and weekly weather forecasts issued by the 45th Weather Squadron (45 WS) as decision tools for their day-to-day and launch operations on the Eastern Range (ER). Examples include determining if they need to limit activities such as vehicle transport to the launch pad, protect people, structures or exposed launch vehicles given a threat of severe weather, or reschedule other critical operations. The 45 WS uses numerical weather prediction models as a guide for these weather forecasts, particularly the Air Force Weather Agency (AFWA) 1.67 km Weather Research and Forecasting (WRF) model. Considering the 45 WS forecasters' and Launch Weather Officers' (LWO) extensive use of the AFWA model, the 45 WS proposed a task at the September 2013 Applied Meteorology Unit (AMU) Tasking Meeting requesting the AMU verify this model. Due to the lack of archived model data available from AFWA, verification is not yet possible. Instead, the AMU proposed to implement and verify the performance of an ER version of the high-resolution WRF Environmental Modeling System (EMS) model configured by the AMU (Watson 2013) in real time. Implementing a real-time version of the ER WRF-EMS would generate a larger database of model output than in the previous AMU task for determining model performance, and allows the AMU more control over and access to the model output archive. The tasking group agreed to this proposal; therefore the AMU implemented the WRF-EMS model on the second of two NASA AMU modeling clusters. The AMU also calculated verification statistics to determine model performance compared to observational data. Finally, the AMU made the model output available on the AMU Advanced Weather Interactive Processing System II (AWIPS II) servers, which allows the 45 WS and AMU staff to customize the model output display on the AMU and Range Weather Operations (RWO) AWIPS II client computers and conduct real-time subjective analyses.

  11. A new interpretation and validation of variance based importance measures for models with correlated inputs

    NASA Astrophysics Data System (ADS)

    Hao, Wenrui; Lu, Zhenzhou; Li, Luyi

    2013-05-01

    In order to explore the contributions by correlated input variables to the variance of the output, a novel interpretation framework of importance measure indices is proposed for a model with correlated inputs, which includes the indices of the total correlated contribution and the total uncorrelated contribution. The proposed indices accurately describe the connotations of the contributions by the correlated input to the variance of output, and they can be viewed as the complement and correction of the interpretation about the contributions by the correlated inputs presented in "Estimation of global sensitivity indices for models with dependent variables, Computer Physics Communications, 183 (2012) 937-946". Both of them contain the independent contribution by an individual input. Taking the general form of quadratic polynomial as an illustration, the total correlated contribution and the independent contribution by an individual input are derived analytically, from which the components and their origins of both contributions of correlated input can be clarified without any ambiguity. In the special case that no square term is included in the quadratic polynomial model, the total correlated contribution by the input can be further decomposed into the variance contribution related to the correlation of the input with other inputs and the independent contribution by the input itself, and the total uncorrelated contribution can be further decomposed into the independent part by interaction between the input and others and the independent part by the input itself. Numerical examples are employed and their results demonstrate that the derived analytical expressions of the variance-based importance measure are correct, and the clarification of the correlated input contribution to model output by the analytical derivation is very important for expanding the theory and solutions of uncorrelated input to those of the correlated one.

  12. Parameterization of Forest Canopies with the PROSAIL Model

    NASA Astrophysics Data System (ADS)

    Austerberry, M. J.; Grigsby, S.; Ustin, S.

    2013-12-01

    Particularly in forested environments, arboreal characteristics such as Leaf Area Index (LAI) and Leaf Inclination Angle have a large impact on the spectral characteristics of reflected radiation. The reflected spectrum can be measured directly with satellites or airborne instruments, including the MASTER and AVIRIS instruments. This particular project dealt with spectral analysis of reflected light as measured by AVIRIS compared to tree measurements taken from the ground. Chemical properties of leaves including pigment concentrations and moisture levels were also measured. The leaf data was combined with the chemical properties of three separate trees, and served as input data for a sequence of simulations with the PROSAIL Model, a combination of PROSPECT and Scattering by Arbitrarily Inclined Leaves (SAIL) simulations. The output was a computed reflectivity spectrum, which corresponded to the spectra that were directly measured by AVIRIS for the three trees' exact locations within a 34-meter pixel resolution. The input data that produced the best-correlating spectral output was then cross-referenced with LAI values that had been obtained through two entirely separate methods, NDVI extraction and use of the Beer-Lambert law with airborne LiDAR. Examination with regressive techniques between the measured and modeled spectra then enabled a determination of the trees' probable structure and leaf parameters. Highly-correlated spectral output corresponded well to specific values of LAI and Leaf Inclination Angle. Interestingly, it appears that varying Leaf Angle Distribution has little or no noticeable effect on the PROSAIL model. Not only is the effectiveness and accuracy of the PROSAIL model evaluated, but this project is a precursor to direct measurement of vegetative indices exclusively from airborne or satellite observation.

  13. Laboratory modeling and analysis of aircraft-lightning interactions

    NASA Technical Reports Server (NTRS)

    Turner, C. D.; Trost, T. F.

    1982-01-01

    Modeling studies of the interaction of a delta wing aircraft with direct lightning strikes were carried out using an approximate scale model of an F-106B. The model, which is three feet in length, is subjected to direct injection of fast current pulses supplied by wires, which simulate the lightning channel and are attached at various locations on the model. Measurements are made of the resulting transient electromagnetic fields using time derivative sensors. The sensor outputs are sampled and digitized by computer. The noise level is reduced by averaging the sensor output from ten input pulses at each sample time. Computer analysis of the measured fields includes Fourier transformation and the computation of transfer functions for the model. Prony analysis is also used to determine the natural frequencies of the model. Comparisons of model natural frequencies extracted by Prony analysis with those for in flight direct strike data usually show lower damping in the in flight case. This is indicative of either a lightning channel with a higher impedance than the wires on the model, only one attachment point, or short streamers instead of a long channel.

  14. A continuous-time neural model for sequential action.

    PubMed

    Kachergis, George; Wyatte, Dean; O'Reilly, Randall C; de Kleijn, Roy; Hommel, Bernhard

    2014-11-05

    Action selection, planning and execution are continuous processes that evolve over time, responding to perceptual feedback as well as evolving top-down constraints. Existing models of routine sequential action (e.g. coffee- or pancake-making) generally fall into one of two classes: hierarchical models that include hand-built task representations, or heterarchical models that must learn to represent hierarchy via temporal context, but thus far lack goal-orientedness. We present a biologically motivated model of the latter class that, because it is situated in the Leabra neural architecture, affords an opportunity to include both unsupervised and goal-directed learning mechanisms. Moreover, we embed this neurocomputational model in the theoretical framework of the theory of event coding (TEC), which posits that actions and perceptions share a common representation with bidirectional associations between the two. Thus, in this view, not only does perception select actions (along with task context), but actions are also used to generate perceptions (i.e. intended effects). We propose a neural model that implements TEC to carry out sequential action control in hierarchically structured tasks such as coffee-making. Unlike traditional feedforward discrete-time neural network models, which use static percepts to generate static outputs, our biological model accepts continuous-time inputs and likewise generates non-stationary outputs, making short-timescale dynamic predictions. © 2014 The Author(s) Published by the Royal Society. All rights reserved.

  15. A digital model for planning water management at Benton Lake National Wildlife Refuge, west-central Montana

    USGS Publications Warehouse

    Nimick, David A.; McCarthy, Peter M.; Fields, Vanessa

    2011-01-01

    Benton Lake National Wildlife Refuge is an important area for waterfowl production and migratory stopover in west-central Montana. Eight wetland units covering about 5,600 acres are the essential features of the refuge. Water availability for the wetland units can be uncertain owing to the large natural variations in precipitation and runoff and the high cost of pumping supplemental water. The U.S. Geological Survey, in cooperation with the U.S. Fish and Wildlife Service, has developed a digital model for planning water management. The model can simulate strategies for water transfers among the eight wetland units and account for variability in runoff and pumped water. This report describes this digital model, which uses a water-accounting spreadsheet to track inputs and outputs to each of the wetland units of Benton Lake National Wildlife Refuge. Inputs to the model include (1) monthly values for precipitation, pumped water, runoff, and evaporation; (2) water-level/capacity data for each wetland unit; and (3) the pan-evaporation coefficient. Outputs include monthly water volume and flooded surface area for each unit for as many as 5 consecutive years. The digital model was calibrated by comparing simulated and historical measured water volumes for specific test years.
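
    The following is a minimal sketch of the kind of monthly water-balance bookkeeping the report describes, not the USGS spreadsheet model itself; the unit capacity, monthly series, and pan coefficient are illustrative values, and the real model also uses water-level/capacity relations to convert volumes to flooded area.

    ```python
    # Minimal sketch of monthly water accounting for one wetland unit: storage
    # is updated from precipitation, pumped water, runoff, and evaporation, and
    # capped at the unit's capacity.
    def simulate_unit(capacity_af, precip, pumped, runoff, pan_evap,
                      pan_coeff=0.7, initial_af=0.0):
        """All monthly series in acre-feet; pan evaporation scaled by a pan coefficient."""
        storage = initial_af
        history = []
        for p, q, r, e in zip(precip, pumped, runoff, pan_evap):
            storage += p + q + r - pan_coeff * e
            storage = min(max(storage, 0.0), capacity_af)   # no negative or overfull storage
            history.append(storage)
        return history

    # Hypothetical 6-month example for a single unit with 1,200 acre-feet capacity.
    vols = simulate_unit(1200,
                         precip=[40, 55, 80, 60, 30, 20],
                         pumped=[0, 0, 200, 200, 0, 0],
                         runoff=[150, 300, 120, 40, 10, 5],
                         pan_evap=[30, 50, 90, 140, 170, 160])
    print(vols)
    ```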

  16. Efficiency measurement of health care organizations: What models are used?

    PubMed Central

    Jaafaripooyan, Ebrahim; Emamgholipour, Sara; Raei, Behzad

    2017-01-01

    Background: Literature abounds with various techniques for efficiency measurement of health care organizations (HCOs), which should be used cautiously and appropriately. The present study aimed at discovering the rules regulating the interplay among the number of inputs, outputs, and decision-making units (DMUs), identifying all methods used for the measurement of Iranian HCOs, and critically appraising all data envelopment analysis (DEA) studies on Iranian HCOs in their application of such rules. Methods: The present study employed a systematic search of all studies related to efficiency measurement of Iranian HCOs. A search was conducted in different databases such as PubMed and Scopus between 2001 and 2015 to identify the studies related to the measurement in health care. The retrieved studies passed through a multi-stage (title, abstract, body) filtering process. A data extraction table for each study was completed and included method, number of inputs and outputs, DMUs, and their efficiency score. Results: Various methods were found for efficiency measurement. Overall, 122 studies were retrieved, of which 73 had exclusively employed the DEA technique for measuring the efficiency of HCOs in Iran, and 23 had used hybrid models (including DEA). Only 6 studies had explicitly used the rules of thumb. Conclusion: The number of inputs, outputs, and DMUs should be cautiously selected in DEA-like techniques, as their proportionality can directly affect the discriminatory power of the technique. The given literature seemed to be, to a large extent, unsuccessful in attending to such proportionality. This study collected a list of key rules (of thumb) on the interplay of inputs, outputs, and DMUs, which could be considered by most researchers keen to apply the DEA technique.
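
    One commonly cited rule of thumb in the DEA literature (often attributed to Cooper, Seiford, and Tone) requires the number of DMUs to satisfy n >= max(m*s, 3*(m+s)) for m inputs and s outputs; the short check below illustrates that kind of proportionality test and is not necessarily the exact rule set compiled in this review.

    ```python
    # Hedged sketch of one commonly cited DEA rule of thumb relating the number
    # of DMUs to the number of inputs and outputs; offered as an illustration,
    # not as the specific rules used in the paper.
    def dea_discrimination_ok(n_dmus: int, n_inputs: int, n_outputs: int) -> bool:
        return n_dmus >= max(n_inputs * n_outputs, 3 * (n_inputs + n_outputs))

    # Example: 20 hospitals evaluated with 3 inputs and 2 outputs.
    print(dea_discrimination_ok(20, 3, 2))   # True: 20 >= max(6, 15)
    ```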

  17. Analyzing Power Supply and Demand on the ISS

    NASA Technical Reports Server (NTRS)

    Thomas, Justin; Pham, Tho; Halyard, Raymond; Conwell, Steve

    2006-01-01

    Station Power and Energy Evaluation Determiner (SPEED) is a Java application program for analyzing the supply and demand aspects of the electrical power system of the International Space Station (ISS). SPEED can be executed on any computer that supports version 1.4 or a subsequent version of the Java Runtime Environment. SPEED includes an analysis module, denoted the Simplified Battery Solar Array Model, which is a simplified engineering model of the ISS primary power system. This simplified model makes it possible to perform analyses quickly. SPEED also includes a user-friendly graphical-interface module, an input file system, a parameter-configuration module, an analysis-configuration-management subsystem, and an output subsystem. SPEED responds to input information on trajectory, shadowing, attitude, and pointing in either a state-of-charge mode or a power-availability mode. In the state-of-charge mode, SPEED calculates battery state-of-charge profiles, given a time-varying power-load profile. In the power-availability mode, SPEED determines the time-varying total available solar array and/or battery power output, given a minimum allowable battery state of charge.
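
    A minimal sketch of a state-of-charge calculation of the kind SPEED performs in its state-of-charge mode (not the SPEED code itself); the load profile, array power, efficiencies, and battery capacity below are hypothetical.

    ```python
    # Illustrative state-of-charge integration: surplus array power charges the
    # battery, deficits discharge it, and the state of charge stays in [0, 1].
    def soc_profile(load_w, array_w, dt_h, capacity_wh, soc0=1.0,
                    charge_eff=0.95, discharge_eff=0.95):
        soc = soc0
        out = []
        for p_load, p_array in zip(load_w, array_w):
            net = p_array - p_load                      # surplus charges, deficit discharges
            if net >= 0:
                soc += charge_eff * net * dt_h / capacity_wh
            else:
                soc += net * dt_h / (discharge_eff * capacity_wh)
            soc = min(max(soc, 0.0), 1.0)
            out.append(soc)
        return out

    # Hypothetical orbit: 60 min in sunlight, 30 min in eclipse, 1-minute steps.
    load = [5000] * 90
    array = [9000] * 60 + [0] * 30
    print(min(soc_profile(load, array, dt_h=1/60, capacity_wh=8000)))
    ```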

  18. Current Source Logic Gate

    NASA Technical Reports Server (NTRS)

    Krasowski, Michael J. (Inventor); Prokop, Norman F. (Inventor)

    2017-01-01

    A current source logic gate with depletion mode field effect transistors ("FETs") and resistors may include a current source, a current steering switch input stage, and a resistor divider level shifting output stage. The current source may include a transistor and a current source resistor. The current steering switch input stage may include a transistor to steer current to set an output stage bias point depending on an input logic signal state. The resistor divider level shifting output stage may include a first resistor and a second resistor to set the output stage bias point and produce valid output logic signal states. The transistor of the current steering switch input stage may function as a switch to provide at least two operating points.

  19. Predicting the synaptic information efficacy in cortical layer 5 pyramidal neurons using a minimal integrate-and-fire model.

    PubMed

    London, Michael; Larkum, Matthew E; Häusser, Michael

    2008-11-01

    Synaptic information efficacy (SIE) is a statistical measure to quantify the efficacy of a synapse. It measures how much information is gained, on the average, about the output spike train of a postsynaptic neuron if the input spike train is known. It is a particularly appropriate measure for assessing the input-output relationship of neurons receiving dynamic stimuli. Here, we compare the SIE of simulated synaptic inputs measured experimentally in layer 5 cortical pyramidal neurons in vitro with the SIE computed from a minimal model constructed to fit the recorded data. We show that even with a simple model that is far from perfect in predicting the precise timing of the output spikes of the real neuron, the SIE can still be accurately predicted. This arises from the ability of the model to predict output spikes influenced by the input more accurately than those driven by the background current. This indicates that in this context, some spikes may be more important than others. Lastly we demonstrate another aspect where using mutual information could be beneficial in evaluating the quality of a model, by measuring the mutual information between the model's output and the neuron's output. The SIE, thus, could be a useful tool for assessing the quality of models of single neurons in preserving input-output relationship, a property that becomes crucial when we start connecting these reduced models to construct complex realistic neuronal networks.

  20. Global and regional ecosystem modeling: comparison of model outputs and field measurements

    NASA Astrophysics Data System (ADS)

    Olson, R. J.; Hibbard, K.

    2003-04-01

    The Ecosystem Model-Data Intercomparison (EMDI) Workshops provide a venue for global ecosystem modeling groups to compare model outputs against measurements of net primary productivity (NPP). The objective of EMDI Workshops is to evaluate model performance relative to observations in order to improve confidence in global model projections of terrestrial carbon cycling. The questions addressed by EMDI include: How does the simulated NPP compare with the field data across biome and environmental gradients? How sensitive are models to site-specific climate? Does additional mechanistic detail in models result in a better match with field measurements? How useful are the measures of NPP for evaluating model predictions? How well do models represent regional patterns of NPP? Initial EMDI results showed general agreement between model predictions and field measurements but with obvious differences that indicated areas for potential data and model improvement. The effort was built on the development and compilation of complete and consistent databases for model initialization and comparison. Database development improves the data as well as models; however, there is a need to incorporate additional observations and model outputs (LAI, hydrology, etc.) for comprehensive analyses of biogeochemical processes and their relationships to ecosystem structure and function. EMDI initialization and NPP data sets are available from the Oak Ridge National Laboratory Distributed Active Archive Center http://www.daac.ornl.gov/. Acknowledgements: This work was partially supported by the International Geosphere-Biosphere Programme - Data and Information System (IGBP-DIS); the IGBP-Global Analysis, Interpretation and Modelling Task Force (GAIM); the National Center for Ecological Analysis and Synthesis (NCEAS); and the National Aeronautics and Space Administration (NASA) Terrestrial Ecosystem Program. Oak Ridge National Laboratory is managed by UT-Battelle LLC for the U.S. Department of Energy under contract DE-AC05-00OR22725.

  1. College Explorer.

    ERIC Educational Resources Information Center

    Ahl, David H.

    1985-01-01

    The "College Explorer" is a software package (for the 64K Apple II, IBM PC, TRS-80 model III and 4 microcomputers) which aids in choosing a college. The major features of this package (manufactured by The College Board) are described and evaluated. Sample input/output is included. (JN)

  2. Power generation systems and methods

    NASA Technical Reports Server (NTRS)

    Jones, Jack A. (Inventor); Chao, Yi (Inventor)

    2011-01-01

    A power generation system includes a plurality of submerged mechanical devices. Each device includes a pump that can be powered, in operation, by mechanical energy to output a pressurized output liquid flow in a conduit. Main output conduits are connected with the device conduits to combine pressurized output flows output from the submerged mechanical devices into a lower number of pressurized flows. These flows are delivered to a location remote of the submerged mechanical devices for power generation.

  3. A Model for Optimizing the Combination of Solar Electricity Generation, Supply Curtailment, Transmission and Storage

    NASA Astrophysics Data System (ADS)

    Perez, Marc J. R.

    With extraordinary recent growth of the solar photovoltaic industry, it is paramount to address the biggest barrier to its high-penetration across global electrical grids: the inherent variability of the solar resource. This resource variability arises from largely unpredictable meteorological phenomena and from the predictable rotation of the earth around the sun and about its own axis. To achieve very high photovoltaic penetration, the imbalance between the variable supply of sunlight and demand must be alleviated. The research detailed herein consists of the development of a computational model which seeks to optimize the combination of 3 supply-side solutions to solar variability that minimizes the aggregate cost of electricity generated therefrom: Storage (where excess solar generation is stored when it exceeds demand for utilization when it does not meet demand), interconnection (where solar generation is spread across a large geographic area and electrically interconnected to smooth overall regional output) and smart curtailment (where solar capacity is oversized and excess generation is curtailed at key times to minimize the need for storage.). This model leverages a database created in the context of this doctoral work of satellite-derived photovoltaic output spanning 10 years at a daily interval for 64,000 unique geographic points across the globe. Underpinning the model's design and results, the database was used to further the understanding of solar resource variability at timescales greater than 1-day. It is shown that--as at shorter timescales--cloud/weather-induced solar variability decreases with geographic extent and that the geographic extent at which variability is mitigated increases with timescale and is modulated by the prevailing speed of clouds/weather systems. Unpredictable solar variability up to the timescale of 30 days is shown to be mitigated across a geographic extent of only 1500km if that geographic extent is oriented in a north/south bearing. Using technical and economic data reflecting today's real costs for solar generation technology, storage and electric transmission in combination with this model, we determined the minimum cost combination of these solutions to transform the variable output from solar plants into 3 distinct output profiles: A constant output equivalent to a baseload power plant, a well-defined seasonally-variable output with no weather-induced variability and a variable output but one that is 100% predictable on a multi-day ahead basis. In order to do this, over 14,000 model runs were performed by varying the desired output profile, the amount of energy curtailment, the penetration of solar energy and the geographic region across the continental United States. Despite the cost of supplementary electric transmission, geographic interconnection has the potential to reduce the levelized cost of electricity when meeting any of the studied output profiles by over 65% compared to when only storage is used. Energy curtailment, despite the cost of underutilizing solar energy capacity, has the potential to reduce the total cost of electricity when meeting any of the studied output profiles by over 75% compared to when only storage is used. The three variability mitigation strategies are thankfully not mutually exclusive. When combined at their ideal levels, each of the regions studied saw a reduction in cost of electricity of over 80% compared to when only energy storage is used to meet a specified output profile. 
    When including current costs for solar generation, transmission and energy storage, an optimum configuration can conservatively provide guaranteed baseload power generation with solar across the entire continental United States (equivalent to a nuclear power plant with no down time) for less than $0.19 per kilowatt-hour. If solar is preferentially clustered in the southwest instead of evenly spread throughout the United States, and we adopt future expected costs for solar generation of $1 per watt, optimal model results show that meeting a 100% predictable output target with solar will cost no more than $0.08 per kilowatt-hour.

  4. Interval Predictor Models with a Formal Characterization of Uncertainty and Reliability

    NASA Technical Reports Server (NTRS)

    Crespo, Luis G.; Giesy, Daniel P.; Kenny, Sean P.

    2014-01-01

    This paper develops techniques for constructing empirical predictor models based on observations. In contrast to standard models, which yield a single predicted output at each value of the model's inputs, Interval Predictor Models (IPMs) yield an interval into which the unobserved output is predicted to fall. The IPMs proposed prescribe the output as an interval-valued function of the model's inputs and render a formal description of both the uncertainty in the model's parameters and the spread in the predicted output. Uncertainty is prescribed as a hyper-rectangular set in the space of the model's parameters. The propagation of this set through the empirical model yields a range of outputs of minimal spread containing all (or, depending on the formulation, most) of the observations. Optimization-based strategies for calculating IPMs and eliminating the effects of outliers are proposed. Outliers are identified by evaluating the extent to which they degrade the tightness of the prediction. This evaluation can be carried out while the IPM is calculated. When the data satisfy mild stochastic assumptions, and the optimization program used for calculating the IPM is convex (or, when its solution coincides with the solution to an auxiliary convex program), the model's reliability (that is, the probability that a future observation would be within the predicted range of outputs) can be bounded rigorously by a non-asymptotic formula.

  5. Comparative study of diode-pumped alkali vapor laser and exciplex-pumped alkali laser systems and selection principle of parameters

    NASA Astrophysics Data System (ADS)

    Huang, Wei; Tan, Rongqing; Li, Zhiyong; Han, Gaoce; Li, Hui

    2017-03-01

    A theoretical model based on a common pump structure is proposed to analyze the output characteristics of a diode-pumped alkali vapor laser (DPAL) and an exciplex-pumped alkali laser (XPAL). Cs-DPAL and Cs-Ar XPAL systems are used as examples. The model predicts that an optical-to-optical efficiency approaching 80% can be achieved for continuous-wave four- and five-level XPAL systems with broadband pumping whose linewidth is several times the pump linewidth used for DPAL. Operation parameters including pump intensity, temperature, cell length, mixed-gas concentration, pump linewidth, and output coupler are analyzed for the DPAL and XPAL systems based on the kinetic model. In addition, predictions of the selection principle for temperature and cell length are also presented. The concept of an equivalent "alkali areal density" is proposed. The result shows that the output characteristics with the same alkali areal density but different temperatures turn out to be equal for either the DPAL or the XPAL system. It is the areal density that directly reflects the potential of DPAL or XPAL systems. A more detailed analysis of similar influences of cavity parameters with the same areal density is also presented.

  6. A Water-Withdrawal Input-Output Model of the Indian Economy.

    PubMed

    Bogra, Shelly; Bakshi, Bhavik R; Mathur, Ritu

    2016-02-02

    Managing freshwater allocation for a highly populated and growing economy like India can benefit from knowledge about the effect of economic activities. This study transforms the 2003-2004 economic input-output (IO) table of India into a water withdrawal input-output model to quantify direct and indirect flows. This unique model is based on a comprehensive database compiled from diverse public sources, and estimates the direct and indirect water withdrawal of all economic sectors. It distinguishes between green (rainfall), blue (surface and ground), and scarce groundwater. Results indicate that the total direct water withdrawal is nearly 3052 billion cubic meters (BCM) and 96% of this is used in agriculture sectors, with the contribution of direct green water being about 1145 BCM, excluding forestry. Apart from 727 BCM of direct blue water withdrawal for agriculture, other significant users include "Electricity" with 64 BCM, "Water supply" with 44 BCM, and other industrial sectors with nearly 14 BCM. "Construction", "Miscellaneous food products", "Hotels and restaurants", and "Paper, paper products, and newsprint" are other significant indirect withdrawers. The net virtual water import is found to be insignificant compared to direct water used in agriculture nationally, while scarce groundwater associated with crops is largely contributed by northern states.
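
    A generic environmentally extended input-output calculation illustrates how direct withdrawal coefficients combine with the Leontief inverse to give total (direct plus indirect) water intensities; the sector breakdown and numbers below are made up and are not the paper's 2003-2004 India table.

    ```python
    # Generic environmentally extended IO sketch: total water-withdrawal
    # intensities are w_total = w_direct @ inv(I - A), where A is the
    # technical-coefficient matrix and w_direct is withdrawal per unit output.
    import numpy as np

    A = np.array([[0.10, 0.05, 0.02],      # agriculture, industry, services (illustrative)
                  [0.15, 0.20, 0.10],
                  [0.05, 0.10, 0.15]])
    w_direct = np.array([8.0, 0.5, 0.1])   # e.g. cubic metres per unit of sector output

    leontief_inverse = np.linalg.inv(np.eye(3) - A)
    w_total = w_direct @ leontief_inverse  # embodied withdrawal per unit of final demand

    final_demand = np.array([100.0, 300.0, 500.0])
    print(w_total, float(w_total @ final_demand))   # sector intensities and economy-wide total
    ```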

  7. Identification of linear system models and state estimators for controls

    NASA Technical Reports Server (NTRS)

    Chen, Chung-Wen

    1992-01-01

    The following paper is presented in viewgraph format and covers topics including: (1) linear state feedback control system; (2) Kalman filter state estimation; (3) relation between residual and stochastic part of output; (4) obtaining Kalman filter gain; (5) state estimation under unknown system model and unknown noises; and (6) relationship between filter Markov parameters and system Markov parameters.
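
    As a reference point for the listed topics, a textbook discrete-time Kalman filter predict/update step is sketched below; this is generic material, not the identification method presented in the viewgraphs, and the matrices in the example are arbitrary.

    ```python
    # Textbook Kalman filter predict/update step: the innovation e = y - C x_pred
    # is the part of the measured output not explained by the model.
    import numpy as np

    def kalman_step(x, P, y, A, C, Q, R):
        # Predict
        x_pred = A @ x
        P_pred = A @ P @ A.T + Q
        # Innovation (residual) and its covariance
        e = y - C @ x_pred
        S = C @ P_pred @ C.T + R
        # Kalman gain and update
        K = P_pred @ C.T @ np.linalg.inv(S)
        x_new = x_pred + K @ e
        P_new = (np.eye(len(x)) - K @ C) @ P_pred
        return x_new, P_new, e

    # One step for a scalar random-walk state observed with noise.
    x, P, e = kalman_step(np.array([0.0]), np.eye(1), np.array([1.2]),
                          np.eye(1), np.eye(1), 0.01 * np.eye(1), 0.1 * np.eye(1))
    print(x, e)
    ```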

  8. Section 3. The SPARROW Surface Water-Quality Model: Theory, Application and User Documentation

    USGS Publications Warehouse

    Schwarz, G.E.; Hoos, A.B.; Alexander, R.B.; Smith, R.A.

    2006-01-01

    SPARROW (SPAtially Referenced Regressions On Watershed attributes) is a watershed modeling technique for relating water-quality measurements made at a network of monitoring stations to attributes of the watersheds containing the stations. The core of the model consists of a nonlinear regression equation describing the non-conservative transport of contaminants from point and diffuse sources on land to rivers and through the stream and river network. The model predicts contaminant flux, concentration, and yield in streams and has been used to evaluate alternative hypotheses about the important contaminant sources and watershed properties that control transport over large spatial scales. This report provides documentation for the SPARROW modeling technique and computer software to guide users in constructing and applying basic SPARROW models. The documentation gives details of the SPARROW software, including the input data and installation requirements, and guidance in the specification, calibration, and application of basic SPARROW models, as well as descriptions of the model output and its interpretation. The documentation is intended for both researchers and water-resource managers with interest in using the results of existing models and developing and applying new SPARROW models. The documentation of the model is presented in two parts. Part 1 provides a theoretical and practical introduction to SPARROW modeling techniques, which includes a discussion of the objectives, conceptual attributes, and model infrastructure of SPARROW. Part 1 also includes background on the commonly used model specifications and the methods for estimating and evaluating parameters, evaluating model fit, and generating water-quality predictions and measures of uncertainty. Part 2 provides a user's guide to SPARROW, which includes a discussion of the software architecture and details of the model input requirements and output files, graphs, and maps. The text documentation and computer software are available on the Web at http://usgs.er.gov/sparrow/sparrow-mod/.

  9. A new modelling and identification scheme for time-delay systems with experimental investigation: a relay feedback approach

    NASA Astrophysics Data System (ADS)

    Pandey, Saurabh; Majhi, Somanath; Ghorai, Prasenjit

    2017-07-01

    In this paper, the conventional relay feedback test has been modified for modelling and identification of a class of real-time dynamical systems in terms of linear transfer function models with time-delay. An ideal relay and unknown systems are connected through a negative feedback loop to bring the sustained oscillatory output around the non-zero setpoint. Thereafter, the obtained limit cycle information is substituted in the derived mathematical equations for accurate identification of unknown plants in terms of overdamped, underdamped, critically damped second-order plus dead time and stable first-order plus dead time transfer function models. Typical examples from the literature are included for the validation of the proposed identification scheme through computer simulations. Subsequently, the comparisons between estimated model and true system are drawn through integral absolute error criterion and frequency response plots. Finally, the obtained output responses through simulations are verified experimentally on real-time liquid level control system using Yokogawa Distributed Control System CENTUM CS3000 set up.
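
    A hedged sketch of how relay limit-cycle data are classically converted into model parameters (the standard Astrom-Hagglund describing-function relations, not necessarily the modified relations derived in this paper): the relay amplitude, output oscillation amplitude, and period give the critical point, from which a first-order-plus-dead-time model can be fitted when the static gain is known.

    ```python
    # Classical relay-feedback identification sketch: fit K*exp(-L*s)/(T*s + 1)
    # from the limit-cycle critical point, given a known static gain K.
    import math

    def fopdt_from_relay(h, a, Pu, K):
        """h: relay amplitude, a: output oscillation amplitude,
        Pu: limit-cycle period, K: known static process gain."""
        Ku = 4.0 * h / (math.pi * a)          # ultimate gain (describing function)
        wu = 2.0 * math.pi / Pu               # ultimate frequency
        T = math.sqrt(max((K * Ku) ** 2 - 1.0, 0.0)) / wu   # from |G(j*wu)| = 1/Ku
        L = (math.pi - math.atan(T * wu)) / wu              # from arg G(j*wu) = -pi
        return Ku, wu, T, L

    # Hypothetical relay test: +/-1 relay, 0.35 output amplitude, 12 s period, gain 2.
    print(fopdt_from_relay(1.0, 0.35, 12.0, 2.0))
    ```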

  10. Development of a Distributed Parallel Computing Framework to Facilitate Regional/Global Gridded Crop Modeling with Various Scenarios

    NASA Astrophysics Data System (ADS)

    Jang, W.; Engda, T. A.; Neff, J. C.; Herrick, J.

    2017-12-01

    Many crop models are increasingly used to evaluate crop yields at regional and global scales. However, implementation of these models across large areas using fine-scale grids is limited by computational time requirements. In order to facilitate global gridded crop modeling with various scenarios (i.e., different crop, management schedule, fertilizer, and irrigation) using the Environmental Policy Integrated Climate (EPIC) model, we developed a distributed parallel computing framework in Python. Our local desktop with 14 cores (28 threads) was used to test the distributed parallel computing framework in Iringa, Tanzania, which has 406,839 grid cells. High-resolution soil data, SoilGrids (250 x 250 m), and climate data, AgMERRA (0.25 x 0.25 deg), were also used as input data for the gridded EPIC model. The framework includes a master file for parallel computing, an input database, input data formatters, EPIC model execution, and output analyzers. Through the master file for parallel computing, the EPIC simulations are divided into jobs across the user-defined number of CPU threads. Using the EPIC input data formatters, the raw database is formatted into EPIC input data, and the formatted data moves into the EPIC simulation jobs. Then, 28 EPIC jobs run simultaneously, and only the result files of interest are parsed and passed to the output analyzers. We applied various scenarios with seven different slopes and twenty-four fertilizer ranges. Parallelized input generators create the different scenarios as a list for distributed parallel computing. After all simulations are completed, parallelized output analyzers are used to analyze all outputs according to the different scenarios. This saves significant computing time and resources, making it possible to conduct gridded modeling at regional to global scales with high-resolution data. For example, serial processing for the Iringa test case would require 113 hours, while using the framework developed in this study requires only approximately 6 hours, a nearly 95% reduction in computing time.
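
    A minimal sketch of the fan-out/fan-in pattern described above, using Python's multiprocessing; the scenario grid, cell identifiers, and placeholder yield function are hypothetical, and the real framework writes EPIC input files, runs the EPIC executable, and parses its output files.

    ```python
    # Illustrative fan-out/fan-in: enumerate (cell, slope, fertilizer) scenarios,
    # run them across a pool of worker processes, and collect the results.
    from multiprocessing import Pool
    from itertools import product

    def run_scenario(job):
        cell_id, slope_class, fert_kg_ha = job
        # In the real workflow this would write EPIC inputs, call the EPIC
        # executable, and parse the simulated yield from its output files.
        simulated_yield = 2.0 + 0.01 * fert_kg_ha - 0.05 * slope_class   # placeholder
        return cell_id, slope_class, fert_kg_ha, simulated_yield

    if __name__ == "__main__":
        cells = range(1000)                      # e.g. grid cells in the study area
        slopes = range(7)                        # seven slope classes
        ferts = range(0, 240, 10)                # twenty-four fertilizer levels
        jobs = list(product(cells, slopes, ferts))
        with Pool(processes=28) as pool:         # matches the 28 threads in the text
            results = pool.map(run_scenario, jobs, chunksize=500)
        print(len(results))
    ```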

  11. TKKMOD: A computer simulation program for an integrated wind diesel system. Version 1.0: Document and user guide

    NASA Astrophysics Data System (ADS)

    Manninen, L. M.

    1993-12-01

    The document describes TKKMOD, a simulation model developed at Helsinki University of Technology for a specific wind-diesel system layout, with special emphasis on the battery submodel and its use in simulation. The model has been included into the European wind-diesel modeling software package WDLTOOLS under the CEC JOULE project 'Engineering Design Tools for Wind-Diesel Systems' (JOUR-0078). WDLTOOLS serves as the user interface and processes the input and output data of different logistic simulation models developed by the project participants. TKKMOD cannot be run without this shell. The report only describes the simulation principles and model specific parameters of TKKMOD and gives model specific user instructions. The input and output data processing performed outside this model is described in the documentation of the shell. The simulation model is utilized for calculation of long-term performance of the reference system configuration for given wind and load conditions. The main results are energy flows, losses in the system components, diesel fuel consumption, and the number of diesel engine starts.

  12. A Model-Based Anomaly Detection Approach for Analyzing Streaming Aircraft Engine Measurement Data

    NASA Technical Reports Server (NTRS)

    Simon, Donald L.; Rinehart, Aidan W.

    2014-01-01

    This paper presents a model-based anomaly detection architecture designed for analyzing streaming transient aircraft engine measurement data. The technique calculates and monitors residuals between sensed engine outputs and model predicted outputs for anomaly detection purposes. Pivotal to the performance of this technique is the ability to construct a model that accurately reflects the nominal operating performance of the engine. The dynamic model applied in the architecture is a piecewise linear design comprising steady-state trim points and dynamic state space matrices. A simple curve-fitting technique for updating the model trim point information based on steady-state information extracted from available nominal engine measurement data is presented. Results from the application of the model-based approach for processing actual engine test data are shown. These include both nominal fault-free test case data and seeded fault test case data. The results indicate that the updates applied to improve the model trim point information also improve anomaly detection performance. Recommendations for follow-on enhancements to the technique are also presented and discussed.

  13. A Model-Based Anomaly Detection Approach for Analyzing Streaming Aircraft Engine Measurement Data

    NASA Technical Reports Server (NTRS)

    Simon, Donald L.; Rinehart, Aidan Walker

    2015-01-01

    This paper presents a model-based anomaly detection architecture designed for analyzing streaming transient aircraft engine measurement data. The technique calculates and monitors residuals between sensed engine outputs and model predicted outputs for anomaly detection purposes. Pivotal to the performance of this technique is the ability to construct a model that accurately reflects the nominal operating performance of the engine. The dynamic model applied in the architecture is a piecewise linear design comprising steady-state trim points and dynamic state space matrices. A simple curve-fitting technique for updating the model trim point information based on steady-state information extracted from available nominal engine measurement data is presented. Results from the application of the model-based approach for processing actual engine test data are shown. These include both nominal fault-free test case data and seeded fault test case data. The results indicate that the updates applied to improve the model trim point information also improve anomaly detection performance. Recommendations for follow-on enhancements to the technique are also presented and discussed.
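
    A generic sketch of the residual-monitoring idea described in these two records (not the piecewise-linear engine model itself): compare sensed outputs with model-predicted outputs and flag samples whose residuals exceed a threshold; the signals and threshold below are illustrative.

    ```python
    # Residual-based anomaly detection sketch: flag samples whose residual
    # exceeds k standard deviations of the nominal residual.
    import numpy as np

    def detect_anomalies(sensed, predicted, nominal_residual_std, k=3.0):
        residuals = np.asarray(sensed) - np.asarray(predicted)
        return np.abs(residuals) > k * nominal_residual_std

    # Hypothetical streaming example with a fault injected at sample 40.
    t = np.arange(60)
    predicted = 500 + 2.0 * t
    sensed = predicted + np.random.default_rng(1).normal(0, 1.5, t.size)
    sensed[40:] += 12.0                       # seeded fault: a sensor bias
    flags = detect_anomalies(sensed, predicted, nominal_residual_std=1.5)
    print(int(flags[:40].sum()), int(flags[40:].sum()))
    ```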

  14. Uncertainty and variability in computational and mathematical models of cardiac physiology.

    PubMed

    Mirams, Gary R; Pathmanathan, Pras; Gray, Richard A; Challenor, Peter; Clayton, Richard H

    2016-12-01

    Mathematical and computational models of cardiac physiology have been an integral component of cardiac electrophysiology since its inception, and are collectively known as the Cardiac Physiome. We identify and classify the numerous sources of variability and uncertainty in model formulation, parameters and other inputs that arise from both natural variation in experimental data and lack of knowledge. The impact of uncertainty on the outputs of Cardiac Physiome models is not well understood, and this limits their utility as clinical tools. We argue that incorporating variability and uncertainty should be a high priority for the future of the Cardiac Physiome. We suggest investigating the adoption of approaches developed in other areas of science and engineering while recognising unique challenges for the Cardiac Physiome; it is likely that novel methods will be necessary that require engagement with the mathematics and statistics community. The Cardiac Physiome effort is one of the most mature and successful applications of mathematical and computational modelling for describing and advancing the understanding of physiology. After five decades of development, physiological cardiac models are poised to realise the promise of translational research via clinical applications such as drug development and patient-specific approaches as well as ablation, cardiac resynchronisation and contractility modulation therapies. For models to be included as a vital component of the decision process in safety-critical applications, rigorous assessment of model credibility will be required. This White Paper describes one aspect of this process by identifying and classifying sources of variability and uncertainty in models as well as their implications for the application and development of cardiac models. We stress the need to understand and quantify the sources of variability and uncertainty in model inputs, and the impact of model structure and complexity and their consequences for predictive model outputs. We propose that the future of the Cardiac Physiome should include a probabilistic approach to quantify the relationship of variability and uncertainty of model inputs and outputs. © 2016 The Authors. The Journal of Physiology published by John Wiley & Sons Ltd on behalf of The Physiological Society.

  15. Air Force Global Weather Central System Architecture Study. Final System/Subsystem Summary Report. Volume 4. Systems Analysis and Trade Studies

    DTIC Science & Technology

    1976-03-01

    ... including the Advanced Prediction Model for the global atmosphere, as well as very fine grid cloud models and cloud probability models. Some of the new requirements that will be supported with this system are ... with the mapping and gridding function (input and output)? Should the capability exist to interface raw ungridded data with the SID interface ...

  16. Predicting High-Power Performance in Professional Cyclists.

    PubMed

    Sanders, Dajo; Heijboer, Mathieu; Akubat, Ibrahim; Meijer, Kenneth; Hesselink, Matthijs K

    2017-03-01

    To assess if short-duration (5 to ~300 s) high-power performance can accurately be predicted using the anaerobic power reserve (APR) model in professional cyclists. Data from 4 professional cyclists from a World Tour cycling team were used. Using the maximal aerobic power, sprint peak power output, and an exponential constant describing the decrement in power over time, a power-duration relationship was established for each participant. To test the predictive accuracy of the model, several all-out field trials of different durations were performed by each cyclist. The power output achieved during the all-out trials was compared with the predicted power output by the APR model. The power output predicted by the model showed very large to nearly perfect correlations to the actual power output obtained during the all-out trials for each cyclist (r = .88 ± .21, .92 ± .17, .95 ± .13, and .97 ± .09). Power output during the all-out trials remained within an average of 6.6% (53 W) of the predicted power output by the model. This preliminary pilot study presents 4 case studies on the applicability of the APR model in professional cyclists using a field-based approach. The decrement in all-out performance during high-intensity exercise seems to conform to a general relationship with a single exponential-decay model describing the decrement in power vs increasing duration. These results are in line with previous studies using the APR model to predict performance during brief all-out trials. Future research should evaluate the APR model with a larger sample size of elite cyclists.
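
    The APR power-duration relationship used in this line of work is commonly written as a single exponential decay from sprint peak power toward maximal aerobic power; the sketch below is a hedged reconstruction with illustrative numbers (the decay constant k is a placeholder, not a value from the study).

    ```python
    # Hedged APR sketch: predicted power P(t) = MAP + (Ppeak - MAP) * exp(-k * t),
    # with MAP the maximal aerobic power and Ppeak the sprint peak power output.
    import math

    def apr_predicted_power(t_s, map_w, peak_w, k=0.026):
        """k is the exponential decay constant (per second); 0.026 is only a
        placeholder value for illustration."""
        return map_w + (peak_w - map_w) * math.exp(-k * t_s)

    # Illustrative rider: MAP 430 W, sprint peak 1450 W.
    for t in (5, 30, 120, 300):
        print(t, round(apr_predicted_power(t, map_w=430, peak_w=1450)))
    ```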

  17. A Mathematical Model of a Simple Amplifier Using a Ferroelectric Transistor

    NASA Technical Reports Server (NTRS)

    Sayyah, Rana; Hunt, Mitchell; MacLeod, Todd C.; Ho, Fat D.

    2009-01-01

    This paper presents a mathematical model characterizing the behavior of a simple amplifier using a FeFET. The model is based on empirical data and incorporates several variables that affect the output, including frequency, load resistance, and gate-to-source voltage. Since the amplifier is the basis of many circuit configurations, a mathematical model that describes the behavior of a FeFET-based amplifier will help in the integration of FeFETs into many other circuits.

  18. Evaluation of habitat suitability index models by global sensitivity and uncertainty analyses: a case study for submerged aquatic vegetation

    USGS Publications Warehouse

    Zajac, Zuzanna; Stith, Bradley M.; Bowling, Andrea C.; Langtimm, Catherine A.; Swain, Eric D.

    2015-01-01

    Habitat suitability index (HSI) models are commonly used to predict habitat quality and species distributions and are used to develop biological surveys, assess reserve and management priorities, and anticipate possible change under different management or climate change scenarios. Important management decisions may be based on model results, often without a clear understanding of the level of uncertainty associated with model outputs. We present an integrated methodology to assess the propagation of uncertainty from both inputs and structure of the HSI models on model outputs (uncertainty analysis: UA) and relative importance of uncertain model inputs and their interactions on the model output uncertainty (global sensitivity analysis: GSA). We illustrate the GSA/UA framework using simulated hydrology input data from a hydrodynamic model representing sea level changes and HSI models for two species of submerged aquatic vegetation (SAV) in southwest Everglades National Park: Vallisneria americana (tape grass) and Halodule wrightii (shoal grass). We found considerable spatial variation in uncertainty for both species, but distributions of HSI scores still allowed discrimination of sites with good versus poor conditions. Ranking of input parameter sensitivities also varied spatially for both species, with high habitat quality sites showing higher sensitivity to different parameters than low-quality sites. HSI models may be especially useful when species distribution data are unavailable, providing means of exploiting widely available environmental datasets to model past, current, and future habitat conditions. The GSA/UA approach provides a general method for better understanding HSI model dynamics, the spatial and temporal variation in uncertainties, and the parameters that contribute most to model uncertainty. Including an uncertainty and sensitivity analysis in modeling efforts as part of the decision-making framework will result in better-informed, more robust decisions.
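
    One widely used way to carry out the variance-based GSA described above is Sobol analysis, for example with the SALib package; the sketch below applies it to a toy habitat-suitability function and is not the authors' HSI implementation (the input names, bounds, and response function are invented).

    ```python
    # Generic variance-based GSA sketch with SALib: Saltelli sampling followed by
    # Sobol first-order and total-order indices for a toy HSI-like function.
    import numpy as np
    from SALib.sample import saltelli
    from SALib.analyze import sobol

    problem = {
        "num_vars": 3,
        "names": ["salinity", "depth", "light"],
        "bounds": [[0.0, 35.0], [0.1, 3.0], [0.0, 1.0]],
    }

    def hsi(x):
        salinity, depth, light = x
        return np.exp(-((salinity - 15.0) / 10.0) ** 2) * light / (1.0 + depth)

    X = saltelli.sample(problem, 1024)          # N*(2D+2) parameter sets
    Y = np.apply_along_axis(hsi, 1, X)
    Si = sobol.analyze(problem, Y)
    print(dict(zip(problem["names"], np.round(Si["S1"], 2))))   # first-order indices
    print(dict(zip(problem["names"], np.round(Si["ST"], 2))))   # total-order indices
    ```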

  19. Advanced chemical oxygen iodine lasers for novel beam generation

    NASA Astrophysics Data System (ADS)

    Wu, Kenan; Zhao, Tianliang; Huai, Ying; Jin, Yuqi

    2018-03-01

    Chemical oxygen iodine laser, or COIL, is an impressive type of chemical laser that emits high power beam with good atmospheric transmissivity. Chemical oxygen iodine lasers with continuous-wave plane wave output are well-developed and are widely adopted in directed energy systems in the past several decades. Approaches of generating novel output beam based on chemical oxygen iodine lasers are explored in the current study. Since sophisticated physical processes including supersonic flowing of gaseous active media, chemical reacting of various species, optical power amplification, as well as thermal deformation and vibration of mirrors take place in the operation of COIL, a multi-disciplinary model is developed for tracing the interacting mechanisms and evaluating the performance of the proposed laser architectures. Pulsed output mode with repetition rate as high as hundreds of kHz, pulsed output mode with low repetition rate and high pulse energy, as well as novel beam with vector or vortex feature can be obtained. The results suggest potential approaches for expanding the applicability of chemical oxygen iodine lasers.

  20. Input-output characterization of an ultrasonic testing system by digital signal analysis

    NASA Technical Reports Server (NTRS)

    Karaguelle, H.; Lee, S. S.; Williams, J., Jr.

    1984-01-01

    The input/output characteristics of an ultrasonic testing system used for stress wave factor measurements were studied. The fundamentals of digital signal processing are summarized. The inputs and outputs are digitized and processed in a microcomputer using digital signal processing techniques. The entire ultrasonic test system, including transducers and all electronic components, is modeled as a discrete-time linear shift-invariant system. Then the impulse response and frequency response of the continuous time ultrasonic test system are estimated by interpolating the defining points in the unit sample response and frequency response of the discrete time system. It is found that the ultrasonic test system behaves as a linear phase bandpass filter. Good results were obtained for rectangular pulse inputs of various amplitudes and durations and for tone burst inputs whose center frequencies are within the passband of the test system and for single cycle inputs of various amplitudes. The input/output limits on the linearity of the system are determined.
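
    A standard way to estimate the frequency response of a linear shift-invariant system from digitized input and output records is the H1 estimator, the cross-spectrum divided by the input auto-spectrum; the sketch below is generic signal processing of that kind, not the paper's exact algorithm, and the sampling rate and stand-in "test system" filter are arbitrary.

    ```python
    # H1 frequency-response estimate from sampled input/output records:
    # H(f) = Pxy(f) / Pxx(f).
    import numpy as np
    from scipy.signal import csd, welch, lfilter

    fs = 1.0e6                                  # assumed sampling rate, Hz
    rng = np.random.default_rng(0)
    x = rng.normal(size=200_000)                # broadband excitation
    # Stand-in "test system": a simple resonant IIR filter plus measurement noise.
    y = lfilter([0.05, 0.1, 0.05], [1.0, -1.2, 0.6], x) + 0.01 * rng.normal(size=x.size)

    f, Pxy = csd(x, y, fs=fs, nperseg=4096)
    _, Pxx = welch(x, fs=fs, nperseg=4096)
    H = Pxy / Pxx                               # complex frequency-response estimate
    print(f[np.argmax(np.abs(H))])              # frequency of peak response
    ```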

  1. Analysis of current density and specific absorption rate in biological tissue surrounding transcutaneous transformer for an artificial heart.

    PubMed

    Shiba, Kenji; Nukaya, Masayuki; Tsuji, Toshio; Koshiji, Kohji

    2008-01-01

    This paper reports on the current density and specific absorption rate (SAR) analysis of biological tissue surrounding an air-core transcutaneous transformer for an artificial heart. The electromagnetic field in the biological tissue is analyzed by the transmission line modeling method, and the current density and SAR as a function of frequency, output voltage, output power, and coil dimension are calculated. The biological tissue of the model has three layers: skin, fat, and muscle. The results of the simulation analysis show the SAR to be very small, about 2-14 mW/kg, under all transmission conditions considered, compared with the basic restriction of the International Commission on Non-Ionizing Radiation Protection (ICNIRP; 2 W/kg), while the current density relative to the ICNIRP basic restrictions decreases as the frequency rises and the output voltage falls. It is possible to transfer energy below the ICNIRP basic restrictions when the frequency is over 250 kHz and the output voltage is under 24 V. Also, the tissue layer in which the current density is greatest differs with frequency: at low frequencies it is muscle and at high frequencies it is skin, with the boundary in the vicinity of 600-1000 kHz.

  2. Interdicting an Adversary’s Economy Viewed As a Trade Sanction Inoperability Input Output Model

    DTIC Science & Technology

    2017-03-01

    The design of an economic sanction, in the context of this thesis, is the selection of the sector or set of sectors to sanction. We propose two optimization models. The first, the Trade Sanction Inoperability Input-Output Model (TS-IIM), selects the sector or set of sectors that ... Interdependency analysis: Extensions to demand reduction inoperability input-output modeling and portfolio selection (unpublished doctoral dissertation).

  3. Method and system for SCR optimization

    DOEpatents

    Lefebvre, Wesley Curt [Boston, MA; Kohn, Daniel W [Cambridge, MA

    2009-03-10

    Methods and systems are provided for controlling SCR performance in a boiler. The boiler includes one or more generally cross sectional areas. Each cross sectional area can be characterized by one or more profiles of one or more conditions affecting SCR performance and be associated with one or more adjustable desired profiles of the one or more conditions during the operation of the boiler. The performance of the boiler can be characterized by boiler performance parameters. A system in accordance with one or more embodiments of the invention can include a controller input for receiving a performance goal for the boiler corresponding to at least one of the boiler performance parameters and for receiving data values corresponding to boiler control variables and to the boiler performance parameters. The boiler control variables include one or more current profiles of the one or more conditions. The system also includes a system model that relates one or more profiles of the one or more conditions in the boiler to the boiler performance parameters. The system also includes an indirect controller that determines one or more desired profiles of the one or more conditions to satisfy the performance goal for the boiler. The indirect controller uses the system model, the received data values and the received performance goal to determine the one or more desired profiles of the one or more conditions. The system model also includes a controller output that outputs the one or more desired profiles of the one or more conditions.

  4. Response of a piezoelectric pressure transducer to IR laser beam impingement

    NASA Technical Reports Server (NTRS)

    Smith, William C.; Leiweke, Robert J.; Beeson, Harold

    1992-01-01

    The non-pressure response of a PCB Model 113A transducer to a far infrared radiation impulse from a carbon dioxide laser was investigated. Incident radiation was applied both to the bare transducer diaphragm and to coated diaphragms. Coatings included two common ablative materials and a reflective gold coating. High-flux radiation impulses induced an immediate brief negative output followed by a longer-duration positive output. Both timing and amplitude of the responses will be discussed, and the effects of coatings will be compared. Bursts of blackbody radiation from a 1500 K source produced qualitatively similar responses.

  5. A Solar-luminosity Model and Climate

    NASA Technical Reports Server (NTRS)

    Perry, Charles A.

    1990-01-01

    Although the mechanisms of climatic change are not completely understood, the potential causes include changes in the Sun's luminosity. Solar activity in the form of sunspots, flares, proton events, and radiation fluctuations has displayed periodic tendencies. Two types of proxy climatic data that can be related to periodic solar activity are varved geologic formations and freshwater diatom deposits. A model for solar luminosity was developed by using the geometric progression of harmonic cycles that is evident in solar and geophysical data. The model assumes that variation in global energy input is a result of many periods of individual solar-luminosity variations. The 0.1-percent variation of the solar constant measured during the last sunspot cycle provided the basis for determining the amplitude of each luminosity cycle. Model output is a summation of the amplitudes of each cycle of a geometric progression of harmonic sine waves that are referenced to the 11-year average solar cycle. When the last eight cycles in Emiliani's oxygen-18 variations from deep-sea cores were standardized to the average length of glaciations during the Pleistocene (88,000 years), correlation coefficients with the model output ranged from 0.48 to 0.76. In order to calibrate the model to real time, model output was graphically compared to indirect records of glacial advances and retreats during the last 24,000 years and with sea-level rises during the Holocene. Carbon-14 production during the last millennium and elevations of the Great Salt Lake for the last 140 years demonstrate significant correlations with modeled luminosity. Major solar flares during the last 90 years match well with the time-calibrated model.
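
    A hedged reconstruction of the construction described above (illustrative only, not Perry's calibrated model): the luminosity anomaly is summed over sine waves whose periods form a geometric progression anchored on the 11-year cycle, with amplitudes scaled from the roughly 0.1-percent solar-constant variation; the progression ratio and amplitude scaling used below are assumptions.

    ```python
    # Illustrative harmonic-progression luminosity model: sum of sine waves with
    # periods 11, 22, 44, ... years; ratio and amplitude scaling are assumed.
    import numpy as np

    def luminosity_anomaly(years, base_period=11.0, ratio=2.0, n_harmonics=10,
                           base_amplitude=0.001):
        t = np.asarray(years, dtype=float)
        total = np.zeros_like(t)
        for k in range(n_harmonics):
            period = base_period * ratio ** k               # geometric progression of periods
            amplitude = base_amplitude * ratio ** (k / 2.0) # assumed amplitude scaling
            total += amplitude * np.sin(2.0 * np.pi * t / period)
        return total                                        # fractional change in luminosity

    t = np.linspace(0, 24000, 4801)                         # last 24,000 years, 5-year steps
    print(float(luminosity_anomaly(t).max()))
    ```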

  6. Graphical Modeling of Shipboard Electric Power Distribution Systems

    DTIC Science & Technology

    1993-12-01

    A means of modeling a load for a synchronous generator is shown which accurately interrelates the loading of the generator and the frequency and voltage output of the machine. This load is then connected to the synchronous generator and two different scenarios are examined, including a ...

  7. Information Processing and Collective Behavior in a Model Neuronal System

    DTIC Science & Technology

    2014-03-28

    ... for an AFOSR project headed by Steve Reppert on Monarch Butterfly navigation. We visited the Reppert lab at the UMASS Medical School and have had many ... developed a detailed mathematical model of the mammalian circadian clock. Our model can accurately predict diverse experimental data including the ... (i.e., P1 affects P2, which affects P3, ...). The output of the system is calculated (measurements), and the interactions are forgotten. Based on ...

  8. Public–nonprofit partnership performance in a disaster context: the case of Haiti.

    PubMed

    Nolte, Isabella M; Boenigk, Silke

    2011-01-01

    During disasters, partnerships between public and nonprofit organizations are vital to provide fast relief to affected communities. In this article, we develop a process model to support a performance evaluation of such intersectoral partnerships. The model includes input factors, organizational structures, outputs and the long-term outcomes of public–nonprofit partnerships. These factors derive from theory and a systematic literature review of emergency, public, nonprofit, and network research. To adapt the model to a disaster context, we conducted a case study that examines public and nonprofit organizations that partnered during the 2010 Haiti earthquake. The case study results show that communication, trust, and experience are the most important partnership inputs; the most prevalent governance structure of public–nonprofit partnerships is a lead organization network. Time and quality measures should be considered to assess partnership outputs, and community, network, and organizational actor perspectives must be taken into account when evaluating partnership outcomes.

  9. A computer program to trace seismic ray distribution in complex two-dimensional geological models

    USGS Publications Warehouse

    Yacoub, Nazieh K.; Scott, James H.

    1970-01-01

    A computer program has been developed to trace seismic rays and their amplitudes and energies through complex two-dimensional geological models, for which boundaries between elastic units are defined by a series of digitized X-, Y-coordinate values. Input data for the program includes problem identification, control parameters, model coordinates, and elastic parameters for the elastic units. The program evaluates the partitioning of ray amplitude and energy at elastic boundaries, and computes the total travel time, total travel distance, and other parameters for rays arriving at the earth's surface. Instructions are given for punching program control cards and data cards, and for arranging input card decks. An example of printer output for a simple problem is presented. The program is written in FORTRAN IV language. The listing of the program is shown in the Appendix, with an example output from a CDC-6600 computer.

  10. Multi-model blending

    DOEpatents

    Hamann, Hendrik F.; Hwang, Youngdeok; van Kessel, Theodore G.; Khabibrakhmanov, Ildar K.; Muralidhar, Ramachandran

    2016-10-18

    A method and a system to perform multi-model blending are described. The method includes obtaining one or more sets of predictions of historical conditions, the historical conditions corresponding with a time T that is historical in reference to current time, and the one or more sets of predictions of the historical conditions being output by one or more models. The method also includes obtaining actual historical conditions, the actual historical conditions being measured conditions at the time T, assembling a training data set including designating the two or more sets of predictions of historical conditions as predictor variables and the actual historical conditions as response variables, and training a machine learning algorithm based on the training data set. The method further includes obtaining a blended model based on the machine learning algorithm.
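
    A hedged sketch of the blending step described in this record, assuming scikit-learn as the machine learning library and ordinary least squares as the learner; the variable names and synthetic data are placeholders, not the patented implementation:

      import numpy as np
      from sklearn.linear_model import LinearRegression

      # Illustrative blending sketch: stack each model's historical predictions as
      # features and the measured conditions as the target, then fit a regressor
      # that serves as the blended model.
      rng = np.random.default_rng(0)
      n_times, n_models = 200, 3
      model_predictions = rng.normal(size=(n_times, n_models))   # predictions at time T
      actual_conditions = (model_predictions @ np.array([0.5, 0.3, 0.2])
                           + rng.normal(0.0, 0.1, n_times))      # measured conditions

      blender = LinearRegression()
      blender.fit(model_predictions, actual_conditions)          # train on the historical period

      new_predictions = rng.normal(size=(1, n_models))            # current forecasts from the models
      blended_forecast = blender.predict(new_predictions)         # blended output
      print(blender.coef_, blended_forecast)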

  11. Mars Global Reference Atmospheric Model 2010 Version: Users Guide

    NASA Technical Reports Server (NTRS)

    Justh, H. L.

    2014-01-01

    This Technical Memorandum (TM) presents the Mars Global Reference Atmospheric Model 2010 (Mars-GRAM 2010) and its new features. Mars-GRAM is an engineering-level atmospheric model widely used for diverse mission applications. Applications include systems design, performance analysis, and operations planning for aerobraking, entry, descent and landing, and aerocapture. Additionally, this TM includes instructions on obtaining the Mars-GRAM source code and data files as well as running Mars-GRAM. It also contains sample Mars-GRAM input and output files and an example of how to incorporate Mars-GRAM as an atmospheric subroutine in a trajectory code.

  12. Dual side control for inductive power transfer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, Hunter; Sealy, Kylee; Gilchrist, Aaron

    An apparatus for dual side control includes a measurement module that measures a voltage and a current of an IPT system. The voltage includes an output voltage and/or an input voltage and the current includes an output current and/or an input current. The output voltage and the output current are measured at an output of the IPT system and the input voltage and the input current measured at an input of the IPT system. The apparatus includes a max efficiency module that determines a maximum efficiency for the IPT system. The max efficiency module uses parameters of the IPT system to iterate to a maximum efficiency. The apparatus includes an adjustment module that adjusts one or more parameters in the IPT system consistent with the maximum efficiency calculated by the max efficiency module.

  13. How to optimize tuberculosis case finding: explorations for Indonesia with a health system model

    PubMed Central

    2009-01-01

    Background A mathematical model was designed to explore the impact of three strategies for better tuberculosis case finding. Strategies included: (1) reducing the number of tuberculosis patients who do not seek care; (2) reducing diagnostic delay; and (3) engaging non-DOTS providers in the referral of tuberculosis suspects to DOTS services in the Indonesian health system context. The impact of these strategies on tuberculosis mortality and treatment outcome was estimated using a mathematical model of the Indonesian health system. Methods The model consists of multiple compartments representing the logical movement of a respiratory symptomatic (tuberculosis suspect) through the health system, including patient and health system delays. Main outputs of the model are tuberculosis death rate and treatment outcome (i.e. full or partial cure). We quantified the model parameters for the Jogjakarta province context, using a two-round Delphi survey with five Indonesian tuberculosis experts. Results The model validation shows that four critical model outputs (average duration from symptom onset to treatment, detection rate, cure rate, and death rate) were reasonably close to existing available data, erring towards more optimistic outcomes than are actually reported. The model predicted that an intervention to reduce the proportion of tuberculosis patients who never seek care would have the biggest impact on tuberculosis death prevention, while an intervention resulting in more referrals of tuberculosis suspects to DOTS facilities would yield higher cure rates. This finding is similar for situations where the alternative sector is a more important health resource, such as in most other parts of Indonesia. Conclusion We used mathematical modeling to explore the impact of Indonesian health system interventions on tuberculosis treatment outcome and deaths. Because detailed data were not available regarding the current Indonesian population, we relied on expert opinion to quantify the parameters. The fact that the model output showed similar results to epidemiological data suggests that the experts had an accurate understanding of this subject, thereby lending confidence to the quality of our predictions. The model highlighted the potential effectiveness of active case finding of tuberculosis patients with limited access to DOTS facilities in the developing country setting. PMID:19505296
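
    A minimal compartmental sketch of the kind of health-system flow described above (care seeking, diagnostic delay, treatment, cure, death); the compartment structure is simplified and all rates are placeholders, not the Delphi-derived Jogjakarta values:

      import numpy as np
      from scipy.integrate import solve_ivp

      # Compartments: not yet seeking care (N), seeking care / awaiting diagnosis (D),
      # on DOTS treatment (T), cured (C), and died (X). All rates are per month and
      # purely illustrative.
      care_seeking = 0.4      # N -> D
      diagnosis = 0.6         # D -> T (inverse of the diagnostic delay)
      cure = 0.15             # T -> C
      death_untreated = 0.03  # N, D -> X
      death_treated = 0.005   # T -> X

      def rhs(t, y):
          N, D, T, C, X = y
          dN = -(care_seeking + death_untreated) * N
          dD = care_seeking * N - (diagnosis + death_untreated) * D
          dT = diagnosis * D - (cure + death_treated) * T
          dC = cure * T
          dX = death_untreated * (N + D) + death_treated * T
          return [dN, dD, dT, dC, dX]

      sol = solve_ivp(rhs, (0.0, 36.0), [1.0, 0.0, 0.0, 0.0, 0.0])
      N, D, T, C, X = sol.y[:, -1]
      print(f"cured fraction: {C:.2f}, died: {X:.2f}")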

  14. Application of a wind-wave-current coupled model in the Catalan coast (NW Mediterranean sea), for wind energy purposes

    NASA Astrophysics Data System (ADS)

    María Palomares, Ana; Navarro, Jorge; Grifoll, Manel; Pallares, Elena; Espino, Manuel

    2016-04-01

    This work shows the main results of the HAREAMAR project (including the HAREMAR, ENE2012-38772-C02-01 and DARDO, ENE2012-38772-C02-02 projects), concerning local wind, wave and current simulation at St. Jordi Bay (NW Mediterranean Sea). Offshore wind energy has become one of the main topics within wind energy research. Although there are quite a few models with a high level of reliability for wind simulation and prediction at onshore sites, wind prediction needs further investigation to adapt to offshore sites, taking into account the atmosphere-ocean interaction. The main problem in these ocean areas is the lack of wind data, which allows neither characterizing the energy potential and wind behaviour in a particular place nor validating the forecasting models. The main objective of this work is to reduce the local prediction errors in order to make the meteo-oceanographic hindcasts and forecasts more reliable. The COAWST (Coupled Ocean-Atmosphere-Wave-Sediment Transport; Warner et al., 2010) model system has been implemented in the region, using a set of downscaled nested meshes to obtain high-resolution outputs. The adaptation to this particular area, combining the different wind, wave and ocean model domains, has been far from simple, because the grid domains for the three models differ significantly. This work shows the main results of the COAWST model implementation for this area, including both monthly runs and other sets of tests in different atmospheric situations, chosen for their particular interest. The time period considered for the validation is the whole year 2012. A comparative study between the WRF, SWAN and ROMS model outputs (without coupling), the COAWST model outputs, and measurements from a buoy moored in the region was performed for this year. References: Warner, J.C., Armstrong, B., He, R., and Zambon, J.B., 2010, Development of a Coupled Ocean-Atmosphere-Wave-Sediment Transport (COAWST) modeling system: Ocean Modeling, 35 (3), 230-244.

  15. Biospheric Monitoring and Ecological Forecasting using EOS/MODIS data, ecosystem modeling, planning and scheduling technologies

    NASA Astrophysics Data System (ADS)

    Nemani, R. R.; Votava, P.; Golden, K.; Hashimoto, H.; Jolly, M.; White, M.; Running, S.; Coughlan, J.

    2003-12-01

    The latest generation of NASA Earth Observing System satellites has brought a new dimension to continuous monitoring of the living part of the Earth System, the Biosphere. EOS data can now provide weekly global measures of vegetation productivity and ocean chlorophyll, and many related biophysical factors such as land cover changes or snowmelt rates. However, information with the highest economic value would be forecasting impending conditions of the biosphere that would allow advanced decision-making to mitigate dangers, or exploit positive trends. We have developed a software system called the Terrestrial Observation and Prediction System (TOPS) to facilitate rapid analysis of ecosystem states/functions by integrating EOS data with ecosystem models, surface weather observations and weather/climate forecasts. Land products from MODIS (Moderate Resolution Imaging Spectroradiometer) including land cover, albedo, snow, surface temperature, leaf area index are ingested into TOPS for parameterization of models and for verifying model outputs such as snow cover and vegetation phenology. TOPS is programmed to gather data from observing networks such as USDA soil moisture, AMERIFLUX, SNOWTEL to further enhance model predictions. Key technologies enabling TOPS implementation include the ability to understand and process heterogeneous-distributed data sets, automated planning and execution of ecosystem models, causation analysis for understanding model outputs. Current TOPS implementations at local (vineyard) to global scales (global net primary production) can be found at http://www.ntsg.umt.edu/tops.

  16. Global sensitivity analysis for fuzzy inputs based on the decomposition of fuzzy output entropy

    NASA Astrophysics Data System (ADS)

    Shi, Yan; Lu, Zhenzhou; Zhou, Yicheng

    2018-06-01

    To analyse the component of fuzzy output entropy, a decomposition method of fuzzy output entropy is first presented. After the decomposition of fuzzy output entropy, the total fuzzy output entropy can be expressed as the sum of the component fuzzy entropy contributed by fuzzy inputs. Based on the decomposition of fuzzy output entropy, a new global sensitivity analysis model is established for measuring the effects of uncertainties of fuzzy inputs on the output. The global sensitivity analysis model can not only tell the importance of fuzzy inputs but also simultaneously reflect the structural composition of the response function to a certain degree. Several examples illustrate the validity of the proposed global sensitivity analysis, which is a significant reference in engineering design and optimization of structural systems.

  17. Modelled vs. reconstructed past fire dynamics - how can we compare?

    NASA Astrophysics Data System (ADS)

    Brücher, Tim; Brovkin, Victor; Kloster, Silvia; Marlon, Jennifer R.; Power, Mitch J.

    2015-04-01

    Fire is an important process that affects climate through changes in CO2 emissions, albedo, and aerosols (Ward et al. 2012). Fire-history reconstructions from charcoal accumulations in sediment indicate that biomass burning has increased since the Last Glacial Maximum (Power et al. 2008; Marlon et al. 2013). Recent comparisons with transient climate model output suggest that this increase in global fire activity is linked primarily to variations in temperature and secondarily to variations in precipitation (Daniau et al. 2012). In this study, we discuss the best way to compare global fire model output with charcoal records. Fire models generate quantitative output for burned area and fire-related emissions of CO2, whereas charcoal data indicate relative changes in biomass burning for specific regions and time periods only. However, models can be used to relate trends in charcoal data to trends in quantitative changes in burned area or fire carbon emissions. Charcoal records are often reported as Z-scores (Power et al. 2008). Since Z-scores are non-linear power transformations of charcoal influxes, we must evaluate if, for example, a two-fold increase in the standardized charcoal reconstruction corresponds to a 2- or 200-fold increase in the area burned. In our study we apply the Z-score metric to the model output. This allows us to test how well the model can quantitatively reproduce the charcoal-based reconstructions and how Z-score metrics affect the statistics of model output. The Global Charcoal Database (GCD version 2.5; www.gpwg.org/gpwgdb.html) is used to determine regional and global paleofire trends from 218 sedimentary charcoal records covering part or all of the last 8 ka BP. To retrieve regional and global composites of changes in fire activity over the Holocene, the time series of Z-scores are linearly averaged. A coupled climate-carbon cycle model, CLIMBA (Brücher et al. 2014), is used for this study. It consists of the CLIMBER-2 Earth system model of intermediate complexity and the JSBACH land component of the Max Planck Institute Earth System Model. The fire algorithm in JSBACH assumes a constant annual lightning cycle as the sole fire ignition mechanism (Arora and Boer 2005). To eliminate data processing differences as a source of potential discrepancies, the processing of both reconstructed and modeled data, including, e.g., normalisation with respect to a given base period and aggregation of time series, was done in exactly the same way. Here, we compare the aggregated time series on a hemispheric and regional scale.
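
    A short sketch of the identical-processing point, assuming a simple power transform followed by base-period standardisation as the Z-score recipe; the exponent, base period, and synthetic series are illustrative, not the GCD workflow itself:

      import numpy as np

      # Apply the same Z-score transformation to a charcoal influx series and to
      # modelled burned area so that only the underlying trends differ. The power
      # exponent and base period are assumptions; the GCD workflow has additional
      # steps (rescaling, smoothing) not reproduced here.
      def zscore_series(values, times, base_period=(-8000, 0), power=0.25):
          transformed = np.asarray(values, dtype=float) ** power       # variance-stabilising transform
          in_base = (times >= base_period[0]) & (times <= base_period[1])
          mu, sigma = transformed[in_base].mean(), transformed[in_base].std()
          return (transformed - mu) / sigma                            # standardise over the base period

      rng = np.random.default_rng(7)
      times = np.linspace(-8000, 0, 801)                # years relative to present
      charcoal_influx = rng.lognormal(size=times.size)  # synthetic reconstruction
      model_burned_area = rng.lognormal(size=times.size)

      z_charcoal = zscore_series(charcoal_influx, times)
      z_model = zscore_series(model_burned_area, times)
      print(np.corrcoef(z_charcoal, z_model)[0, 1])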

  18. Design of vaccination and fumigation on Host-Vector Model by input-output linearization method

    NASA Astrophysics Data System (ADS)

    Nugraha, Edwin Setiawan; Naiborhu, Janson; Nuraini, Nuning

    2017-03-01

    Here, we analyze the Host-Vector Model and propose a design of vaccination and fumigation to control the infectious population by using feedback control, specifically the input-output linearization method. The host population is divided into three compartments: susceptible, infectious and recovered. The vector population is divided into two compartments: susceptible and infectious. In this system, vaccination and fumigation are treated as the inputs and the infectious population as the output. The objective of the design is to stabilize the system so that the output asymptotically tends to zero. We also present examples to illustrate the design.
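
    For orientation, an illustrative SIR-SI host-vector system with vaccination u1(t) and fumigation u2(t) entering as control inputs and the infectious host class as the measured output; the parameter names and the placement of the controls are assumptions, not necessarily the authors' exact formulation:

      % Illustrative host-vector system with controls; assumed form only.
      \begin{align*}
      \dot{S}_h &= \mu_h N_h - \beta_h S_h I_v / N_h - \mu_h S_h - u_1(t)\,S_h,\\
      \dot{I}_h &= \beta_h S_h I_v / N_h - (\gamma + \mu_h) I_h,\\
      \dot{R}_h &= \gamma I_h + u_1(t)\,S_h - \mu_h R_h,\\
      \dot{S}_v &= \mu_v N_v - \beta_v S_v I_h / N_h - \mu_v S_v - u_2(t)\,S_v,\\
      \dot{I}_v &= \beta_v S_v I_h / N_h - \mu_v I_v - u_2(t)\,I_v,\\
      y &= I_h .
      \end{align*}

    Input-output linearization would repeatedly differentiate y = I_h until the controls appear explicitly, then choose the feedback so that the tracking error obeys stable linear dynamics.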

  19. NASA AVOSS Fast-Time Wake Prediction Models: User's Guide

    NASA Technical Reports Server (NTRS)

    Ahmad, Nash'at N.; VanValkenburg, Randal L.; Pruis, Matthew

    2014-01-01

    The National Aeronautics and Space Administration (NASA) is developing and testing fast-time wake transport and decay models to safely enhance the capacity of the National Airspace System (NAS). The fast-time wake models are empirical algorithms used for real-time predictions of wake transport and decay based on aircraft parameters and ambient weather conditions. The aircraft dependent parameters include the initial vortex descent velocity and the vortex pair separation distance. The atmospheric initial conditions include vertical profiles of temperature or potential temperature, eddy dissipation rate, and crosswind. The current distribution includes the latest versions of the APA (3.4) and the TDP (2.1) models. This User's Guide provides detailed information on the model inputs, file formats, and the model output. An example of a model run and a brief description of the Memphis 1995 Wake Vortex Dataset is also provided.

  20. Energy modeling. Volume 2: Inventory and details of state energy models

    NASA Astrophysics Data System (ADS)

    Melcher, A. G.; Underwood, R. G.; Weber, J. C.; Gist, R. L.; Holman, R. P.; Donald, D. W.

    1981-05-01

    An inventory of energy models developed by or for state governments is presented, and certain models are discussed in depth. These models address a variety of purposes such as: supply or demand of energy or of certain types of energy; emergency management of energy; and energy economics. Ten models are described. The purpose, use, and history of the model is discussed, and information is given on the outputs, inputs, and mathematical structure of the model. The models include five models dealing with energy demand, one of which is econometric and four of which are econometric-engineering end-use models.

  1. Energy accounting of River Severn tidal power schemes

    NASA Astrophysics Data System (ADS)

    Roberts, F.

    1982-07-01

    Energy accounting comparisons are constructed in order to make an economic analysis of three different tidal generating schemes for the Severn River in Britain. The plans included ebb generation, flood generation, and turbine-sluice configurations, and the analysis consisted of totaling the energy needed to complete the construction and comparing it with the projected output. The necessary construction components included caissons, shipping locks, embankments, transmission facilities, and turbines, with ongoing inputs limited to 1.75%/yr once the installations are completed. The total outputs for the installations were modeled as 12, 18, and 18 TWh/yr, respectively, with a projected lifetime of 120 yr. The least output/input ratio was found to be 10:1, with a highest possible value of 16:1. The energy return is highest with the smallest installation, a factor which is offset by the increased return with larger capacity.

  2. Extension of the input-output relation for a Michelson interferometer to arbitrary coherent-state light sources: Gravitational-wave detector and weak-value amplification

    NASA Astrophysics Data System (ADS)

    Nakamura, Kouji; Fujimoto, Masa-Katsu

    2018-05-01

    An extension of the input-output relation for a conventional Michelson interferometric gravitational-wave detector is carried out to treat an arbitrary coherent state for the injected optical beam. This extension is a necessary step toward clarifying the relation between conventional gravitational-wave detectors and a simple model of a gravitational-wave detector inspired by weak measurements in Nishizawa (2015). The derived input-output relation describes not only a conventional Michelson-interferometric gravitational-wave detector but also the situation of weak measurements. As a result, we may say that a conventional Michelson gravitational-wave detector already includes the essence of weak-value amplification, namely the reduction of the quantum noise from the light source through the measurement at the dark port.

  3. Flow regime, temperature, and biotic interactions drive differential declines of trout species under climate change [includes Supporting Information

    Treesearch

    Seth J. Wenger; Daniel J. Isaak; Charlie Luce; Helen M. Neville; Kurt D. Fausch; Jason B. Dunham; Daniel C. Dauwalter; Michael K. Young; Marketa M. Elsner; Bruce E. Rieman; Alan F. Hamlet; Jack E. Williams

    2011-01-01

    Broad-scale studies of climate change effects on freshwater species have focused mainly on temperature, ignoring critical drivers such as flow regime and biotic interactions. We use downscaled outputs from general circulation models coupled with a hydrologic model to forecast the effects of altered flows and increased temperatures on four interacting species of trout...

  4. Optimal cycling time trial position models: aerodynamics versus power output and metabolic energy.

    PubMed

    Fintelman, D M; Sterling, M; Hemida, H; Li, F-X

    2014-06-03

    The aerodynamic drag of a cyclist in time trial (TT) position is strongly influenced by the torso angle. While decreasing the torso angle reduces the drag, it limits the physiological functioning of the cyclist. Therefore, the aims of this study were to predict the optimal TT cycling position as a function of the cycling speed and to determine at which speed the aerodynamic power losses start to dominate. Two models were developed to determine the optimal torso angle: a 'Metabolic Energy Model' and a 'Power Output Model'. The Metabolic Energy Model minimised the required cycling energy expenditure, while the Power Output Model maximised the cyclists' power output. The input parameters were experimentally collected from 19 TT cyclists at different torso angle positions (0-24°). The results showed that for both models, the optimal torso angle depends strongly on the cycling speed, with decreasing torso angles at increasing speeds. The aerodynamic losses outweigh the power losses at cycling speeds above 46 km/h. However, a fully horizontal torso is not optimal. For speeds below 30 km/h, it is beneficial to ride in a more upright TT position. The two model outputs were not completely similar, due to the different model approaches. The Metabolic Energy Model could be applied for endurance events, while the Power Output Model is more suitable in sprinting or in variable conditions (wind, undulating course, etc.). It is suggested that despite some limitations, the models give valuable information about improving the cycling performance by optimising the TT cycling position. Copyright © 2014 Elsevier Ltd. All rights reserved.
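
    The trade-off in this record can be read against the standard level-road power balance (an assumed textbook form, not necessarily the exact equations used in the study):

      % Assumed power balance at speed v and torso angle theta.
      P_{\mathrm{req}}(v,\theta) =
        \underbrace{\tfrac{1}{2}\,\rho\, C_d A(\theta)\, v^{3}}_{\text{aerodynamic}}
        + \underbrace{C_{rr}\, m\, g\, v}_{\text{rolling resistance}}

    Because the aerodynamic term grows with v^3 while rolling resistance grows only with v, the aerodynamic share of total power rises with speed, which is consistent with the reported crossover near 46 km/h and the flatter optimal torso angles at higher speeds.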

  5. Finite element modelling of non-linear magnetic circuits using Cosmic NASTRAN

    NASA Technical Reports Server (NTRS)

    Sheerer, T. J.

    1986-01-01

    The general purpose Finite Element Program COSMIC NASTRAN currently has the ability to model magnetic circuits with constant permeabilities. An approach was developed which, through small modifications to the program, allows modelling of non-linear magnetic devices including soft magnetic materials, permanent magnets and coils. Use of the NASTRAN code resulted in output which can be used for subsequent mechanical analysis using a variation of the same computer model. Test problems were found to produce theoretically verifiable results.

  6. Modeling a Common-Source Amplifier Using a Ferroelectric Transistor

    NASA Technical Reports Server (NTRS)

    Sayyah, Rana; Hunt, Mitchell; MacLeond, Todd C.; Ho, Fat D.

    2010-01-01

    This paper presents a mathematical model characterizing the behavior of a common-source amplifier using a FeFET. The model is based on empirical data and incorporates several variables that affect the output, including frequency, load resistance, and gate-to-source voltage. Since the common-source amplifier is the most widely used amplifier in MOS technology, understanding and modeling the behavior of the FeFET-based common-source amplifier will help in the integration of FeFETs into many circuits.

  7. A Comparative Study of the Proposed Models for the Components of the National Health Information System

    PubMed Central

    Ahmadi, Maryam; Damanabi, Shahla; Sadoughi, Farahnaz

    2014-01-01

    Introduction: The National Health Information System plays an important role in ensuring timely and reliable access to health information, which is essential for strategic and operational decisions that improve the health, quality and effectiveness of health care. In other words, using the National Health Information System, the quality of health data, information and knowledge used to support decision making at all levels and areas of the health sector can be improved. Since full identification of the components of this system - for better planning and management of the factors that influence its performance - seems necessary, in this study different perspectives on the components of this system are explored comparatively. Methods: This is a descriptive, comparative study. The study material includes printed and electronic documents describing components of the national health information system in three parts: input, process and output. In this context, searches using library resources and the internet were conducted, and the analysis was expressed using comparative tables and qualitative data. Results: The findings showed that there are three different perspectives presenting the components of a national health information system: the Lippeveld, Sauerborn and Bodart model of 2000, the Health Metrics Network (HMN) model from the World Health Organization in 2008, and Gattini's 2009 model. All three models require, in the input section (resources and structure), components of management and leadership, planning and design of programs, supply of staff, and software and hardware facilities and equipment. In addition, in the "process" section all three models point to actions ensuring the quality of the health information system, and in the output section, except for the Lippeveld model, the two other models consider information products and the use and distribution of information as components of the national health information system. Conclusion: The results showed that all three models give only a brief discussion of the components of health information in the input section, and the Lippeveld model overlooks the components of a national health information system in the process and output sections. Therefore, it seems that the Health Metrics Network model gives a comprehensive presentation of the components of the health information system in all three sections: input, process and output. PMID:24825937

  8. A comparative study of the proposed models for the components of the national health information system.

    PubMed

    Ahmadi, Maryam; Damanabi, Shahla; Sadoughi, Farahnaz

    2014-04-01

    The National Health Information System plays an important role in ensuring timely and reliable access to health information, which is essential for strategic and operational decisions that improve the health, quality and effectiveness of health care. In other words, using the National Health Information System, the quality of health data, information and knowledge used to support decision making at all levels and areas of the health sector can be improved. Since full identification of the components of this system - for better planning and management of the factors that influence its performance - seems necessary, in this study different perspectives on the components of this system are explored comparatively. This is a descriptive, comparative study. The study material includes printed and electronic documents describing components of the national health information system in three parts: input, process and output. In this context, searches using library resources and the internet were conducted, and the analysis was expressed using comparative tables and qualitative data. The findings showed that there are three different perspectives presenting the components of a national health information system: the Lippeveld, Sauerborn and Bodart model of 2000, the Health Metrics Network (HMN) model from the World Health Organization in 2008, and Gattini's 2009 model. All three models require, in the input section (resources and structure), components of management and leadership, planning and design of programs, supply of staff, and software and hardware facilities and equipment. In addition, in the "process" section all three models point to actions ensuring the quality of the health information system, and in the output section, except for the Lippeveld model, the two other models consider information products and the use and distribution of information as components of the national health information system. The results showed that all three models give only a brief discussion of the components of health information in the input section, and the Lippeveld model overlooks the components of a national health information system in the process and output sections. Therefore, it seems that the Health Metrics Network model gives a comprehensive presentation of the components of the health information system in all three sections: input, process and output.

  9. Computer code for preliminary sizing analysis of axial-flow turbines

    NASA Technical Reports Server (NTRS)

    Glassman, Arthur J.

    1992-01-01

    This mean-diameter flow analysis uses a stage-average velocity diagram as the basis for the computation of efficiency. Input design requirements include power or pressure ratio, flow rate, temperature, pressure, and rotative speed. Turbine designs are generated for any specified number of stages and for any of three types of velocity diagrams (symmetrical, zero exit swirl, or impulse) or for any specified stage swirl split. Exit turning vanes can be included in the design. The program output includes inlet and exit annulus dimensions, exit temperature and pressure, total and static efficiencies, flow angles, and last stage absolute and relative Mach numbers. An analysis is presented along with a description of the computer program input and output with sample cases. The analysis and code presented herein are modifications of those described in NASA-TN-D-6702. These modifications improve modeling rigor and extend code applicability.

  10. ENERGY, WATER, AND LAND USE: A FRAMEWORK FOR INCORPORATING SCIENCE INTO SUSTAINABLE REGIONAL PLANNING

    EPA Science Inventory

    Project outputs will include: 1) the sustainability network and associated web pages; 2) sustainability indicators and associated maps representing the current values of the metrics; 3) an integrated assessment model of the impacts of electricity generation alternatives on a ...

  11. pyres: a Python wrapper for electrical resistivity modeling with R2

    NASA Astrophysics Data System (ADS)

    Befus, Kevin M.

    2018-04-01

    A Python package, pyres, was written to handle common as well as specialized input and output tasks for the R2 electrical resistivity (ER) modeling program. Input steps, including handling field data, creating quadrilateral or triangular meshes, and filtering data, allow repeatable and flexible ER modeling within a programming environment. pyres includes non-trivial routines and functions for locating and constraining specific known or separately-parameterized regions in both quadrilateral and triangular meshes. Three basic examples of how to run forward and inverse models with pyres are provided. The importance of testing mesh convergence and model sensitivity is also addressed with higher-level examples that show how pyres can facilitate future research-grade ER analyses.

  12. CRAC2 model description

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ritchie, L.T.; Alpert, D.J.; Burke, R.P.

    1984-03-01

    The CRAC2 computer code is a revised version of CRAC (Calculation of Reactor Accident Consequences) which was developed for the Reactor Safety Study. This document provides an overview of the CRAC2 code and a description of each of the models used. Significant improvements incorporated into CRAC2 include an improved weather sequence sampling technique, a new evacuation model, and new output capabilities. In addition, refinements have been made to the atmospheric transport and deposition model. Details of the modeling differences between CRAC2 and CRAC are emphasized in the model descriptions.

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anderson, Katherine H.; Cutler, Dylan S.; Olis, Daniel R.

    REopt is a techno-economic decision support model used to optimize energy systems for buildings, campuses, communities, and microgrids. The primary application of the model is for optimizing the integration and operation of behind-the-meter energy assets. This report provides an overview of the model, including its capabilities and typical applications; inputs and outputs; economic calculations; technology descriptions; and model parameters, variables, and equations. The model is highly flexible, and is continually evolving to meet the needs of each analysis. Therefore, this report is not an exhaustive description of all capabilities, but rather a summary of the core components of the model.

  14. Quantifying the importance of spatial resolution and other factors through global sensitivity analysis of a flood inundation model

    NASA Astrophysics Data System (ADS)

    Thomas Steven Savage, James; Pianosi, Francesca; Bates, Paul; Freer, Jim; Wagener, Thorsten

    2016-11-01

    Where high-resolution topographic data are available, modelers are faced with the decision of whether it is better to spend computational resource on resolving topography at finer resolutions or on running more simulations to account for various uncertain input factors (e.g., model parameters). In this paper we apply global sensitivity analysis to explore how influential the choice of spatial resolution is when compared to uncertainties in the Manning's friction coefficient parameters, the inflow hydrograph, and those stemming from the coarsening of topographic data used to produce Digital Elevation Models (DEMs). We apply the hydraulic model LISFLOOD-FP to produce several temporally and spatially variable model outputs that represent different aspects of flood inundation processes, including flood extent, water depth, and time of inundation. We find that the most influential input factor for flood extent predictions changes during the flood event, starting with the inflow hydrograph during the rising limb before switching to the channel friction parameter during peak flood inundation, and finally to the floodplain friction parameter during the drying phase of the flood event. Spatial resolution and uncertainty introduced by resampling topographic data to coarser resolutions are much more important for water depth predictions, which are also sensitive to different input factors spatially and temporally. Our findings indicate that the sensitivity of LISFLOOD-FP predictions is more complex than previously thought. Consequently, the input factors that modelers should prioritize will differ depending on the model output assessed, and the location and time of when and where this output is most relevant.
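
    A generic variance-based sensitivity sketch of the type applied here, using the Saltelli/Jansen first-order and total-order estimators on a made-up stand-in for the flood model; the toy response function, factor names, and sample sizes are assumptions, not LISFLOOD-FP:

      import numpy as np

      # Variance-based (Sobol) sensitivity sketch on a toy "flood depth" function of
      # channel friction, floodplain friction, and an inflow scaling factor.
      rng = np.random.default_rng(1)
      d, n = 3, 4096

      def toy_flood_depth(x):
          n_ch, n_fp, q = x[:, 0], x[:, 1], x[:, 2]
          return 2.0 * q**0.6 * (1.0 + 3.0 * n_ch) * (1.0 + 0.5 * n_fp)

      A = rng.uniform(size=(n, d))              # two independent sample matrices
      B = rng.uniform(size=(n, d))
      fA, fB = toy_flood_depth(A), toy_flood_depth(B)
      var_y = np.var(np.concatenate([fA, fB]))

      for i, name in enumerate(["channel n", "floodplain n", "inflow"]):
          ABi = A.copy()
          ABi[:, i] = B[:, i]                   # A with column i taken from B
          fABi = toy_flood_depth(ABi)
          S1 = np.mean(fB * (fABi - fA)) / var_y          # first-order index (Saltelli 2010)
          ST = 0.5 * np.mean((fA - fABi) ** 2) / var_y    # total-order index (Jansen 1999)
          print(f"{name}: S1={S1:.2f}, ST={ST:.2f}")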

  15. Economic analysis of electronic waste recycling: modeling the cost and revenue of a materials recovery facility in California.

    PubMed

    Kang, Hai-Yong; Schoenung, Julie M

    2006-03-01

    The objectives of this study are to identify the various techniques used for treating electronic waste (e-waste) at material recovery facilities (MRFs) in the state of California and to investigate the costs and revenue drivers for these techniques. The economics of a representative e-waste MRF are evaluated by using technical cost modeling (TCM). MRFs are a critical element in the infrastructure being developed within the e-waste recycling industry. At an MRF, collected e-waste can become marketable output products including resalable systems/components and recyclable materials such as plastics, metals, and glass. TCM has two main constituents, inputs and outputs. Inputs are process-related and economic variables, which are directly specified in each model. Inputs can be divided into two parts: inputs for cost estimation and for revenue estimation. Outputs are the results of modeling and consist of costs and revenues, distributed by unit operation, cost element, and revenue source. The results of the present analysis indicate that the largest cost driver for the operation of the defined California e-waste MRF is the materials cost (37% of total cost), which includes the cost to outsource the recycling of the cathode ray tubes (CRTs) ($0.33/kg); the second largest cost driver is labor cost (28% of total cost without accounting for overhead). The other cost drivers are transportation, building, and equipment costs. The most costly unit operation is cathode ray tube glass recycling, and the next are sorting, collecting, and dismantling. The largest revenue source is the fee charged to the customer; metal recovery is the second largest revenue source.

  16. The economic impact of the Department of Energy on the state of New Mexico fiscal year 1997

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lansford, R.R.; Nielsen, T.G.; Schultz, J.

    1998-05-29

    The US Department of Energy (DOE) provides a major source of economic benefits in New Mexico. The agency's far-reaching economic influence within the state is the focus of this report. Economic benefits arising from the various activities and functions of both DOE and its contractors have accrued to the state continuously for over 50 years. For several years, DOE/Albuquerque Operations Office (AL) and New Mexico State University (NMSU) have maintained inter-industry, input-output modeling capabilities to assess DOE's impacts on the state of New Mexico and the other substate regions most directly impacted by DOE activities. One of the major uses of input-output techniques is to assess the effects, on an economy, of developments initiated outside the economy, such as federal DOE monies that flow into the state. The information on which the models are based is updated periodically to ensure the most accurate depiction possible of the economy for the period of reference. For this report, the reference periods are Fiscal Year (FY) 1996 and FY 1997. Total impacts represent both direct and indirect impacts (respending by business), including induced (respending by households) effects. The standard multipliers used in determining impacts result from the inter-industry, input-output models uniquely developed for New Mexico. This report includes seven main sections: (1) introduction; (2) profile of DOE activities in New Mexico; (3) DOE expenditure patterns; (4) measuring DOE/New Mexico's economic impact; (5) technology transfer within the federal labs funded by DOE/New Mexico; (6) glossary of terms; and (7) technical appendix containing a description of the model. 9 figs., 19 tabs.
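
    The core of the inter-industry input-output technique referred to above is the Leontief inverse; a small sketch with made-up three-sector coefficients (not the New Mexico model) shows how direct spending is translated into total (direct plus indirect) impacts and multipliers:

      import numpy as np

      # Total output x required to meet a final-demand change d is x = (I - A)^-1 d,
      # where A holds technical coefficients (purchases per dollar of output). The
      # three-sector coefficients and the spending vector are illustrative only.
      A = np.array([[0.10, 0.05, 0.02],
                    [0.20, 0.15, 0.10],
                    [0.05, 0.10, 0.08]])
      leontief_inverse = np.linalg.inv(np.eye(3) - A)

      doe_spending = np.array([100.0, 50.0, 25.0])        # direct final-demand change ($M)
      total_impact = leontief_inverse @ doe_spending      # direct + indirect output by sector
      output_multipliers = leontief_inverse.sum(axis=0)   # column sums = simple output multipliers

      print(total_impact, output_multipliers)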

  17. XFEL OSCILLATOR SIMULATION INCLUDING ANGLE-DEPENDENT CRYSTAL REFLECTIVITY

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fawley, William; Lindberg, Ryan; Kim, K-J

    The oscillator package within the GINGER FEL simulation code has now been extended to include angle-dependent reflectivity properties of Bragg crystals. Previously, the package was modified to include frequency-dependent reflectivity in order to model x-ray FEL oscillators from start-up from shot noise through to saturation. We present a summary of the algorithms used for modeling the crystal reflectivity and radiation propagation outside the undulator, discussing various numerical issues relevant to the domain of high Fresnel number and efficient Hankel transforms. We give some sample XFEL-O simulation results obtained with the angle-dependent reflectivity model, with particular attention directed to the longitudinal and transverse coherence of the radiation output.

  18. Techno-Economic Analysis of Integration of Low-Temperature Geothermal Resources for Coal-Fired Power Plants

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bearden, Mark D.; Davidson, Casie L.; Horner, Jacob A.

    Presented here are the results of a techno-economic (TEA) study of the potential for coupling low-grade geothermal resources to boost the electrical output from coal-fired power plants. This study includes identification of candidate 500 MW subcritical coal-fired power plants in the continental United States, followed by down-selection and characterization of the North Valmy generating station, a Nevada coal-fired plant. Based on site and plant characteristics, ASPEN Plus models were designed to evaluate options to integrate geothermal resources directly into existing processes at North Valmy. Energy outputs and capital costing are presented for numerous hybrid strategies, including integration with Organic Rankine Cycles (ORCs), which currently represent the primary technology for baseload geothermal power generation.

  19. ModelTest Server: a web-based tool for the statistical selection of models of nucleotide substitution online

    PubMed Central

    Posada, David

    2006-01-01

    ModelTest server is a web-based application for the selection of models of nucleotide substitution using the program ModelTest. The server takes as input a text file with likelihood scores for the set of candidate models. Models can be selected with hierarchical likelihood ratio tests, or with the Akaike or Bayesian information criteria. The output includes several statistics for the assessment of model selection uncertainty, for model averaging or to estimate the relative importance of model parameters. The server can be accessed at . PMID:16845102

  20. Large-Signal Klystron Simulations Using KLSC

    NASA Astrophysics Data System (ADS)

    Carlsten, B. E.; Ferguson, P.

    1997-05-01

    We describe a new, 2-1/2 dimensional, klystron-simulation code, KLSC. This code has a sophisticated input cavity model for calculating the klystron gain with arbitrary input cavity matching and tuning, and is capable of modeling coupled output cavities. We will discuss the input and output cavity models, and present simulation results from a high-power, S-band design. We will use these results to explore tuning issues with coupled output cavities.

  1. A numerical solution for the diffusion equation in hydrogeologic systems

    USGS Publications Warehouse

    Ishii, A.L.; Healy, R.W.; Striegl, Robert G.

    1989-01-01

    The documentation of a computer code for the numerical solution of the linear diffusion equation in one or two dimensions in Cartesian or cylindrical coordinates is presented. Applications of the program include molecular diffusion, heat conduction, and fluid flow in confined systems. The flow media may be anisotropic and heterogeneous. The model is formulated by replacing the continuous linear diffusion equation by discrete finite-difference approximations at each node in a block-centered grid. The resulting matrix equation is solved by the method of preconditioned conjugate gradients. The conjugate gradient method does not require the estimation of iteration parameters and is guaranteed convergent in the absence of rounding error. The matrixes are preconditioned to decrease the steps to convergence. The model allows the specification of any number of boundary conditions for any number of stress periods, and the output of a summary table for selected nodes showing flux and the concentration of the flux quantity for each time step. The model is written in a modular format for ease of modification. The model was verified by comparison of numerical and analytical solutions for cases of molecular diffusion, two-dimensional heat transfer, and axisymmetric radial saturated fluid flow. Application of the model to a hypothetical two-dimensional field situation of gas diffusion in the unsaturated zone is demonstrated. The input and output files are included as a check on program installation. The definition of variables, input requirements, flow chart, and program listing are included in the attachments. (USGS)
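
    A one-dimensional sketch in the same spirit as the code described above: a backward-Euler finite-difference step for the diffusion equation solved by conjugate gradients with a simple Jacobi preconditioner; grid size, coefficients, and the preconditioner choice are illustrative, not the USGS implementation:

      import numpy as np
      from scipy.sparse import diags
      from scipy.sparse.linalg import cg

      # Block-centered finite differences for du/dt = D d2u/dx2, advanced with
      # backward Euler: (I + r*L) u_new = u_old, solved by preconditioned CG.
      nx, D, dx, dt = 100, 1.0e-2, 1.0, 10.0
      r = D * dt / dx**2

      main = (1.0 + 2.0 * r) * np.ones(nx)      # zero-value boundaries at both ends for simplicity
      off = -r * np.ones(nx - 1)
      Amat = diags([off, main, off], [-1, 0, 1], format="csr")
      jacobi = diags(1.0 / main, format="csr")  # Jacobi (diagonal) preconditioner

      u = np.zeros(nx)
      u[nx // 2] = 1.0                          # initial concentration pulse

      for _ in range(50):                       # 50 time steps
          u, info = cg(Amat, u, M=jacobi)
          assert info == 0                      # 0 means CG converged

      print(u.max(), u.sum())                   # pulse spreads and decays through the boundaries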

  2. Frequency domain model for analysis of paralleled, series-output-connected Mapham inverters

    NASA Technical Reports Server (NTRS)

    Brush, Andrew S.; Sundberg, Richard C.; Button, Robert M.

    1989-01-01

    The Mapham resonant inverter is characterized as a two-port network driven by a selected periodic voltage. The two-port model is then used to model a pair of Mapham inverters connected in series and employing phasor voltage regulation. It is shown that the model is useful for predicting power output in paralleled inverter units, and for predicting harmonic current output of inverter pairs, using standard power flow techniques. Some sample results are compared to data obtained from testing hardware inverters.

  3. Frequency domain model for analysis of paralleled, series-output-connected Mapham inverters

    NASA Technical Reports Server (NTRS)

    Brush, Andrew S.; Sundberg, Richard C.; Button, Robert M.

    1989-01-01

    The Mapham resonant inverter is characterized as a two-port network driven by a selected periodic voltage. The two-port model is then used to model a pair of Mapham inverters connected in series and employing phasor voltage regulation. It is shown that the model is useful for predicting power output in paralleled inverter units, and for predicting harmonic current output of inverter pairs, using standard power flow techniques. Some examples are compared to data obtained from testing hardware inverters.

  4. Uncertainty and sensitivity analysis for photovoltaic system modeling.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hansen, Clifford W.; Pohl, Andrew Phillip; Jordan, Dirk

    2013-12-01

    We report an uncertainty and sensitivity analysis for modeling DC energy from photovoltaic systems. We consider two systems, each composed of a single module using either crystalline silicon or CdTe cells, and located either at Albuquerque, NM, or Golden, CO. Output from a PV system is predicted by a sequence of models. Uncertainty in the output of each model is quantified by empirical distributions of each model's residuals. We sample these distributions to propagate uncertainty through the sequence of models to obtain an empirical distribution for each PV system's output. We considered models that: (1) translate measured global horizontal, direct and global diffuse irradiance to plane-of-array irradiance; (2) estimate effective irradiance from plane-of-array irradiance; (3) predict cell temperature; and (4) estimate DC voltage, current and power. We found the uncertainty in PV system output to be relatively small, on the order of 1% for daily energy. Four alternative models were considered for the POA irradiance modeling step; we did not find the choice of one of these models to be of great significance. However, we observed that the POA irradiance model introduced a bias of upwards of 5% of daily energy which translates directly to a systematic difference in predicted energy. Sensitivity analyses relate uncertainty in the PV system output to uncertainty arising from each model. We found the residuals arising from the POA irradiance and the effective irradiance models to be the dominant contributors to residuals for daily energy, for either technology or location considered. This analysis indicates that efforts to reduce the uncertainty in PV system output should focus on improvements to the POA and effective irradiance models.
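
    A compact sketch of the residual-sampling propagation described in this record; the three model functions and the residual pools are placeholders standing in for the irradiance-to-power model chain, not the actual PV performance models:

      import numpy as np

      # Each modeling step has an empirical residual pool; resampling those residuals
      # and pushing them through the chain yields an empirical distribution of output.
      rng = np.random.default_rng(42)
      n_draws = 5000

      poa_residuals = rng.normal(0.0, 20.0, 500)        # W/m^2, placeholder residual pool
      eff_irr_residuals = rng.normal(0.0, 10.0, 500)
      power_residuals = rng.normal(0.0, 3.0, 500)       # W

      ghi = 800.0                                        # measured irradiance (W/m^2)

      def poa_model(ghi): return 1.05 * ghi              # placeholder transposition model
      def eff_irr_model(poa): return 0.97 * poa          # placeholder soiling/reflection model
      def dc_power_model(e_eff): return 0.22 * e_eff     # placeholder module model (W per W/m^2)

      samples = []
      for _ in range(n_draws):
          poa = poa_model(ghi) + rng.choice(poa_residuals)
          e_eff = eff_irr_model(poa) + rng.choice(eff_irr_residuals)
          p_dc = dc_power_model(e_eff) + rng.choice(power_residuals)
          samples.append(p_dc)

      samples = np.array(samples)
      print(samples.mean(), np.percentile(samples, [2.5, 97.5]))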

  5. Catchment virtual observatory for sharing flow and transport models outputs: using residence time distribution to compare contrasting catchments

    NASA Astrophysics Data System (ADS)

    Thomas, Zahra; Rousseau-Gueutin, Pauline; Kolbe, Tamara; Abbott, Ben; Marcais, Jean; Peiffer, Stefan; Frei, Sven; Bishop, Kevin; Le Henaff, Geneviève; Squividant, Hervé; Pichelin, Pascal; Pinay, Gilles; de Dreuzy, Jean-Raynald

    2017-04-01

    The distribution of groundwater residence time in a catchment provides synoptic information about catchment functioning (e.g. nutrient retention and removal, hydrograph flashiness). In contrast with interpreted model results, which are often not directly comparable between studies, residence time distribution is a general output that could be used to compare catchment behaviors and test hypotheses about landscape controls on catchment functioning. To this end, we created a virtual observatory platform called Catchment Virtual Observatory for Sharing Flow and Transport Model Outputs (COnSOrT). The main goal of COnSOrT is to collect outputs from calibrated groundwater models from a wide range of environments. By comparing a wide variety of catchments from different climatic, topographic and hydrogeological contexts, we expect to enhance understanding of catchment connectivity, resilience to anthropogenic disturbance, and overall functioning. The web-based observatory will also provide software tools to analyze model outputs. The observatory will enable modelers to test their models in a wide range of catchment environments to evaluate the generality of their findings and robustness of their post-processing methods. Researchers with calibrated numerical models can benefit from the observatory by using the post-processing methods to implement a new approach to analyzing their data. Field scientists interested in contributing data could invite modelers associated with the observatory to test their models against observed catchment behavior. COnSOrT will allow meta-analyses with community contributions to generate new understanding and identify promising pathways for moving beyond single-catchment ecohydrology. Keywords: Residence time distribution, Model outputs, Catchment hydrology, Inter-catchment comparison

  6. High power RF solid state power amplifier system

    NASA Technical Reports Server (NTRS)

    Sims, III, William Herbert (Inventor); Chavers, Donald Gregory (Inventor); Richeson, James J. (Inventor)

    2011-01-01

    A high power, high frequency, solid state power amplifier system includes a plurality of input multiple port splitters for receiving a high-frequency input and for dividing the input into a plurality of outputs and a plurality of solid state amplifier units. Each amplifier unit includes a plurality of amplifiers, and each amplifier is individually connected to one of the outputs of multiport splitters and produces a corresponding amplified output. A plurality of multiport combiners combine the amplified outputs of the amplifiers of each of the amplifier units to a combined output. Automatic level control protection circuitry protects the amplifiers and maintains a substantial constant amplifier power output.

  7. Fuzzy logic control and optimization system

    DOEpatents

    Lou, Xinsheng [West Hartford, CT

    2012-04-17

    A control system (300) for optimizing a power plant includes a chemical loop having an input for receiving an input signal (369) and an output for outputting an output signal (367), and a hierarchical fuzzy control system (400) operably connected to the chemical loop. The hierarchical fuzzy control system (400) includes a plurality of fuzzy controllers (330). The hierarchical fuzzy control system (400) receives the output signal (367), optimizes the input signal (369) based on the received output signal (367), and outputs an optimized input signal (369) to the input of the chemical loop to control a process of the chemical loop in an optimized manner.

  8. Impact of device level faults in a digital avionic processor

    NASA Technical Reports Server (NTRS)

    Suk, Ho Kim

    1989-01-01

    This study describes an experimental analysis of the impact of gate and device-level faults in the processor of a Bendix BDX-930 flight control system. Via mixed-mode simulation, faults were injected at the gate (stuck-at) and transistor levels, and their propagation through the chip to the output pins was measured. The results show that there is little correspondence between a stuck-at and a device-level fault model, as far as error activity or detection within a functional unit is concerned. Insofar as error activity outside the injected unit and at the output pins is concerned, the stuck-at and device models track each other. The stuck-at model, however, overestimates, by over 100 percent, the probability of fault propagation to the output pins. An evaluation of the Mean Error Durations and the Mean Time Between Errors at the output pins shows that the stuck-at model significantly underestimates (by 62 percent) the impact of an internal chip fault on the output pins. Finally, the study also quantifies the impact of device faults by location, both internally and at the output pins.

  9. System for memorizing maximum values

    NASA Technical Reports Server (NTRS)

    Bozeman, Richard J., Jr. (Inventor)

    1992-01-01

    The invention discloses a system capable of memorizing maximum sensed values. The system includes conditioning circuitry which receives the analog output signal from a sensor transducer. The conditioning circuitry rectifies and filters the analog signal and provides an input signal to a digital driver, which may be either linear or logarithmic. The driver converts the analog signal to discrete digital values, which in turn triggers an output signal on one of a plurality of driver output lines n. The particular output lines selected is dependent on the converted digital value. A microfuse memory device connects across the driver output lines, with n segments. Each segment is associated with one driver output line, and includes a microfuse that is blown when a signal appears on the associated driver output line.

  10. System for memorizing maximum values

    NASA Astrophysics Data System (ADS)

    Bozeman, Richard J., Jr.

    1992-08-01

    The invention discloses a system capable of memorizing maximum sensed values. The system includes conditioning circuitry which receives the analog output signal from a sensor transducer. The conditioning circuitry rectifies and filters the analog signal and provides an input signal to a digital driver, which may be either linear or logarithmic. The driver converts the analog signal to discrete digital values, which in turn triggers an output signal on one of a plurality of driver output lines n. The particular output lines selected is dependent on the converted digital value. A microfuse memory device connects across the driver output lines, with n segments. Each segment is associated with one driver output line, and includes a microfuse that is blown when a signal appears on the associated driver output line.

  11. System for Memorizing Maximum Values

    NASA Technical Reports Server (NTRS)

    Bozeman, Richard J., Jr. (Inventor)

    1996-01-01

    The invention discloses a system capable of memorizing maximum sensed values. The system includes conditioning circuitry which receives the analog output signal from a sensor transducer. The conditioning circuitry rectifies and filters the analog signal and provides an input signal to a digital driver, which may be either linear or logarithmic. The driver converts the analog signal to discrete digital values, which in turn triggers an output signal on one of a plurality of driver output lines n. The particular output lines selected is dependent on the converted digital value. A microfuse memory device connects across the driver output lines, with n segments. Each segment is associated with one driver output line, and includes a microfuse that is blown when a signal appears on the associated driver output line.

  12. Moment-based metrics for global sensitivity analysis of hydrological systems

    NASA Astrophysics Data System (ADS)

    Dell'Oca, Aronne; Riva, Monica; Guadagnini, Alberto

    2017-12-01

    We propose new metrics to assist global sensitivity analysis, GSA, of hydrological and Earth systems. Our approach allows assessing the impact of uncertain parameters on main features of the probability density function, pdf, of a target model output, y. These include the expected value of y, the spread around the mean and the degree of symmetry and tailedness of the pdf of y. Since reliable assessment of higher-order statistical moments can be computationally demanding, we couple our GSA approach with a surrogate model, approximating the full model response at a reduced computational cost. Here, we consider the generalized polynomial chaos expansion (gPCE), other model reduction techniques being fully compatible with our theoretical framework. We demonstrate our approach through three test cases, including an analytical benchmark, a simplified scenario mimicking pumping in a coastal aquifer and a laboratory-scale conservative transport experiment. Our results allow ascertaining which parameters can impact some moments of the model output pdf while being uninfluential to others. We also investigate the error associated with the evaluation of our sensitivity metrics by replacing the original system model through a gPCE. Our results indicate that the construction of a surrogate model with increasing level of accuracy might be required depending on the statistical moment considered in the GSA. The approach is fully compatible with (and can assist the development of) analysis techniques employed in the context of reduction of model complexity, model calibration, design of experiment, uncertainty quantification and risk assessment.

  13. Evaluating soil carbon in global climate models: benchmarking, future projections, and model drivers

    NASA Astrophysics Data System (ADS)

    Todd-Brown, K. E.; Randerson, J. T.; Post, W. M.; Allison, S. D.

    2012-12-01

    The carbon cycle plays a critical role in how the climate responds to anthropogenic carbon dioxide. To evaluate how well Earth system models (ESMs) from the Climate Model Intercomparison Project (CMIP5) represent the carbon cycle, we examined predictions of current soil carbon stocks from the historical simulation. We compared the soil and litter carbon pools from 17 ESMs with data on soil carbon stocks from the Harmonized World Soil Database (HWSD). We also examined soil carbon predictions for 2100 from 16 ESMs from the rcp85 (highest radiative forcing) simulation to investigate the effects of climate change on soil carbon stocks. In both analyses, we used a reduced complexity model to separate the effects of variation in model drivers from the effects of model parameters on soil carbon predictions. Drivers included NPP, soil temperature, and soil moisture, and the reduced complexity model represented one pool of soil carbon as a function of these drivers. The ESMs predicted global soil carbon totals of 500 to 2980 Pg-C, compared to 1260 Pg-C in the HWSD. This 5-fold variation in predicted soil stocks was a consequence of a 3.4-fold variation in NPP inputs and 3.8-fold variability in mean global turnover times. None of the ESMs correlated well with the global distribution of soil carbon in the HWSD (Pearson's correlation <0.40, RMSE 9-22 kg m-2). On a biome level there was a broad range of agreement between the ESMs and the HWSD. Some models predicted HWSD biome totals well (R2=0.91) while others did not (R2=0.23). All of the ESM terrestrial decomposition models are structurally similar with outputs that were well described by a reduced complexity model that included NPP and soil temperature (R2 of 0.73-0.93). However, MPI-ESM-LR outputs showed only a moderate fit to this model (R2=0.51), and CanESM2 outputs were better described by a reduced model that included soil moisture (R2=0.74). We also found a broad range in soil carbon responses to climate change predicted by the ESMs, with changes of -480 to 230 Pg-C from 2005-2100. All models that reported NPP and heterotrophic respiration showed increases in both of these processes over the simulated period. In two of the models, soils switched from a global sink for carbon to a net source. Of the remaining models, half predicted that soils were a sink for carbon throughout the time period and the other half predicted that soils were a carbon source. Heterotrophic respiration in most of the models from 2005-2100 was well explained by a reduced complexity model dependent on soil carbon, soil temperature, and soil moisture (R2 values >0.74). However, MPI-ESM (R2=0.45) showed only moderate fit to this model. Our analysis shows that soil carbon predictions from ESMs are highly variable, with much of this variability due to model parameterization and variations in driving variables. Furthermore, our reduced complexity models show that most variation in ESM outputs can be explained by a simple one-pool model with a small number of drivers and parameters. Therefore, agreement between soil carbon predictions across models could improve substantially by reconciling differences in driving variables and the parameters that link soil carbon with environmental drivers. However, it is unclear if this model agreement would reflect what is truly happening in the Earth system.
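
    To make the "reduced complexity" idea concrete, the sketch below implements a one-pool, steady-state soil carbon model in which soil carbon equals NPP times a temperature-dependent turnover time. The Q10 functional form and all parameter values are illustrative assumptions, not the fitted parameters from the study.

```python
# A minimal sketch of a one-pool, steady-state soil carbon model of the kind
# used as a reduced-complexity benchmark above: soil C = NPP x turnover time,
# with turnover declining as soil temperature rises. Q10 form and parameter
# values are illustrative assumptions.
import numpy as np

def soil_carbon(npp, soil_temp_c, tau_ref_years=25.0, q10=1.5, t_ref_c=15.0):
    """Steady-state soil C (kg C m-2) from NPP (kg C m-2 yr-1) and soil T (deg C)."""
    turnover = tau_ref_years * q10 ** (-(soil_temp_c - t_ref_c) / 10.0)
    return npp * turnover

npp = np.array([0.2, 0.6, 1.0])        # tundra-like to tropical-like NPP
temp = np.array([-5.0, 10.0, 25.0])    # matching mean soil temperatures
print(soil_carbon(npp, temp))          # colder sites store more C per unit NPP
```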

  14. The Pelagics Habitat Analysis Module (PHAM): Decision Support Tools for Pelagic Fisheries

    NASA Astrophysics Data System (ADS)

    Armstrong, E. M.; Harrison, D. P.; Kiefer, D.; O'Brien, F.; Hinton, M.; Kohin, S.; Snyder, S.

    2009-12-01

    PHAM is a project funded by NASA to integrate satellite imagery and circulation models into the management of commercial and threatened pelagic species. Specifically, the project merges data from fishery surveys, and fisheries catch and effort data with satellite imagery and circulation models to define the habitat of each species. This new information on habitat will then be used to inform population distribution and models of population dynamics that are used for management. During the first year of the project, we created two prototype modules. One module, which was developed for the Inter-American Tropical Tuna Commission, is designed to help improve information available to manage the tuna fisheries of the eastern Pacific Ocean. The other module, which was developed for the Coastal Pelagics Division of the Southwest Fishery Science Center, assists management of by-catch of mako, blue, and thresher sharks along the Californian coast. Both modules were built with the EASy marine geographic information system, which provides a 4 dimensional (latitude, longitude, depth, and time) home for integration of the data. The projects currently provide tools for automated downloading and geo-referencing of satellite imagery of sea surface temperature, height, and chlorophyll concentrations; output from JPL’s ECCO2 global circulation model and its ROM California current model; and gridded data from fisheries and fishery surveys. It also provides statistical tools for defining species habitat from these and other types of environmental data. These tools include unbalanced ANOVA, EOF analysis of satellite imagery, and multivariate search routines for fitting fishery data to transforms of the environmental data. Output from the projects consists of dynamic maps of the distribution of the species that are driven by the time series of satellite imagery and output from the circulation models. It also includes relationships between environmental variables and recruitment. During the talk, we will briefly demonstrate features of the software and present the results of our analyses of habitats.

  15. The Dynamic General Vegetation Model MC1 over the United States and Canada at a 5-arcminute resolution: model inputs and outputs

    Treesearch

    Ray Drapek; John B. Kim; Ronald P. Neilson

    2015-01-01

    Land managers need to include climate change in their decisionmaking, but the climate models that project future climates operate at spatial scales that are too coarse to be of direct use. To create a dataset more useful to managers, soil and historical climate were assembled for the United States and Canada at a 5-arcminute grid resolution. Nine CMIP3 future climate...

  16. Turbulence simulation mechanization for Space Shuttle Orbiter dynamics and control studies

    NASA Technical Reports Server (NTRS)

    Tatom, F. B.; King, R. L.

    1977-01-01

    The current version of the NASA turbulence simulation model, in the form of a digital computer program, TBMOD, is described. The logic of the program is discussed and all inputs and outputs are defined. An alternate method of shear simulation suitable for incorporation into the model is presented. The simulation is based on a von Karman spectrum and the assumption of isotropy. The resulting spectral density functions for the shear model are included.
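
    For reference, the von Karman longitudinal gust spectrum in its standard handbook (spatial-frequency) form is sketched below. The 1.339 coefficient and the 5/6 exponent are the usual textbook values; TBMOD's exact coefficients and normalization may differ.

```python
# Textbook von Karman longitudinal turbulence spectrum (spatial-frequency
# form), the family of spectra the abstract says the simulation is based on.
# Coefficients follow the standard handbook expression, which may not match
# TBMOD's implementation exactly.
import numpy as np

def von_karman_longitudinal_psd(omega, sigma_u, L_u):
    """PSD of the longitudinal gust component.

    omega   : spatial frequency, rad/m
    sigma_u : RMS gust intensity, m/s
    L_u     : turbulence length scale, m
    """
    return sigma_u**2 * (2.0 * L_u / np.pi) / (1.0 + (1.339 * L_u * omega)**2) ** (5.0 / 6.0)

omega = np.logspace(-4, 0, 5)
print(von_karman_longitudinal_psd(omega, sigma_u=1.0, L_u=762.0))
```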

  17. GridPV Toolbox

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Broderick, Robert; Quiroz, Jimmy; Grijalva, Santiago

    2014-07-15

    Matlab Toolbox for simulating the impact of solar energy on the distribution grid. The majority of the functions are useful for interfacing OpenDSS and MATLAB, and they are of generic use for commanding OpenDSS from MATLAB and retrieving GridPV Toolbox information from simulations. A set of functions is also included for modeling PV plant output and setting up the PV plant in the OpenDSS simulation. The toolbox contains functions for modeling the OpenDSS distribution feeder on satellite images with GPS coordinates. Finally, example simulations functions are included to show potential uses of the toolbox functions.

  18. QUANTIFYING THE CLIMATE, AIR QUALITY AND HEALTH BENEFITS OF IMPROVED COOKSTOVES: AN INTEGRATED LABORATORY, FIELD AND MODELING STUDY

    EPA Science Inventory

    Expected results and outputs include: extensive dataset of in-field and laboratory emissions data for traditional and improved cookstoves; parameterization to predict cookstove emissions from drive cycle data; indoor and personal exposure data for traditional and improved cook...

  19. Monitoring the Condition of Education.

    ERIC Educational Resources Information Center

    Buccino, Alphonse

    Five categories of data collection are recommended for monitoring the quality of education: (1) outcomes, based on an input-output model, including data from student testing and credentials and degrees; (2) participation--who is served by education; (3) resources available to education; (4) long-term impact of education on work, income,…

  20. Gain control through divisive inhibition prevents abrupt transition to chaos in a neural mass model.

    PubMed

    Papasavvas, Christoforos A; Wang, Yujiang; Trevelyan, Andrew J; Kaiser, Marcus

    2015-09-01

    Experimental results suggest that there are two distinct mechanisms of inhibition in cortical neuronal networks: subtractive and divisive inhibition. They modulate the input-output function of their target neurons either by increasing the input that is needed to reach maximum output or by reducing the gain and the value of maximum output itself, respectively. However, the role of these mechanisms on the dynamics of the network is poorly understood. We introduce a novel population model and numerically investigate the influence of divisive inhibition on network dynamics. Specifically, we focus on the transitions from a state of regular oscillations to a state of chaotic dynamics via period-doubling bifurcations. The model with divisive inhibition exhibits a universal transition rate to chaos (Feigenbaum behavior). In contrast, in an equivalent model without divisive inhibition, transition rates to chaos are not bounded by the universal constant (non-Feigenbaum behavior). This non-Feigenbaum behavior, when only subtractive inhibition is present, is linked to the interaction of bifurcation curves in the parameter space. Indeed, searching the parameter space showed that such interactions are impossible when divisive inhibition is included. Therefore, divisive inhibition prevents non-Feigenbaum behavior and, consequently, any abrupt transition to chaos. The results suggest that the divisive inhibition in neuronal networks could play a crucial role in keeping the states of order and chaos well separated and in preventing the onset of pathological neural dynamics.
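
    A loose, illustrative way to see the two mechanisms is a Wilson-Cowan-style rate model in which inhibition either subtracts from the excitatory drive or divides it (gain modulation). The sketch below is not the authors' population model; the sigmoid nonlinearity, coupling weights and time constants are made-up values chosen only to show the structural difference.

```python
# Illustrative (not the authors' exact) rate model contrasting the two
# inhibition mechanisms: subtractive inhibition shifts the input needed to
# reach maximum output, divisive inhibition scales the gain / maximum output.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def step(E, I, drive, divisive=True, dt=1e-3, tau=0.01,
         w_ee=16.0, w_ei=12.0, w_ie=15.0, w_ii=3.0):
    if divisive:
        # Inhibition divides the excitatory drive (gain modulation).
        e_input = (w_ee * E + drive) / (1.0 + w_ei * I)
    else:
        # Inhibition is subtracted from the excitatory drive.
        e_input = w_ee * E + drive - w_ei * I
    dE = (-E + sigmoid(e_input)) / tau
    dI = (-I + sigmoid(w_ie * E - w_ii * I)) / tau
    return E + dt * dE, I + dt * dI

E, I = 0.1, 0.1
for _ in range(5000):
    E, I = step(E, I, drive=2.0, divisive=True)
print(f"steady-state excitatory rate with divisive inhibition: {E:.3f}")
```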

  1. Comparative-effectiveness research to aid population decision making by relating clinical outcomes and quality-adjusted life years.

    PubMed

    Campbell, Jonathan D; Zerzan, Judy; Garrison, Louis P; Libby, Anne M

    2013-04-01

    Comparative-effectiveness research (CER) at the population level is missing standardized approaches to quantify and weigh interventions in terms of their clinical risks, benefits, and uncertainty. We proposed an adapted CER framework for population decision making, provided example displays of the outputs, and discussed the implications for population decision makers. Building on decision-analytical modeling but excluding cost, we proposed a 2-step approach to CER that explicitly compared interventions in terms of clinical risks and benefits and linked this evidence to the quality-adjusted life year (QALY). The first step was a traditional intervention-specific evidence synthesis of risks and benefits. The second step was a decision-analytical model to simulate intervention-specific progression of disease over an appropriate time. The output was the ability to compare and quantitatively link clinical outcomes with QALYs. The outputs from these CER models include clinical risks, benefits, and QALYs over flexible and relevant time horizons. This approach yields an explicit, structured, and consistent quantitative framework to weigh all relevant clinical measures. Population decision makers can use this modeling framework and QALYs to aid in their judgment of the individual and collective risks and benefits of the alternatives over time. Future research should study effective communication of these domains for stakeholders. Copyright © 2013 Elsevier HS Journals, Inc. All rights reserved.
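
    The second step of the framework, a decision-analytic model that turns intervention-specific clinical evidence into QALYs, can be illustrated with a toy Markov cohort model. All state names, transition probabilities, utilities and the discount rate below are hypothetical and serve only to show the linkage, not the article's actual model.

```python
# Toy decision-analytic sketch in the spirit of the two-step CER framework:
# intervention-specific transition probabilities (the output of an evidence
# synthesis) are run through a three-state Markov cohort model to produce
# QALYs. All numbers are hypothetical.
import numpy as np

utilities = np.array([1.0, 0.6, 0.0])   # QALY weight per cycle for well / sick / dead

def qalys(transition_matrix, cycles=20, discount=0.03):
    cohort = np.array([1.0, 0.0, 0.0])  # everyone starts in the "well" state
    total = 0.0
    for t in range(cycles):
        total += (cohort @ utilities) / (1.0 + discount) ** t
        cohort = cohort @ transition_matrix
    return total

usual_care = np.array([[0.85, 0.10, 0.05],
                       [0.05, 0.80, 0.15],
                       [0.00, 0.00, 1.00]])
new_drug   = np.array([[0.90, 0.07, 0.03],
                       [0.10, 0.80, 0.10],
                       [0.00, 0.00, 1.00]])

print(f"usual care: {qalys(usual_care):.2f} QALYs, new drug: {qalys(new_drug):.2f} QALYs")
```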

  2. Study of hydrological extremes - floods and droughts in global river basins using satellite data and model output

    NASA Astrophysics Data System (ADS)

    Lakshmi, V.; Fayne, J.; Bolten, J. D.

    2016-12-01

    We will use satellite data from TRMM (Tropical Rainfall Measuring Mission), AMSR (Advanced Microwave Scanning Radiometer), GRACE (Gravity Recovery and Climate Experiment) and MODIS (Moderate Resolution Imaging Spectroradiometer) and model output from NASA GLDAS (Global Land Data Assimilation System) to understand the linkages between hydrological variables. These hydrological variables include precipitation, soil moisture, vegetation index, surface temperature, ET, and total water. We will present results for major river basins such as the Amazon, Colorado, Mississippi, California, Danube, Nile, Congo, Yangtze, Mekong, Murray-Darling and Ganga-Brahmaputra. The major floods and droughts in these watersheds will be mapped in time and space using the satellite data and model outputs mentioned above. We will analyze the various hydrological variables and conduct a synergistic study during times of floods and droughts. In order to compare hydrological variables between river basins with vastly different climate and land use, we construct an index that is scaled by the climatology. This allows us to compare across different climate, topography, soils and land use regimes. The analysis shows that the hydrological variables derived from satellite data and NASA models clearly reflect the hydrological extremes. This is especially true when data from different sensors are analyzed together - for example rainfall data from TRMM and total water data from GRACE. Such analyses will help to construct prediction tools for water resources applications.
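
    One common way to build such a climatology-scaled index is a standardized anomaly: the value minus its monthly climatological mean, divided by the monthly standard deviation. The sketch below shows this generic construction on a hypothetical basin-average soil moisture series; it is not necessarily the authors' exact index.

```python
# Generic climatology-scaled index: a standardized anomaly computed per
# calendar month, which lets basins with very different climates be compared
# on a common scale. The time series here is synthetic.
import numpy as np
import pandas as pd

idx = pd.date_range("2003-01-01", "2015-12-01", freq="MS")
rng = np.random.default_rng(1)
soil_moisture = pd.Series(0.25 + 0.05 * np.sin(2 * np.pi * idx.month / 12)
                          + 0.02 * rng.standard_normal(len(idx)), index=idx)

clim_mean = soil_moisture.groupby(soil_moisture.index.month).transform("mean")
clim_std = soil_moisture.groupby(soil_moisture.index.month).transform("std")
anomaly_index = (soil_moisture - clim_mean) / clim_std

# Values below roughly -1.5 would flag unusually dry (drought-like) months.
print(anomaly_index.tail())
```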

  3. Comparing Internet Probing Methodologies Through an Analysis of Large Dynamic Graphs

    DTIC Science & Technology

    2014-06-01

    comparable Internet topologies in less time. We compare these by modeling the union of traceroute outputs as graphs, and study the graphs using standard graph theoretical measurements such as vertex and edge count and average vertex degree.

  4. A Hierarchical multi-input and output Bi-GRU Model for Sentiment Analysis on Customer Reviews

    NASA Astrophysics Data System (ADS)

    Zhang, Liujie; Zhou, Yanquan; Duan, Xiuyu; Chen, Ruiqi

    2018-03-01

    Multi-label sentiment classification of customer reviews is a practical and challenging task in Natural Language Processing. In this paper, we propose a hierarchical multi-input and output model based on bi-directional recurrent neural networks, which considers both the semantic and lexical information of emotional expression. Our model applies two independent Bi-GRU layers to generate part-of-speech and sentence representations. The lexical information is then incorporated via attention over the output of a softmax activation on the part-of-speech representation. In addition, we combine the probabilities of auxiliary labels as features with the hidden layer to capture crucial correlations between output labels. The experimental results show that our model is computationally efficient and achieves breakthrough improvements on the customer reviews dataset.
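
    A rough PyTorch sketch of this kind of architecture is given below: one bidirectional GRU over word embeddings, an independent bidirectional GRU over part-of-speech tags, attention weights derived from the POS branch, and sigmoid outputs for multi-label sentiment. The layer sizes, the exact wiring and the use of attention only on the POS branch are illustrative assumptions, not the authors' precise model.

```python
# Rough sketch (not the authors' exact model) of a hierarchical multi-input /
# multi-output Bi-GRU classifier: word and POS branches, POS-driven attention,
# sigmoid multi-label output.
import torch
import torch.nn as nn

class HierBiGRU(nn.Module):
    def __init__(self, vocab_size, pos_size, n_labels, emb=100, hidden=64):
        super().__init__()
        self.word_emb = nn.Embedding(vocab_size, emb)
        self.pos_emb = nn.Embedding(pos_size, emb)
        self.word_gru = nn.GRU(emb, hidden, bidirectional=True, batch_first=True)
        self.pos_gru = nn.GRU(emb, hidden, bidirectional=True, batch_first=True)
        self.attn = nn.Linear(2 * hidden, 1)      # attention scores from POS branch
        self.out = nn.Linear(2 * hidden, n_labels)

    def forward(self, words, pos_tags):
        w, _ = self.word_gru(self.word_emb(words))     # (B, T, 2H)
        p, _ = self.pos_gru(self.pos_emb(pos_tags))    # (B, T, 2H)
        alpha = torch.softmax(self.attn(p), dim=1)     # (B, T, 1) attention weights
        sent = (alpha * w).sum(dim=1)                  # weighted sentence vector
        return torch.sigmoid(self.out(sent))           # multi-label probabilities

model = HierBiGRU(vocab_size=5000, pos_size=50, n_labels=6)
words = torch.randint(0, 5000, (8, 30))
pos_tags = torch.randint(0, 50, (8, 30))
print(model(words, pos_tags).shape)   # torch.Size([8, 6])
```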

  5. Available pressure amplitude of linear compressor based on phasor triangle model

    NASA Astrophysics Data System (ADS)

    Duan, C. X.; Jiang, X.; Zhi, X. Q.; You, X. K.; Qiu, L. M.

    2017-12-01

    Linear compressors for cryocoolers possess the advantages of long-life operation, high efficiency, low vibration and compact structure. It is important to study the match mechanisms between the compressor and the cold finger, which determine the working efficiency of the cryocooler. However, the output characteristics of a linear compressor are complicated, since they are affected by many interacting parameters. Existing matching methods are simplified and mainly focus on compressor efficiency and output acoustic power, while neglecting the important output parameter of pressure amplitude. In this study, a phasor triangle model based on an analysis of the forces acting on the piston is proposed. It can be used to predict not only the output acoustic power and the efficiency, but also the pressure amplitude of the linear compressor. Calculated results agree well with the measurement results of the experiment. With this phasor triangle model, the theoretical maximum output pressure amplitude of the linear compressor can be calculated simply from a known charging pressure and operating frequency. Compared with the mechanical and electrical model of the linear compressor, the new model provides a more intuitive understanding of the match mechanism with a faster computational process. The model can also explain the experimentally observed proportional relationship between the output pressure amplitude and the piston displacement. Further model analysis confirms this phenomenon as an expression of an unmatched compressor design. The phasor triangle model may provide an alternative method for compressor design and matching with the cold finger.

  6. Basolateral amygdala and stress-induced hyperexcitability affect motivated behaviors and addiction.

    PubMed

    Sharp, B M

    2017-08-08

    The amygdala integrates and processes incoming information pertinent to reward and to emotions such as fear and anxiety that promote survival by warning of potential danger. Basolateral amygdala (BLA) communicates bi-directionally with brain regions affecting cognition, motivation and stress responses including prefrontal cortex, hippocampus, nucleus accumbens and hindbrain regions that trigger norepinephrine-mediated stress responses. Disruption of intrinsic amygdala and BLA regulatory neurocircuits is often caused by dysfunctional neuroplasticity frequently due to molecular alterations in local GABAergic circuits and principal glutamatergic output neurons. Changes in local regulation of BLA excitability underlie behavioral disturbances characteristic of disorders including post-traumatic stress syndrome (PTSD), autism, attention-deficit hyperactivity disorder (ADHD) and stress-induced relapse to drug use. In this Review, we discuss molecular mechanisms and neural circuits that regulate physiological and stress-induced dysfunction of BLA/amygdala and its principal output neurons. We consider effects of stress on motivated behaviors that depend on BLA; these include drug taking and drug seeking, with emphasis on nicotine-dependent behaviors. Throughout, we take a translational approach by integrating decades of addiction research on animal models and human trials. We show that changes in BLA function identified in animal addiction models illuminate human brain imaging and behavioral studies by more precisely delineating BLA mechanisms. In summary, BLA is required to promote responding for natural reward and respond to second-order drug-conditioned cues; reinstate cue-dependent drug seeking; express stress-enhanced reacquisition of nicotine intake; and drive anxiety and fear. Converging evidence indicates that chronic stress causes BLA principal output neurons to become hyperexcitable.

  7. Basolateral amygdala and stress-induced hyperexcitability affect motivated behaviors and addiction

    PubMed Central

    Sharp, B M

    2017-01-01

    The amygdala integrates and processes incoming information pertinent to reward and to emotions such as fear and anxiety that promote survival by warning of potential danger. Basolateral amygdala (BLA) communicates bi-directionally with brain regions affecting cognition, motivation and stress responses including prefrontal cortex, hippocampus, nucleus accumbens and hindbrain regions that trigger norepinephrine-mediated stress responses. Disruption of intrinsic amygdala and BLA regulatory neurocircuits is often caused by dysfunctional neuroplasticity frequently due to molecular alterations in local GABAergic circuits and principal glutamatergic output neurons. Changes in local regulation of BLA excitability underlie behavioral disturbances characteristic of disorders including post-traumatic stress syndrome (PTSD), autism, attention-deficit hyperactivity disorder (ADHD) and stress-induced relapse to drug use. In this Review, we discuss molecular mechanisms and neural circuits that regulate physiological and stress-induced dysfunction of BLA/amygdala and its principal output neurons. We consider effects of stress on motivated behaviors that depend on BLA; these include drug taking and drug seeking, with emphasis on nicotine-dependent behaviors. Throughout, we take a translational approach by integrating decades of addiction research on animal models and human trials. We show that changes in BLA function identified in animal addiction models illuminate human brain imaging and behavioral studies by more precisely delineating BLA mechanisms. In summary, BLA is required to promote responding for natural reward and respond to second-order drug-conditioned cues; reinstate cue-dependent drug seeking; express stress-enhanced reacquisition of nicotine intake; and drive anxiety and fear. Converging evidence indicates that chronic stress causes BLA principal output neurons to become hyperexcitable. PMID:28786979

  8. Heart Performance Determination by Visualization in Larval Fishes: Influence of Alternative Models for Heart Shape and Volume

    PubMed Central

    Perrichon, Prescilla; Grosell, Martin; Burggren, Warren W.

    2017-01-01

    Understanding cardiac function in developing larval fishes is crucial for assessing their physiological condition and overall health. Cardiac output measurements in transparent fish larvae and other vertebrates have long been made by analyzing videos of the beating heart, and modeling this structure using a conventional simple prolate spheroid shape model. However, the larval fish heart changes shape during early development and subsequent maturation, but no consideration has been made of the effect of different heart geometries on cardiac output estimation. The present study assessed the validity of three different heart models (the “standard” prolate spheroid model as well as a cylinder and cone tip + cylinder model) applied to digital images of complete cardiac cycles in larval mahi-mahi and red drum. The inherent error of each model was determined to allow for more precise calculation of stroke volume and cardiac output. The conventional prolate spheroid and cone tip + cylinder models yielded significantly different stroke volume values at 56 hpf in red drum and from 56 to 104 hpf in mahi. End-diastolic and stroke volumes modeled by just a simple cylinder shape were 30–50% higher compared to the conventional prolate spheroid. However, when these values of stroke volume were multiplied by heart rate to calculate cardiac output, no significant differences between models emerged because of considerable variability in heart rate. Essentially, the conventional prolate spheroid shape model provides the simplest measurement with the lowest variability of stroke volume and cardiac output. However, assessment of heart function, especially if stroke volume is the focus of the study, should consider larval heart shape, with different models being applied on a species-by-species and developmental stage-by-stage basis for best estimation of cardiac output. PMID:28725199
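
    The geometric volume formulas behind the three shape models are simple to write down, as sketched below. How the measured long-axis length and short-axis diameter map onto each shape, and in particular how the cone tip and cylinder share the long axis (the cone_frac parameter), are illustrative assumptions rather than the paper's calibration; the dimensions are hypothetical.

```python
# Volume formulas for the three heart-shape models compared above, given a
# long-axis length L and short-axis diameter D measured from video frames.
# cone_frac (how much of L belongs to the cone tip) is an illustrative
# assumption, not the paper's value.
import math

def prolate_spheroid(L, D):
    return math.pi / 6.0 * L * D**2

def cylinder(L, D):
    return math.pi / 4.0 * D**2 * L

def cone_tip_plus_cylinder(L, D, cone_frac=0.3):
    cyl = math.pi / 4.0 * D**2 * (1.0 - cone_frac) * L
    cone = math.pi / 12.0 * D**2 * cone_frac * L   # cone volume = 1/3 of a cylinder
    return cyl + cone

# Hypothetical end-diastolic and end-systolic dimensions in mm.
edv = prolate_spheroid(0.25, 0.15)
esv = prolate_spheroid(0.20, 0.11)
stroke_volume = edv - esv                    # mm^3 per beat
cardiac_output = stroke_volume * 180         # mm^3 per minute at 180 beats/min
print(f"SV = {stroke_volume:.6f} mm^3, CO = {cardiac_output:.5f} mm^3/min")
```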

  9. International trade inoperability input-output model (IT-IIM): theory and application.

    PubMed

    Jung, Jeesang; Santos, Joost R; Haimes, Yacov Y

    2009-01-01

    The inoperability input-output model (IIM) has been used for analyzing disruptions due to man-made or natural disasters that can adversely affect the operation of economic systems or critical infrastructures. Taking economic perturbation for each sector as inputs, the IIM provides the degree of economic production impacts on all industry sectors as the outputs for the model. The current version of the IIM does not provide a separate analysis for the international trade component of the inoperability. If an important port of entry (e.g., Port of Los Angeles) is disrupted, then international trade inoperability becomes a highly relevant subject for analysis. To complement the current IIM, this article develops the International Trade-IIM (IT-IIM). The IT-IIM investigates the resulting international trade inoperability for all industry sectors resulting from disruptions to a major port of entry. Similar to traditional IIM analysis, the inoperability metrics that the IT-IIM provides can be used to prioritize economic sectors based on the losses they could potentially incur. The IT-IIM is used to analyze two types of direct perturbations: (1) the reduced capacity of ports of entry, including harbors and airports (e.g., a shutdown of any port of entry); and (2) restrictions on commercial goods that foreign countries trade with the base nation (e.g., embargo).
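
    The core calculation that the IT-IIM builds on is the demand-reduction IIM: sector inoperability q satisfies q = A*q + c*, so q = (I − A*)⁻¹ c*, where A* is the normalized interdependency matrix and c* the demand-side perturbation. The three-sector matrix below is a made-up illustration; the article's international-trade extension is not reproduced.

```python
# Core inoperability input-output calculation: q = (I - A*)^-1 c*.
# The 3-sector interdependency matrix and the 15% demand perturbation are
# illustrative, not values from the article.
import numpy as np

a_star = np.array([[0.0, 0.2, 0.1],     # ports / transport
                   [0.3, 0.0, 0.2],     # manufacturing
                   [0.1, 0.3, 0.0]])    # retail / wholesale

c_star = np.array([0.15, 0.0, 0.0])     # demand perturbation hitting sector 1

q = np.linalg.solve(np.eye(3) - a_star, c_star)
print(np.round(q, 4))                   # resulting inoperability of each sector
```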

  10. Robust Real-Time Wide-Area Differential GPS Navigation

    NASA Technical Reports Server (NTRS)

    Yunck, Thomas P. (Inventor); Bertiger, William I. (Inventor); Lichten, Stephen M. (Inventor); Mannucci, Anthony J. (Inventor); Muellerschoen, Ronald J. (Inventor); Wu, Sien-Chong (Inventor)

    1998-01-01

    The present invention provides a method and a device for providing superior differential GPS positioning data. The system includes a group of GPS receiving ground stations covering a wide area of the Earth's surface. Unlike other differential GPS systems, wherein the known position of each ground station is used to geometrically compute an ephemeris for each GPS satellite, the present system utilizes real-time computation of satellite orbits based on GPS data received from fixed ground stations through a Kalman-type filter/smoother whose output adjusts a real-time orbital model. The orbital model produces and outputs orbital corrections allowing satellite ephemerides to be known with considerably greater accuracy than from the GPS system broadcasts. The modeled orbits are propagated ahead in time and differenced with actual pseudorange data to compute clock offsets at rapid intervals to compensate for SA clock dither. The orbital and clock calculations are based on dual frequency GPS data which allow computation of estimated signal delay at each ionospheric point. These delay data are used in real time to construct and update an ionospheric shell map of total electron content which is output as part of the orbital correction data, thereby allowing single frequency users to estimate ionospheric delay with an accuracy approaching that of dual frequency users.
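
    The dual-frequency delay estimation the text alludes to rests on the textbook relation that ionospheric group delay scales as 1/f², so the difference of the L1 and L2 pseudoranges yields the slant delay on L1. The sketch below uses the standard GPS frequencies and hypothetical pseudoranges; it is the generic relation, not the patent's specific processing chain.

```python
# Textbook dual-frequency ionospheric delay estimate: delay on L1 derived
# from the L1/L2 pseudorange difference. Inputs are hypothetical.
F_L1 = 1575.42e6   # Hz
F_L2 = 1227.60e6   # Hz

def iono_delay_l1(p1_m, p2_m):
    """Slant ionospheric delay on L1 (meters) from L1/L2 pseudoranges (meters)."""
    return (F_L2**2 / (F_L1**2 - F_L2**2)) * (p2_m - p1_m)

# Pseudoranges differing by 6.5 m due to ionospheric dispersion.
print(f"{iono_delay_l1(22_000_000.0, 22_000_006.5):.2f} m of L1 ionospheric delay")
```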

  11. Temperature performance analysis of intersubband Raman laser in quantum cascade structures

    NASA Astrophysics Data System (ADS)

    Yousefvand, Hossein Reza

    2017-06-01

    In this paper we investigate the effects of temperature on the output characteristics of an intersubband Raman laser (RL) that is integrated monolithically with a quantum cascade (QC) laser as an intracavity optical pump. The laser bandstructure is calculated by a self-consistent solution of the Schrodinger-Poisson equations, and the employed physical model of carrier transport is based on a five-level description of carrier scattering rates: two-level rate equations for the pump laser and three-level scattering rates to include the stimulated Raman process in the RL. The temperature dependence of the relevant physical effects, such as thermal broadening of the intersubband transitions (ISTs), thermally activated phonon emission lifetimes, and thermal backfilling of the final lasing state of the Raman process from the injector, is included in the model. Using the presented model, the steady-state, small-signal modulation response and transient device characteristics are investigated for a range of sink temperatures (80-220 K). It is found that the main characteristics of the device, such as output power, threshold current, Raman modal gain, turn-on delay time and 3-dB optical bandwidth, are markedly affected by temperature.

  12. Advanced single permanent magnet axipolar ironless stator ac motor for electric passenger vehicles

    NASA Technical Reports Server (NTRS)

    Beauchamp, E. D.; Hadfield, J. R.; Wuertz, K. L.

    1983-01-01

    A program was conducted to design and develop an advanced-concept motor specifically created for propulsion of electric vehicles with increased range, reduced energy consumption, and reduced life-cycle costs in comparison with conventional systems. The motor developed is a brushless, dc, rare-earth cobalt, permanent magnet, axial air gap inductor machine that uses an ironless stator. Air cooling is inherently provided by the centrifugal-fan action of the rotor poles. An extensive design phase was conducted, which included analysis of the system performance versus the SAE J227a(D) driving cycle. A proof-of-principle model was developed and tested, and a functional model was developed and tested. Full generator-level testing was conducted on the functional model, recording electromagnetic, thermal, aerodynamic, and acoustic noise data. The machine demonstrated 20.3 kW output at 1466 rad/s and 160 dc. The novel ironless stator demonstrated the capability to continuously operate at peak current. The projected system performance based on the use of a transistor inverter is 23.6 kW output power at 1466 rad/s and 83.3 percent efficiency. Design areas of concern regarding electric vehicle applications include the inherently high windage loss and rotor inertia.

  13. Modeling of a Ne/Xe dielectric barrier discharge excilamp for improvement of VUV radiation production

    NASA Astrophysics Data System (ADS)

    Khodja, K.; Belasri, A.; Loukil, H.

    2017-08-01

    This work is devoted to excimer lamp efficiency optimization by using a homogeneous discharge model of a dielectric barrier discharge in a Ne-Xe mixture. The model includes the plasma chemistry, electrical circuit, and Boltzmann equation. In this paper, we are particularly interested in the electrical and kinetic properties and light output generated by the DBD. Xenon is chosen for its high luminescence in the range of vacuum UV radiation around 173 nm. Our study is motivated by interest in this type of discharge in many industrial applications, including the achievement of high light output lamps. In this work, we used an applied sinusoidal voltage, frequency, gas pressure, and concentration in the ranges of 2-8 kV, 10-200 kHz, 100-800 Torr, and 10-50%, respectively. The analyzed results concern the gap voltage V_p, the dielectric voltage V_d, the discharge current I, and the particle densities. We also investigated the effect of the electric parameters and xenon concentration on the lamp efficiency. This investigation will allow one to identify the appropriate parameters for Ne/Xe DBD excilamps to improve their efficiency.

  14. Charge control microcomputer device for vehicles

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morishita, M.; Kouge, S.

    1986-10-14

    This patent describes a charge control microcomputer device for a vehicle, comprising: speed changing means for transmitting the output torque of an engine. The speed changing means includes a slip clutch means having an output with a variable slippage amount with respect to its input and controlled in accordance with an operating instruction. The speed changing means further includes a speed change gear for changing the rotational speed input thereto at an output thereof, the speed change gear receiving the output of the slip clutch means; a charging generator driven by the output of the speed change gear; a battery charged by an output voltage of the charging generator; a voltage regulator for controlling the output voltage of the charging generator to a predetermined value; an engine controlling microcomputer for receiving data from the engine, to control the engine, the engine data comprising at least an engine speed signal; a charge control microcomputer for processing engine data from the engine controlling microcomputer and charge system data including terminal voltage data from the battery and generated voltage data from the charging generator; and a display unit for displaying detection data, including fault detection data, from the charge control microcomputer.

  15. Inverter ratio failure detector

    NASA Technical Reports Server (NTRS)

    Wagner, A. P.; Ebersole, T. J.; Andrews, R. E. (Inventor)

    1974-01-01

    A failure detector which detects the failure of a dc to ac inverter is disclosed. The inverter under failureless conditions is characterized by a known linear relationship of its input and output voltages and by a known linear relationship of its input and output currents. The detector includes circuitry which is responsive to the detector's input and output voltages and which provides a failure-indicating signal only when the monitored output voltage is less, by a selected factor, than the expected output voltage for the monitored input voltage, based on the known voltages' relationship. Similarly, the detector includes circuitry which is responsive to the input and output currents and provides a failure-indicating signal only when the input current exceeds, by a selected factor, the expected input current for the monitored output current, based on the known currents' relationship.
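
    The voltage check described reduces to comparing the measured output against the output expected from the known linear input/output relationship, flagging a failure only when the shortfall exceeds a selected factor. The sketch below shows this logic; the slope, offset and tolerance factor are illustrative values, not the patent's.

```python
# Sketch of the voltage-ratio failure check described above. Slope, offset and
# tolerance factor are illustrative assumptions.
def expected_output_voltage(v_in, slope=4.0, offset=0.0):
    return slope * v_in + offset            # known (assumed) linear relationship

def voltage_failure(v_in, v_out_measured, factor=0.85):
    return v_out_measured < factor * expected_output_voltage(v_in)

print(voltage_failure(28.0, 110.0))   # False: within tolerance of the expected 112 V
print(voltage_failure(28.0, 80.0))    # True: output sagging well below expectation
```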

  16. The effect of output-input isolation on the scaling and energy consumption of all-spin logic devices

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hu, Jiaxi; Haratipour, Nazila; Koester, Steven J., E-mail: skoester@umn.edu

    All-spin logic (ASL) is a novel approach for digital logic applications wherein spin is used as the state variable instead of charge. One of the challenges in realizing a practical ASL system is the need to ensure non-reciprocity, meaning that information flows from input to output, not vice versa. One approach described previously is to introduce an asymmetric ground contact; while this approach was shown to be effective, it remains unclear what the optimal approach is for achieving non-reciprocity in ASL. In this study, we quantitatively analyze techniques to achieve non-reciprocity in ASL devices, and we specifically compare the effect of using an asymmetric ground position and dipole-coupled output/input isolation. For this analysis, we simulate the switching dynamics of multiple-stage logic devices with FePt and FePd perpendicular magnetic anisotropy materials using a combination of a matrix-based spin circuit model coupled to the Landau–Lifshitz–Gilbert equation. The dipole field is included in this model and can act as both a desirable means of coupling magnets and a source of noise. The dynamic energy consumption has been calculated for these schemes, as a function of input/output magnet separation, and the results show that using a scheme that electrically isolates logic stages produces superior non-reciprocity, thus allowing both improved scaling and reduced energy consumption.

  17. A Computational Model of Torque Generation: Neural, Contractile, Metabolic and Musculoskeletal Components

    PubMed Central

    Callahan, Damien M.; Umberger, Brian R.; Kent-Braun, Jane A.

    2013-01-01

    The pathway of voluntary joint torque production includes motor neuron recruitment and rate-coding, sarcolemmal depolarization and calcium release by the sarcoplasmic reticulum, force generation by motor proteins within skeletal muscle, and force transmission by tendon across the joint. The direct source of energetic support for this process is ATP hydrolysis. It is possible to examine portions of this physiologic pathway using various in vivo and in vitro techniques, but an integrated view of the multiple processes that ultimately impact joint torque remains elusive. To address this gap, we present a comprehensive computational model of the combined neuromuscular and musculoskeletal systems that includes novel components related to intracellular bioenergetics function. Components representing excitatory drive, muscle activation, force generation, metabolic perturbations, and torque production during voluntary human ankle dorsiflexion were constructed, using a combination of experimentally-derived data and literature values. Simulation results were validated by comparison with torque and metabolic data obtained in vivo. The model successfully predicted peak and submaximal voluntary and electrically-elicited torque output, and accurately simulated the metabolic perturbations associated with voluntary contractions. This novel, comprehensive model could be used to better understand impact of global effectors such as age and disease on various components of the neuromuscular system, and ultimately, voluntary torque output. PMID:23405245

  18. Midbrain stimulation-evoked lumbar spinal activity in the adult decerebrate mouse.

    PubMed

    Stecina, Katinka

    2017-08-15

    Genetic techniques, which have rendered murine models a popular choice for neuroscience research, have led to important insights into neural networks controlling locomotor function. Using genetically altered mouse models for in vivo electrophysiological studies in the adult state could validate key principles of locomotor network organization that have been described in neonatal, in vitro preparations. The experimental model presented here describes a decerebrate, in vivo adult mouse preparation in which focal electrical midbrain stimulation was combined with monitoring of lumbar neural activity and motor output after pre-collicular decerebration and neuromuscular blockade. Lumbar cord dorsum potentials (in 9/10 animals) and motoneuron output (in 3/5 animals), including fictive locomotion, were achieved by focal midbrain stimulation. The stimulation electrode locations could be reconstructed (in 6/7 animals), thereby allowing anatomical identification of the stimulated supraspinal regions. This preparation allows for concomitant recording or stimulation in the spinal cord and in the mid/hindbrain of adult mice. It differs from other methods used in the past with adult mice as it does not require pharmacological manipulation of neural excitability in order to generate motor output. Midbrain stimulation can consistently be used for inducing lumbar neural activity in adult mice under neuromuscular blockade. This model is suited for examination of brain-spinal connectivity and it may benefit a wide range of fields depending on the features of the genetically modified mouse models used in combination with the presented methods. Copyright © 2017 Elsevier B.V. All rights reserved.

  19. AITRAC: Augmented Interactive Transient Radiation Analysis by Computer. User's information manual

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1977-10-01

    AITRAC is a program designed for on-line, interactive, DC, and transient analysis of electronic circuits. The program solves linear and nonlinear simultaneous equations which characterize the mathematical models used to predict circuit response. The program features 100 external node--200 branch capability; conversational, free-format input language; built-in junction, FET, MOS, and switch models; sparse matrix algorithm with extended-precision H matrix and T vector calculations, for fast and accurate execution; linear transconductances: beta, GM, MU, ZM; accurate and fast radiation effects analysis; special interface for user-defined equations; selective control of multiple outputs; graphical outputs in wide and narrow formats; and on-line parameter modification capability. The user describes the problem by entering the circuit topology and part parameters. The program then automatically generates and solves the circuit equations, providing the user with printed or plotted output. The circuit topology and/or part values may then be changed by the user, and a new analysis requested. Circuit descriptions may be saved on disk files for storage and later use. The program contains built-in standard models for resistors, voltage and current sources, capacitors, inductors including mutual couplings, switches, junction diodes and transistors, FETs, and MOS devices. Nonstandard models may be constructed from standard models or by using the special equations interface. Time functions may be described by straight-line segments or by sine, damped sine, and exponential functions. 42 figures, 1 table. (RWR)

  20. Multistage Force Amplification of Piezoelectric Stacks

    NASA Technical Reports Server (NTRS)

    Xu, Tian-Bing (Inventor); Siochi, Emilie J. (Inventor); Zuo, Lei (Inventor); Jiang, Xiaoning (Inventor); Kang, Jin Ho (Inventor)

    2015-01-01

    Embodiments of the disclosure include an apparatus and methods for using a piezoelectric device that includes an outer flextensional casing, and a first cell and a last cell serially coupled to each other and coupled to the outer flextensional casing, such that each cell has a flextensional cell structure and each cell receives an input force and provides an output force that is amplified based on the input force. The apparatus further includes a piezoelectric stack coupled to each cell such that the piezoelectric stack of each cell provides piezoelectric energy based on the output force for each cell. Further, the last cell receives an input force that is the output force from the first cell, and the last cell provides an output apparatus force. In addition, the piezoelectric energy harvested is based on the output apparatus force. Moreover, the apparatus provides displacement based on the output apparatus force.
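
    The serial arrangement implies that the force reaching the piezoelectric stack of the last cell is the input force multiplied by the product of the per-cell amplification factors. The sketch below shows only this bookkeeping; the gains and the input force are made-up numbers, not values from the disclosure.

```python
# Minimal sketch of serial force amplification: each flextensional cell
# multiplies the force it receives by its own gain. Gains and input force
# are illustrative.
from functools import reduce

def output_apparatus_force(input_force_n, cell_gains):
    return reduce(lambda force, gain: force * gain, cell_gains, input_force_n)

gains = [3.5, 3.0]                          # first cell, last cell
print(output_apparatus_force(10.0, gains))  # 105.0 N delivered to the last stack
```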

  1. Structural Optimization for Wideband Flexoelectric Energy Harvester Using Bulk Paraelectric Ba0.6Sr0.4TiO3

    NASA Astrophysics Data System (ADS)

    Kumar, Anuruddh; Chauhan, Aditya; Vaish, Rahul; Kumar, Rajeev; Jain, Satish Chandra

    2018-01-01

    Flexoelectricity is a phenomenon which allows all crystalline dielectric materials to exhibit strain-induced polarization. With recent articles reporting giant flexoelectric coupling coefficients for various ferroelectric materials, this field must be duly investigated for its application merits. In this study, a wide-band linear energy harvesting device has been proposed using Ba0.6Sr0.4TiO3 ceramic. Both structural and material parameters were scrutinized for an optimized approach. Dynamic analysis was performed using finite element modeling to evaluate several important parameters including beam deflection, open circuit voltage and net power output. It was revealed that open circuit voltage and net power output lack correlation. Further, power output lacks a dependency on optimized width ratios, with the highest power output of 0.07 μW being observed for a width ratio of 0.33 closely followed by ratios of 0.2 and 0.5 (˜0.07 μW) each. The resulting power was generated at discrete (resonant) frequencies lacking a broadband structure. A compound design with integrated beams was proposed to overcome this drawback. The finalized design is capable of a maximum power output of >0.04 μW with an operational frequency of 90-110 Hz, thus allowing for a higher power output in a broader frequency range.

  2. Effects of climate change on the economic output of the Longjing-43 tea tree, 1972-2013.

    PubMed

    Lou, Weiping; Sun, Shanlei; Wu, Lihong; Sun, Ke

    2015-05-01

    Based on established phenological and economic output models and meteorological data from 1972 to 2013, changes in the phenology, frost risk, and economic output of the Longjing-43 tea tree in the Yuezhou Longjing tea production area of China were evaluated. As the local climate has changed, the beginning dates of tea bud and leaf plucking of this cultivar in all five counties studied have advanced significantly, by -1.28 to -0.88 days/decade, with no significant change in the risk of frost. The main tea-producing stages in the tea production cycle include the plucking periods for superfine, grade 1, and grade 2 buds and leaves. Among the five bud and leaf grades, the economic output of the plucking periods for superfine and grade 1 decreased significantly, that for grade 2 showed no significant change, and those for grades 3 and 4 increased significantly. The economic output of large-area tea plantations employing an average of 45 workers per hectare and producing superfine to grade 2 buds and leaves was significantly reduced, by 6,745-8,829 yuan/decade/ha, depending on the county. Those tea farmers who planted tea trees on their own small land holdings and produced superfine to grade 4 tea buds and leaves themselves experienced no significant decline in economic output.

  3. DC to DC power converters and methods of controlling the same

    DOEpatents

    Steigerwald, Robert Louis; Elasser, Ahmed; Sabate, Juan Antonio; Todorovic, Maja Harfman; Agamy, Mohammed

    2012-12-11

    A power generation system configured to provide direct current (DC) power to a DC link is described. The system includes a first power generation unit configured to output DC power. The system also includes a first DC to DC converter comprising an input section and an output section. The output section of the first DC to DC converter is coupled in series with the first power generation unit. The first DC to DC converter is configured to process a first portion of the DC power output by the first power generation unit and to provide an unprocessed second portion of the DC power output of the first power generation unit to the output section.

  4. Flatness-based model inverse for feed-forward braking control

    NASA Astrophysics Data System (ADS)

    de Vries, Edwin; Fehn, Achim; Rixen, Daniel

    2010-12-01

    For modern cars an increasing number of driver assistance systems have been developed. Some of these systems interfere/assist with the braking of a car. Here, a brake actuation algorithm for each individual wheel that can respond to both driver inputs and artificial vehicle deceleration set points is developed. The algorithm consists of a feed-forward control that ensures, within the modelled system plant, the optimal behaviour of the vehicle. For the quarter-car model with a LuGre tyre behavioural model, an inverse model can be derived using v_x as the 'flat output', that is, the input for the inverse model. A number of time derivatives of the flat output are required to calculate the model input, brake torque. Polynomial trajectory planning provides the needed time derivatives of the deceleration request. The transition time of the planning can be adjusted to meet actuator constraints. It is shown that the output of the trajectory planning would ripple and introduce a time delay when a gradual continuous increase of deceleration is requested by the driver. Derivative filters are then considered: the Bessel filter provides the best symmetry in its step response. A filter of the same order and with negative real poles is also used, exhibiting neither overshoot nor ringing. For these reasons, the 'real-poles' filter would be preferred over the Bessel filter. The half-car model can be used to predict the change in normal load on the front and rear axle due to the pitching of the vehicle. The anticipated dynamic variation of the wheel load can be included in the inverse model, even though it is based on a quarter-car. Brake force distribution proportional to normal load is established. It provides more natural and simpler equations than a fixed force ratio strategy.
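
    The polynomial trajectory planning step can be illustrated with a quintic blend that moves the deceleration request from its current value to the new set point over a chosen transition time, with zero first and second derivatives at both ends, so the derivatives required by the flatness-based inverse model are smooth and available in closed form. The polynomial order and transition time below are illustrative choices, not the paper's.

```python
# Quintic trajectory planning sketch: deceleration request and its first two
# time derivatives for a smooth transition from a0 to a1 over time T.
import numpy as np

def quintic_transition(a0, a1, T, t):
    """Return request, first and second time derivative at time t."""
    tau = np.clip(t / T, 0.0, 1.0)
    s = 10 * tau**3 - 15 * tau**4 + 6 * tau**5        # s(0)=0, s(1)=1, s',s''=0 at ends
    ds = (30 * tau**2 - 60 * tau**3 + 30 * tau**4) / T
    dds = (60 * tau - 180 * tau**2 + 120 * tau**3) / T**2
    delta = a1 - a0
    return a0 + delta * s, delta * ds, delta * dds

t = np.linspace(0.0, 0.5, 6)
a, da, dda = quintic_transition(a0=0.0, a1=-5.0, T=0.4, t=t)  # ramp to -5 m/s^2 in 0.4 s
print(np.round(a, 3))
```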

  5. Modelling the distribution of chickens, ducks, and geese in China

    USGS Publications Warehouse

    Prosser, Diann J.; Wu, Junxi; Ellis, Erle C.; Gale, Fred; Van Boeckel, Thomas P.; Wint, William; Robinson, Tim; Xiao, Xiangming; Gilbert, Marius

    2011-01-01

    Global concerns over the emergence of zoonotic pandemics emphasize the need for high-resolution population distribution mapping and spatial modelling. Ongoing efforts to model disease risk in China have been hindered by a lack of available species level distribution maps for poultry. The goal of this study was to develop 1 km resolution population density models for China's chickens, ducks, and geese. We used an information theoretic approach to predict poultry densities based on statistical relationships between poultry census data and high-resolution agro-ecological predictor variables. Model predictions were validated by comparing goodness of fit measures (root mean square error and correlation coefficient) for observed and predicted values for 1/4 of the sample data which were not used for model training. Final output included mean and coefficient of variation maps for each species. We tested the quality of models produced using three predictor datasets and 4 regional stratification methods. For predictor variables, a combination of traditional predictors for livestock mapping and land use predictors produced the best goodness of fit scores. Comparison of regional stratifications indicated that for chickens and ducks, a stratification based on livestock production systems produced the best results; for geese, an agro-ecological stratification produced best results. However, for all species, each method of regional stratification produced significantly better goodness of fit scores than the global model. Here we provide descriptive methods, analytical comparisons, and model output for China's first high resolution, species level poultry distribution maps. Output will be made available to the scientific and public community for use in a wide range of applications from epidemiological studies to livestock policy and management initiatives.
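
    The holdout validation described amounts to computing root mean square error and a correlation coefficient between observed and predicted densities on the quarter of the samples withheld from training. The arrays in the sketch below are synthetic stand-ins for the census and model-predicted densities.

```python
# Holdout validation sketch: RMSE and correlation between observed and
# predicted poultry densities on a 25% test sample. Data are synthetic.
import numpy as np

rng = np.random.default_rng(7)
observed = rng.gamma(shape=2.0, scale=300.0, size=1000)       # birds per km^2
predicted = observed * rng.normal(1.0, 0.3, size=1000)        # imperfect model

holdout = rng.choice(observed.size, size=observed.size // 4, replace=False)
obs, pred = observed[holdout], predicted[holdout]

rmse = np.sqrt(np.mean((obs - pred) ** 2))
corr = np.corrcoef(obs, pred)[0, 1]
print(f"holdout RMSE = {rmse:.1f} birds/km^2, r = {corr:.2f}")
```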

  6. Estimation and Modelling of Land Surface Temperature Using Landsat 7 ETM+ Images and Fuzzy System Techniques

    NASA Astrophysics Data System (ADS)

    Bisht, K.; Dodamani, S. S.

    2016-12-01

    Modelling of Land Surface Temperature (LST) is essential for short-term and long-term environmental studies and for management of the Earth's resources. The objective of this research is to estimate and model LST. For this purpose, Landsat 7 ETM+ images from 2007 to 2012 were used for retrieving LST and processed in MATLAB using a Mamdani fuzzy inference system (MFIS), with pre-monsoon and post-monsoon LST included in the fuzzy model. Mangalore City in Karnataka state, India, was chosen as the study area. The retrieved pre-monsoon and post-monsoon temperatures were taken as the fuzzy model inputs, and LST was chosen as the output. To develop the fuzzy model for LST, seven fuzzy subsets, nineteen rules and one output were considered for the estimation of weekly mean air temperature. The subsets are very low (VL), low (L), medium low (ML), medium (M), medium high (MH), high (H) and very high (VH). The TVX (surface temperature vegetation index) approach and an empirical method provided the estimated LST. The study showed that the fuzzy model M4/7-19-1 (model 4, 7 fuzzy sets, 19 rules and 1 output), developed over Mangalore City, provided more accurate results than the other models (M1, M2, M3, M5). The results of this research were evaluated with statistical measures: the best correlation coefficient (R) and root mean squared error (RMSE) between estimated and measured values were 0.966 and 1.607 K for pre-monsoon LST and 0.963 and 1.623 K for post-monsoon LST, respectively.

  7. Modelling the distribution of chickens, ducks, and geese in China

    PubMed Central

    Prosser, Diann J.; Wu, Junxi; Ellis, Erle C.; Gale, Fred; Van Boeckel, Thomas P.; Wint, William; Robinson, Tim; Xiao, Xiangming; Gilbert, Marius

    2011-01-01

    Global concerns over the emergence of zoonotic pandemics emphasize the need for high-resolution population distribution mapping and spatial modelling. Ongoing efforts to model disease risk in China have been hindered by a lack of available species level distribution maps for poultry. The goal of this study was to develop 1 km resolution population density models for China’s chickens, ducks, and geese. We used an information theoretic approach to predict poultry densities based on statistical relationships between poultry census data and high-resolution agro-ecological predictor variables. Model predictions were validated by comparing goodness of fit measures (root mean square error and correlation coefficient) for observed and predicted values for ¼ of the sample data which was not used for model training. Final output included mean and coefficient of variation maps for each species. We tested the quality of models produced using three predictor datasets and 4 regional stratification methods. For predictor variables, a combination of traditional predictors for livestock mapping and land use predictors produced the best goodness of fit scores. Comparison of regional stratifications indicated that for chickens and ducks, a stratification based on livestock production systems produced the best results; for geese, an agro-ecological stratification produced best results. However, for all species, each method of regional stratification produced significantly better goodness of fit scores than the global model. Here we provide descriptive methods, analytical comparisons, and model output for China’s first high resolution, species level poultry distribution maps. Output will be made available to the scientific and public community for use in a wide range of applications from epidemiological studies to livestock policy and management initiatives. PMID:21765567

  8. Embedded Model Error Representation and Propagation in Climate Models

    NASA Astrophysics Data System (ADS)

    Sargsyan, K.; Ricciuto, D. M.; Safta, C.; Thornton, P. E.

    2017-12-01

    Over the last decade, parametric uncertainty quantification (UQ) methods have reached a level of maturity, while the same cannot be said about representation and quantification of structural or model errors. Lack of characterization of model errors, induced by physical assumptions, phenomenological parameterizations or constitutive laws, is a major handicap in predictive science. In climate models, in particular, significant computational resources are dedicated to model calibration without gaining improvement in predictive skill. Neglecting model errors during calibration/tuning will lead to overconfident and biased model parameters. At the same time, the most advanced methods accounting for model error merely correct output biases, augmenting model outputs with statistical error terms that can potentially violate physical laws, or make the calibrated model ineffective for extrapolative scenarios. This work will overview a principled path for representing and quantifying model errors, as well as propagating them together with the rest of the predictive uncertainty budget, including data noise, parametric uncertainties and surrogate-related errors. Namely, the model error terms will be embedded in select model components rather than as external corrections. Such embedding ensures consistency with physical constraints on model predictions, and renders calibrated model predictions meaningful and robust with respect to model errors. Besides, in the presence of observational data, the approach can effectively differentiate model structural deficiencies from those of data acquisition. The methodology is implemented in UQ Toolkit (www.sandia.gov/uqtoolkit), relying on a host of available forward and inverse UQ tools. We will demonstrate the application of the technique on a few applications of interest, including ACME Land Model calibration via a wide range of measurements obtained at select sites.

  9. Programmable remapper for image processing

    NASA Technical Reports Server (NTRS)

    Juday, Richard D. (Inventor); Sampsell, Jeffrey B. (Inventor)

    1991-01-01

    A video-rate coordinate remapper includes a memory for storing a plurality of transformations on look-up tables for remapping input images from one coordinate system to another. Such transformations are operator selectable. The remapper includes a collective processor by which certain input pixels of an input image are transformed to a portion of the output image in a many-to-one relationship. The remapper includes an interpolative processor by which the remaining input pixels of the input image are transformed to another portion of the output image in a one-to-many relationship. The invention includes certain specific transforms for creating output images useful for certain defects of visually impaired people. The invention also includes means for shifting input pixels and means for scrolling the output matrix.

  10. Comparative study of DPAL and XPAL systems and selection principal of parameters

    NASA Astrophysics Data System (ADS)

    Huang, Wei; Tan, Rongqing; Li, Zhiyong; Han, Gaoce; Li, Hui

    2016-10-01

    A theoretical model based on a common pump structure is proposed to analyze the laser output characteristics of the DPAL (diode pumped alkali vapor laser) and the XPAL (exciplex pumped alkali laser). The model predicts that an optical-to-optical efficiency approaching 80% can be achieved for continuous-wave four- and five-level XPAL systems under broadband pumping whose linewidth is several times that used for DPAL. Operating parameters including pump intensity, temperature, cell length, mixed gas concentration, pump linewidth and output mirror reflectivity are analyzed for the DPAL and XPAL systems based on the kinetic model. The results show better performance for the Cs-Ar XPAL, which requires relatively high Ar concentration, high pump intensity and high temperature; for the Cs DPAL, lower temperature and lower pump intensity are required. In addition, predictions of the selection principle for temperature and cell length are presented. The concept of an equivalent "alkali areal density", defined as the product of the alkali density and the cell length, is proposed. The results show that the output characteristics of DPAL (or XPAL) systems with the same alkali areal density but different temperatures are essentially identical; it is the areal density that directly reflects the potential of DPAL or XPAL systems. A more detailed analysis of the similar influence of cavity parameters at the same areal density is also presented, along with results for continuous-wave DPAL and XPAL performance as a function of pump linewidth and mixed gas pressure and an analysis of the influence of the output coupler.

  11. Gaussian Process Regression (GPR) Representation in Predictive Model Markup Language (PMML)

    PubMed Central

    Lechevalier, D.; Ak, R.; Ferguson, M.; Law, K. H.; Lee, Y.-T. T.; Rachuri, S.

    2017-01-01

    This paper describes Gaussian process regression (GPR) models presented in the Predictive Model Markup Language (PMML). PMML is an Extensible Markup Language (XML)-based standard language used to represent data-mining and predictive analytic models, as well as pre- and post-processed data. The previous PMML version, PMML 4.2, did not provide capabilities for representing probabilistic (stochastic) machine-learning algorithms that are widely used for constructing predictive models taking the associated uncertainties into consideration. The newly released PMML version 4.3, which includes the GPR model, provides new features: confidence bounds and distributions for the predictive estimations. Both features are needed to establish the foundation for uncertainty quantification analysis. Among various probabilistic machine-learning algorithms, GPR has been widely used for approximating a target function because of its capability of representing complex input and output relationships without predefining a set of basis functions, and of predicting a target output with uncertainty quantification. GPR is being employed in various manufacturing data-analytics applications, which necessitates representing this model in a standardized form for easy and rapid deployment. In this paper, we present a GPR model and its representation in PMML. Furthermore, we demonstrate a prototype using a real data set in the manufacturing domain. PMID:29202125
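
    As a rough illustration of the two quantities the PMML 4.3 GPR representation is meant to carry (a mean prediction and its confidence bounds), the sketch below fits a GPR to toy data with scikit-learn; the use of scikit-learn and the kernel choice are assumptions of convenience here and are not part of PMML itself.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(2)

# Toy manufacturing-style data: a smooth response with measurement noise
X = rng.uniform(0.0, 10.0, size=(40, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=40)

# RBF kernel plus a white-noise term; hyperparameters are optimized during fit
gpr = GaussianProcessRegressor(kernel=RBF(length_scale=1.0) + WhiteKernel(1e-2),
                               normalize_y=True)
gpr.fit(X, y)

# Mean prediction and standard deviation: the two pieces of information the
# PMML GPR representation is designed to exchange between tools
X_new = np.linspace(0.0, 10.0, 5).reshape(-1, 1)
mean, std = gpr.predict(X_new, return_std=True)
for x, m, s in zip(X_new[:, 0], mean, std):
    print(f"x={x:4.1f}  mean={m:+.3f}  95% bound=±{1.96 * s:.3f}")
```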

  12. Gaussian Process Regression (GPR) Representation in Predictive Model Markup Language (PMML).

    PubMed

    Park, J; Lechevalier, D; Ak, R; Ferguson, M; Law, K H; Lee, Y-T T; Rachuri, S

    2017-01-01

    This paper describes Gaussian process regression (GPR) models presented in the Predictive Model Markup Language (PMML). PMML is an Extensible Markup Language (XML)-based standard language used to represent data-mining and predictive analytic models, as well as pre- and post-processed data. The previous PMML version, PMML 4.2, did not provide capabilities for representing probabilistic (stochastic) machine-learning algorithms that are widely used for constructing predictive models taking the associated uncertainties into consideration. The newly released PMML version 4.3, which includes the GPR model, provides new features: confidence bounds and distributions for the predictive estimations. Both features are needed to establish the foundation for uncertainty quantification analysis. Among various probabilistic machine-learning algorithms, GPR has been widely used for approximating a target function because of its capability of representing complex input and output relationships without predefining a set of basis functions, and of predicting a target output with uncertainty quantification. GPR is being employed in various manufacturing data-analytics applications, which necessitates representing this model in a standardized form for easy and rapid deployment. In this paper, we present a GPR model and its representation in PMML. Furthermore, we demonstrate a prototype using a real data set in the manufacturing domain.

  13. Evaluation of concentrated space solar arrays using computer modeling. [for spacecraft propulsion and power supplies

    NASA Technical Reports Server (NTRS)

    Rockey, D. E.

    1979-01-01

    A general approach is developed for predicting the power output of a concentrator enhanced photovoltaic space array. A ray trace routine determines the concentrator intensity arriving at each solar cell. An iterative calculation determines the cell's operating temperature since cell temperature and cell efficiency are functions of one another. The end result of the iterative calculation is that the individual cell's power output is determined as a function of temperature and intensity. Circuit output is predicted by combining the individual cell outputs using the single diode model of a solar cell. Concentrated array characteristics such as uniformity of intensity and operating temperature at various points across the array are examined using computer modeling techniques. An illustrative example is given showing how the output of an array can be enhanced using solar concentration techniques.
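
    A minimal sketch of the coupled temperature-efficiency iteration described above, with invented coefficients standing in for the ray-trace intensity, the thermal balance, and the cell characteristics; it is not the paper's actual model.

```python
import numpy as np

def cell_efficiency(temp_c):
    """Illustrative linear efficiency model: efficiency drops with temperature."""
    eta_ref, t_ref, coeff = 0.14, 28.0, 0.0005   # assumed coefficients
    return eta_ref - coeff * (temp_c - t_ref)

def operating_temp(intensity, efficiency):
    """Illustrative energy balance: absorbed minus converted power heats the cell
    above a reference temperature in proportion to the incident intensity."""
    alpha = 0.9                                   # assumed absorptance
    return 28.0 + 80.0 * intensity * (alpha - efficiency)

def solve_cell(intensity, tol=1e-4, max_iter=100):
    """Iterate because efficiency and temperature are functions of one another."""
    temp = 60.0
    for _ in range(max_iter):
        eta = cell_efficiency(temp)
        new_temp = operating_temp(intensity, eta)
        if abs(new_temp - temp) < tol:
            break
        temp = new_temp
    power = eta * intensity * 1353.0 * 4e-4       # W, for a 2x2 cm cell at AM0
    return temp, eta, power

for conc in (1.0, 1.5, 2.0):                      # concentration ratio from the ray trace
    t, eta, p = solve_cell(conc)
    print(f"concentration {conc:.1f}: T={t:5.1f} C  eta={eta:.3f}  P={p:.3f} W")
```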

  14. Continuum modeling of catastrophic collisions

    NASA Technical Reports Server (NTRS)

    Ryan, Eileen V.; Asphaug, Erik; Melosh, H. J.

    1991-01-01

    A two dimensional hydrocode based on 2-D SALE was modified to include strength effects and fragmentation equations for fracture resulting from tensile stress in one dimension. Output from this code includes a complete fragmentation summary for each cell of the modeled object: fragment size (mass) distribution, vector velocities of particles, peak values of pressure and tensile stress, and peak strain rates associated with fragmentation. Contour plots showing pressure and temperature at given times within the object are also produced. By invoking axial symmetry, three dimensional events can be modeled such as zero impact parameter collisions between asteroids. The code was tested against the one dimensional model and the analytical solution for a linearly increasing tensile stress under constant strain rate.

  15. A Python Calculator for Supernova Remnant Evolution

    NASA Astrophysics Data System (ADS)

    Leahy, D. A.; Williams, J. E.

    2017-05-01

    A freely available Python code for modeling supernova remnant (SNR) evolution has been created. This software is intended for two purposes: to understand SNR evolution and to use in modeling observations of SNR for obtaining good estimates of SNR properties. It includes all phases for the standard path of evolution for spherically symmetric SNRs. In addition, alternate evolutionary models are available, including evolution in a cloudy ISM, the fractional energy-loss model, and evolution in a hot low-density ISM. The graphical interface takes in various parameters and produces outputs such as shock radius and velocity versus time, as well as SNR surface brightness profile and spectrum. Some interesting properties of SNR evolution are demonstrated using the program.
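
    For orientation, the snippet below computes shock radius and velocity in the adiabatic (Sedov-Taylor) phase only, using the standard self-similar solution; the actual package covers all evolutionary phases and several alternate models.

```python
import numpy as np

# Physical constants (cgs)
PC = 3.086e18          # cm per parsec
YR = 3.156e7           # seconds per year
M_H = 1.67e-24         # hydrogen mass, g

def sedov_taylor(t_yr, e51=1.0, n0=1.0):
    """Shock radius (pc) and velocity (km/s) in the adiabatic Sedov-Taylor phase.

    t_yr: age in years; e51: explosion energy in units of 1e51 erg;
    n0: ambient hydrogen number density in cm^-3.
    """
    xi = 2.026                         # Sedov constant for gamma = 5/3
    E = e51 * 1e51
    rho = 1.4 * M_H * n0               # include helium by mass
    t = t_yr * YR
    r = (xi * E * t**2 / rho) ** 0.2   # cm
    v = 2.0 * r / (5.0 * t)            # cm/s, since R ~ t^(2/5)
    return r / PC, v / 1e5

for age in (1e3, 1e4, 2e4):
    r_pc, v_kms = sedov_taylor(age)
    print(f"t = {age:7.0f} yr : R_shock = {r_pc:5.1f} pc, v_shock = {v_kms:6.0f} km/s")
```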

  16. STABCAR: A program for finding characteristic root systems having transcendental stability matrices

    NASA Technical Reports Server (NTRS)

    Adams, W. M., Jr.; Tiffany, S. H.; Newsom, J. R.; Peele, E. L.

    1984-01-01

    STABCAR can be used to determine the characteristic roots of flexible, actively controlled aircraft, including the effects of unsteady aerodynamics. A modal formulation and a transfer-matrix representation of the control system are employed. Operable in either a batch or an interactive mode, STABCAR can provide graphical or tabular output of the variation of the roots with velocity, density, altitude, dynamic pressure or feedback gains. Herein the mathematical model, program structure, input requirements, output capabilities, and a series of sample cases are detailed. STABCAR was written for use on CDC CYBER 175 equipment; modification would be required for operation on other machines.

  17. Computer input and output files associated with ground-water-flow simulations of the Albuquerque Basin, central New Mexico, 1901-94, with projections to 2020; (supplement one to U.S. Geological Survey Water-resources investigations report 94-4251)

    USGS Publications Warehouse

    Kernodle, J.M.

    1996-01-01

    This report presents the computer input files required to run the three-dimensional ground-water-flow model of the Albuquerque Basin, central New Mexico, documented in Kernodle and others (Kernodle, J.M., McAda, D.P., and Thorn, C.R., 1995, Simulation of ground-water flow in the Albuquerque Basin, central New Mexico, 1901-1994, with projections to 2020: U.S. Geological Survey Water-Resources Investigations Report 94-4251, 114 p.). Output files resulting from the computer simulations are included for reference.

  18. Non-linear Membrane Properties in Entorhinal Cortical Stellate Cells Reduce Modulation of Input-Output Responses by Voltage Fluctuations

    PubMed Central

    Fernandez, Fernando R.; Malerba, Paola; White, John A.

    2015-01-01

    The presence of voltage fluctuations arising from synaptic activity is a critical component in models of gain control, neuronal output gating, and spike rate coding. The degree to which individual neuronal input-output functions are modulated by voltage fluctuations, however, is not well established across different cortical areas. Additionally, the extent and mechanisms of input-output modulation through fluctuations have been explored largely in simplified models of spike generation, and with limited consideration for the role of non-linear and voltage-dependent membrane properties. To address these issues, we studied fluctuation-based modulation of input-output responses in medial entorhinal cortical (MEC) stellate cells of rats, which express strong sub-threshold non-linear membrane properties. Using in vitro recordings, dynamic clamp and modeling, we show that the modulation of input-output responses by random voltage fluctuations in stellate cells is significantly limited. In stellate cells, a voltage-dependent increase in membrane resistance at sub-threshold voltages mediated by Na+ conductance activation limits the ability of fluctuations to elicit spikes. Similarly, in exponential leaky integrate-and-fire models using a shallow voltage-dependence for the exponential term that matches stellate cell membrane properties, a low degree of fluctuation-based modulation of input-output responses can be attained. These results demonstrate that fluctuation-based modulation of input-output responses is not a universal feature of neurons and can be significantly limited by subthreshold voltage-gated conductances. PMID:25909971

  19. Non-linear Membrane Properties in Entorhinal Cortical Stellate Cells Reduce Modulation of Input-Output Responses by Voltage Fluctuations.

    PubMed

    Fernandez, Fernando R; Malerba, Paola; White, John A

    2015-04-01

    The presence of voltage fluctuations arising from synaptic activity is a critical component in models of gain control, neuronal output gating, and spike rate coding. The degree to which individual neuronal input-output functions are modulated by voltage fluctuations, however, is not well established across different cortical areas. Additionally, the extent and mechanisms of input-output modulation through fluctuations have been explored largely in simplified models of spike generation, and with limited consideration for the role of non-linear and voltage-dependent membrane properties. To address these issues, we studied fluctuation-based modulation of input-output responses in medial entorhinal cortical (MEC) stellate cells of rats, which express strong sub-threshold non-linear membrane properties. Using in vitro recordings, dynamic clamp and modeling, we show that the modulation of input-output responses by random voltage fluctuations in stellate cells is significantly limited. In stellate cells, a voltage-dependent increase in membrane resistance at sub-threshold voltages mediated by Na+ conductance activation limits the ability of fluctuations to elicit spikes. Similarly, in exponential leaky integrate-and-fire models using a shallow voltage-dependence for the exponential term that matches stellate cell membrane properties, a low degree of fluctuation-based modulation of input-output responses can be attained. These results demonstrate that fluctuation-based modulation of input-output responses is not a universal feature of neurons and can be significantly limited by subthreshold voltage-gated conductances.
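
    A minimal exponential integrate-and-fire sketch in the spirit of the modelling described above: the neuron is held just below rheobase and driven by Gaussian voltage fluctuations, and the sharpness parameter Delta_T of the spike-initiation non-linearity is swept together with the fluctuation amplitude. All parameters are illustrative and are not fitted to stellate cells.

```python
import numpy as np

def eif_rate(delta_t_mv, noise_mv, t_stop=20.0, dt=1e-4, seed=3):
    """Exponential integrate-and-fire neuron held just below rheobase and driven by
    Gaussian voltage fluctuations; returns the fluctuation-driven firing rate (Hz)."""
    rng = np.random.default_rng(seed)
    g_L, C = 10e-9, 200e-12                        # leak conductance (S), capacitance (F)
    E_L, V_T, V_spike, V_reset = -70e-3, -50e-3, -20e-3, -60e-3
    tau = C / g_L
    delta_T = delta_t_mv * 1e-3                    # sharpness of spike initiation (V)
    I = 0.95 * g_L * (V_T - E_L - delta_T)         # 95% of the EIF rheobase current
    sigma = noise_mv * 1e-3 * np.sqrt(2.0 / tau)   # white-noise scale for ~noise_mv sd
    V, spikes = E_L, 0
    for xi in rng.normal(0.0, 1.0, int(t_stop / dt)):
        dV = (-g_L * (V - E_L) + g_L * delta_T * np.exp((V - V_T) / delta_T) + I) / C
        V += dt * dV + sigma * np.sqrt(dt) * xi
        if V >= V_spike:
            V, spikes = V_reset, spikes + 1
    return spikes / t_stop

# Sweep the spike-initiation sharpness (Delta_T) and the fluctuation amplitude: the
# sub-threshold non-linearity controls how effectively fluctuations elicit spikes.
for delta_t in (1.0, 5.0):
    rates = [eif_rate(delta_t, s) for s in (0.0, 1.0, 2.0)]
    print(f"Delta_T={delta_t} mV, rates for 0/1/2 mV noise: "
          + ", ".join(f"{r:.1f} Hz" for r in rates))
```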

  20. Emulation for probabilistic weather forecasting

    NASA Astrophysics Data System (ADS)

    Cornford, Dan; Barillec, Remi

    2010-05-01

    Numerical weather prediction models are typically very expensive to run due to their complexity and resolution. Characterising the sensitivity of the model to its initial condition and/or to its parameters requires numerous runs of the model, which is impractical for all but the simplest models. To produce probabilistic forecasts requires knowledge of the distribution of the model outputs, given the distribution over the inputs, where the inputs include the initial conditions, boundary conditions and model parameters. Such uncertainty analysis for complex weather prediction models seems a long way off, given current computing power, with ensembles providing only a partial answer. One possible way forward that we develop in this work is the use of statistical emulators. Emulators provide an efficient statistical approximation to the model (or simulator) while quantifying the uncertainty introduced. In the emulator framework, a Gaussian process is fitted to the simulator response as a function of the simulator inputs using some training data. The emulator is essentially an interpolator of the simulator output and the response in unobserved areas is dictated by the choice of covariance structure and parameters in the Gaussian process. Suitable parameters are inferred from the data in a maximum likelihood, or Bayesian framework. Once trained, the emulator allows operations such as sensitivity analysis or uncertainty analysis to be performed at a much lower computational cost. The efficiency of emulators can be further improved by exploiting the redundancy in the simulator output through appropriate dimension reduction techniques. We demonstrate this using both Principal Component Analysis on the model output and a new reduced-rank emulator in which an optimal linear projection operator is estimated jointly with other parameters, in the context of simple low order models, such as the Lorenz 40D system. We present the application of emulators to probabilistic weather forecasting, where the construction of the emulator training set replaces the traditional ensemble model runs. Thus the actual forecast distributions are computed using the emulator conditioned on the ‘ensemble runs' which are chosen to explore the plausible input space using relatively crude experimental design methods. One benefit here is that the ensemble does not need to be a sample from the true distribution of the input space, rather it should cover that input space in some sense. The probabilistic forecasts are computed using Monte Carlo methods sampling from the input distribution and using the emulator to produce the output distribution. Finally we discuss the limitations of this approach and briefly mention how we might use similar methods to learn the model error within a framework that incorporates a data assimilation like aspect, using emulators and learning complex model error representations. We suggest future directions for research in the area that will be necessary to apply the method to more realistic numerical weather prediction models.
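
    The toy sketch below shows the emulator workflow in miniature, with a cheap analytic function standing in for the weather model and scikit-learn's Gaussian process regressor as the emulator (an assumption of convenience): train on a small "ensemble" design, then produce a probabilistic forecast by Monte Carlo sampling of the input distribution through the emulator rather than the simulator.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(4)

def simulator(x):
    """Cheap stand-in for an expensive forecast model: maps two inputs
    (e.g. an initial-condition perturbation and a parameter) to one output."""
    return np.sin(3.0 * x[:, 0]) * np.exp(-x[:, 1]) + 0.5 * x[:, 1]

# 'Ensemble' design: cover the plausible input space rather than sample it
design = rng.uniform(0.0, 1.0, size=(30, 2))
runs = simulator(design)

# Train the emulator on the ensemble runs
emulator = GaussianProcessRegressor(kernel=RBF(length_scale=[0.3, 0.3]),
                                    normalize_y=True).fit(design, runs)

# Probabilistic forecast: Monte Carlo over the *actual* input distribution,
# evaluated through the emulator instead of the expensive simulator
inputs = rng.normal([0.5, 0.4], [0.1, 0.05], size=(5000, 2))
mean, std = emulator.predict(inputs, return_std=True)
# Fold the emulator's own calibrated uncertainty into the sampled outputs
outputs = rng.normal(mean, std)
print(f"forecast mean = {outputs.mean():.3f}, 90% interval = "
      f"[{np.percentile(outputs, 5):.3f}, {np.percentile(outputs, 95):.3f}]")
```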

  1. Optimal output fast feedback in two-time scale control of flexible arms

    NASA Technical Reports Server (NTRS)

    Siciliano, B.; Calise, A. J.; Jonnalagadda, V. R. P.

    1986-01-01

    Control of lightweight flexible arms moving along predefined paths can be successfully synthesized on the basis of a two-time scale approach. A model following control can be designed for the reduced order slow subsystem. The fast subsystem is a linear system in which the slow variables act as parameters. The flexible fast variables which model the deflections of the arm along the trajectory can be sensed through strain gage measurements. For full state feedback design the derivatives of the deflections need to be estimated. The main contribution of this work is the design of an output feedback controller which includes a fixed order dynamic compensator, based on a recent convergent numerical algorithm for calculating LQ optimal gains. The design procedure is tested by means of simulation results for the one link flexible arm prototype in the laboratory.

  2. Bayesian model calibration of ramp compression experiments on Z

    NASA Astrophysics Data System (ADS)

    Brown, Justin; Hund, Lauren

    2017-06-01

    Bayesian model calibration (BMC) is a statistical framework to estimate inputs for a computational model in the presence of multiple uncertainties, making it well suited to dynamic experiments which must be coupled with numerical simulations to interpret the results. Often, dynamic experiments are diagnosed using velocimetry and this output can be modeled using a hydrocode. Several calibration issues unique to this type of scenario including the functional nature of the output, uncertainty of nuisance parameters within the simulation, and model discrepancy identifiability are addressed, and a novel BMC process is proposed. As a proof of concept, we examine experiments conducted on Sandia National Laboratories' Z-machine which ramp compressed tantalum to peak stresses of 250 GPa. The proposed BMC framework is used to calibrate the cold curve of Ta (with uncertainty), and we conclude that the procedure results in simple, fast, and valid inferences. Sandia National Laboratories is a multi-mission laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
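
    A generic, minimal Bayesian-calibration sketch in the same spirit (not the authors' framework, and with a toy quadratic "cold curve" standing in for the hydrocode): synthetic observations, flat priors, a Gaussian likelihood, and a random-walk Metropolis sampler.

```python
import numpy as np

rng = np.random.default_rng(5)

def forward_model(theta, strain):
    """Toy stand-in for the hydrocode: a quadratic 'cold curve' mapping
    compression (strain) to stress, parameterized by theta = (a, b)."""
    a, b = theta
    return a * strain + b * strain**2

# Synthetic 'velocimetry-derived' observations with known noise
strain = np.linspace(0.05, 0.30, 12)
truth = forward_model((120.0, 900.0), strain)
sigma_obs = 2.0
data = truth + rng.normal(0.0, sigma_obs, strain.size)

def log_posterior(theta):
    if not (0.0 < theta[0] < 500.0 and 0.0 < theta[1] < 5000.0):   # flat priors
        return -np.inf
    resid = data - forward_model(theta, strain)
    return -0.5 * np.sum((resid / sigma_obs) ** 2)

# Random-walk Metropolis
theta = np.array([100.0, 1000.0])
lp = log_posterior(theta)
chain = []
for _ in range(20000):
    prop = theta + rng.normal(0.0, [2.0, 20.0])
    lp_prop = log_posterior(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    chain.append(theta)
chain = np.array(chain[5000:])                    # discard burn-in
print("posterior means:", chain.mean(axis=0))
print("posterior stds :", chain.std(axis=0))
```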

  3. Nine time steps: ultra-fast statistical consistency testing of the Community Earth System Model (pyCECT v3.0)

    NASA Astrophysics Data System (ADS)

    Milroy, Daniel J.; Baker, Allison H.; Hammerling, Dorit M.; Jessup, Elizabeth R.

    2018-02-01

    The Community Earth System Model Ensemble Consistency Test (CESM-ECT) suite was developed as an alternative to requiring bitwise identical output for quality assurance. This objective test provides a statistical measurement of consistency between an accepted ensemble created by small initial temperature perturbations and a test set of CESM simulations. In this work, we extend the CESM-ECT suite with an inexpensive and robust test for ensemble consistency that is applied to Community Atmospheric Model (CAM) output after only nine model time steps. We demonstrate that adequate ensemble variability is achieved with instantaneous variable values at the ninth step, despite rapid perturbation growth and heterogeneous variable spread. We refer to this new test as the Ultra-Fast CAM Ensemble Consistency Test (UF-CAM-ECT) and demonstrate its effectiveness in practice, including its ability to detect small-scale events and its applicability to the Community Land Model (CLM). The new ultra-fast test facilitates CESM development, porting, and optimization efforts, particularly when used to complement information from the original CESM-ECT suite of tools.
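
    A drastically simplified sketch of a PCA-based ensemble consistency test, assuming each run is summarized by a vector of global variable means; the pass/fail rule and all numbers are invented for illustration and do not reproduce the published CESM-ECT procedure.

```python
import numpy as np

rng = np.random.default_rng(6)

# Stand-in 'ensemble': global means of n_var variables from n_ens accepted runs
n_ens, n_var = 100, 20
base = rng.normal(0.0, 1.0, n_var)
ensemble = base + rng.normal(0.0, 0.1, size=(n_ens, n_var))

# Fit the consistency test: standardize, then project onto principal components
mu, sd = ensemble.mean(axis=0), ensemble.std(axis=0)
z = (ensemble - mu) / sd
_, _, vt = np.linalg.svd(z, full_matrices=False)
scores = z @ vt.T
score_sd = scores.std(axis=0)

def consistent(run, n_pc=10, z_crit=3.0, max_fails=2):
    """Flag a run as inconsistent if too many leading PC scores fall outside
    the ensemble spread (a simplified pass/fail rule, not the published one)."""
    pc = ((run - mu) / sd) @ vt.T
    fails = np.sum(np.abs(pc[:n_pc]) > z_crit * score_sd[:n_pc])
    return fails <= max_fails

ok_run = base + rng.normal(0.0, 0.1, n_var)            # statistically consistent
bad_run = base + rng.normal(0.0, 0.1, n_var) + 0.5     # systematic shift
print("control run passes:", consistent(ok_run))
print("shifted run passes:", consistent(bad_run))
```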

  4. Atmospheric Probe Model: Construction and Wind Tunnel Tests

    NASA Technical Reports Server (NTRS)

    Vogel, Jerald M.

    1998-01-01

    The material contained in this document represents a summary of the results of a low speed wind tunnel test program to determine the performance of an atmospheric probe at low speed. The probe configuration tested consists of a 2/3 scale model constructed from a combination of hard maple wood and aluminum stock. The model design includes approximately 130 surface static pressure taps. Additional hardware incorporated in the baseline model provides a mechanism for simulating external and internal trailing edge split flaps for probe flow control. Test matrix parameters include probe side slip angle, external/internal split flap deflection angle, and trip strip applications. Test output database includes surface pressure distributions on both inner and outer annular wings and probe center line velocity distributions from forward probe to aft probe locations.

  5. Comparative Assessment of Tactics to Improve Primary Frequency Response Without Curtailing Solar Output in High Photovoltaic Interconnection Grids

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tan, Jin; Zhang, Yingchen; You, Shutang

    Power grid primary frequency response will be significantly impaired by increasing photovoltaic (PV) penetration because of the accompanying decrease in inertia and governor response. PV inertia and governor emulation require reserving PV output headroom and therefore waste solar energy. This paper exploits existing grid resources and explores energy storage for primary frequency response under high PV penetration at the interconnection level. Based on actual models of the U.S. Eastern Interconnection and the Texas grid, the effects of multiple factors associated with primary frequency response, including the governor ratio, governor deadband, droop rate, and fast load response, are assessed under high PV penetration scenarios. In addition, the performance of batteries and supercapacitors using different control strategies is studied in the two interconnections. The paper quantifies the potential of various resources to improve interconnection-level primary frequency response under high PV penetration without curtailing solar output.
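
    For intuition only, the sketch below uses a single-area swing equation with a first-order governor (all parameters are illustrative and are not taken from the interconnection models) to show how the frequency nadir after a generation loss deepens as system inertia falls.

```python
import numpy as np

def frequency_nadir(h_sec, droop_pct, d_load=1.0, dp=0.05, t_end=20.0, dt=0.01,
                    t_gov=8.0):
    """Single-area swing equation with a first-order governor model.
    h_sec: system inertia constant (s); droop_pct: governor droop (%);
    dp: size of the generation loss (per unit). Returns the frequency nadir (Hz)."""
    f0 = 60.0
    df, p_gov = 0.0, 0.0                      # frequency deviation (pu), governor output (pu)
    nadir = 0.0
    for _ in range(int(t_end / dt)):
        p_gov_ref = -df / (droop_pct / 100.0) # droop response to the frequency error
        p_gov += dt * (p_gov_ref - p_gov) / t_gov
        # Swing equation: 2H d(df)/dt = P_mech - P_loss - D*df
        ddf = (p_gov - dp - d_load * df) / (2.0 * h_sec)
        df += dt * ddf
        nadir = min(nadir, df)
    return f0 * (1.0 + nadir)

for h in (5.0, 3.0, 1.5):                     # inertia falls as PV displaces synchronous units
    print(f"H={h:3.1f} s -> nadir = {frequency_nadir(h, droop_pct=5.0):6.3f} Hz")
```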

  6. Life and dynamic capacity modeling for aircraft transmissions

    NASA Technical Reports Server (NTRS)

    Savage, Michael

    1991-01-01

    A computer program to simulate the dynamic capacity and life of parallel shaft aircraft transmissions is presented. Five basic configurations can be analyzed: single mesh, compound, parallel, reverted, and single plane reductions. In execution, the program prompts the user for the data file prefix name, takes input from an ASCII file, and writes its output to a second ASCII file with the same prefix name. The input data file includes the transmission configuration, the input shaft torque and speed, and descriptions of the transmission geometry and the component gears and bearings. The program output file describes the transmission, its components, their capabilities, locations, and loads. It also lists the dynamic capability, ninety percent reliability, and mean life of each component and the transmission as a system. Here, the program, its input and output files, and the theory behind the operation of the program are described.

  7. Development of a 402.5 MHz 140 kW Inductive Output Tube

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    R. Lawrence Ives; Michael Read, Robert Jackson

    2012-05-09

    This report contains the results of Phase I of an SBIR to develop a Pulsed Inductive Output Tube (IOT) with 140 kW at 400 MHz for powering H-proton beams. A number of sources, including single beam and multiple beam klystrons, can provide this power, but the IOT provides higher efficiency. Efficiencies exceeding 70% are routinely achieved. The gain is typically limited to approximately 24 dB; however, the availability of highly efficient, solid state drivers reduces the significance of this limitation, particularly at lower frequencies. This program initially focused on developing a 402 MHz IOT; however, the DOE requirement for this device was terminated during the program. The SBIR effort was refocused on improving the IOT design codes to more accurately simulate the time dependent behavior of the input cavity, electron gun, output cavity, and collector. Significant improvement was achieved in modeling capability and simulation accuracy.

  8. To publish or not to publish? On the aggregation and drivers of research performance

    PubMed Central

    De Witte, Kristof

    2010-01-01

    This paper presents a methodology to aggregate multidimensional research output. Using a tailored version of the non-parametric Data Envelopment Analysis model, we account for the large heterogeneity in research output and the individual researcher preferences by endogenously weighting the various output dimensions. The approach offers three important advantages compared to the traditional approaches: (1) flexibility in the aggregation of different research outputs into an overall evaluation score; (2) a reduction of the impact of measurement errors and atypical observations; and (3) a correction for the influences of a wide variety of factors outside the evaluated researcher’s control. As a result, research evaluations are more effective representations of actual research performance. The methodology is illustrated on a data set of all faculty members at a large polytechnic university in Belgium. The sample includes questionnaire items on the motivation and perception of the researcher. This allows us to explore whether motivation and background characteristics (such as age, gender, and retention) of the researchers explain variations in measured research performance. PMID:21057573
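
    A minimal benefit-of-the-doubt DEA sketch of the endogenous-weighting idea, using scipy's linear-programming routine and invented output data; the paper's tailored robust and conditional refinements are not reproduced here.

```python
import numpy as np
from scipy.optimize import linprog

# Illustrative research outputs per faculty member: papers, citations, PhD students
outputs = np.array([
    [4.0, 120.0, 1.0],
    [1.0,  20.0, 3.0],
    [6.0,  40.0, 0.0],
    [2.0,  60.0, 2.0],
])
n, m = outputs.shape

scores = []
for i in range(n):
    # Benefit-of-the-doubt weighting: choose the weights most favourable to
    # researcher i, subject to no researcher scoring above 1 with those weights.
    c = -outputs[i]                      # linprog minimizes, so negate the objective
    A_ub = outputs                       # y_j . w <= 1 for every researcher j
    b_ub = np.ones(n)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * m, method="highs")
    scores.append(-res.fun)

for i, s in enumerate(scores):
    print(f"researcher {i}: composite score = {s:.2f}")
```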

  9. User interface for ground-water modeling: Arcview extension

    USGS Publications Warehouse

    Tsou, Ming‐shu; Whittemore, Donald O.

    2001-01-01

    Numerical simulation for ground-water modeling often involves handling large input and output data sets. A geographic information system (GIS) provides an integrated platform to manage, analyze, and display disparate data and can greatly facilitate modeling efforts in data compilation, model calibration, and display of model parameters and results. Furthermore, GIS can be used to generate information for decision making through spatial overlay and processing of model results. ArcView is the most widely used Windows-based GIS software that provides a robust user-friendly interface to facilitate data handling and display. An extension is an add-on program to ArcView that provides additional specialized functions. An ArcView interface for the ground-water flow and transport models MODFLOW and MT3D was built as an extension for facilitating modeling. The extension includes preprocessing of spatially distributed (point, line, and polygon) data for model input and postprocessing of model output. An object database is used for linking user dialogs and model input files. The ArcView interface utilizes the capabilities of the 3D Analyst extension. Models can be automatically calibrated through the ArcView interface by external linking to such programs as PEST. The efficient pre- and postprocessing capabilities and calibration link were demonstrated for ground-water modeling in southwest Kansas.

  10. Global robust output regulation control for cascaded nonlinear systems using the internal model principle

    NASA Astrophysics Data System (ADS)

    Yu, Jiang-Bo; Zhao, Yan; Wu, Yu-Qiang

    2014-04-01

    This article considers the global robust output regulation problem via output feedback for a class of cascaded nonlinear systems with input-to-state stable inverse dynamics. The system uncertainties depend not only on the measured output but also all the unmeasurable states. By introducing an internal model, the output regulation problem is converted into a stabilisation problem for an appropriately augmented system. The designed dynamic controller could achieve the global asymptotic tracking control for a class of time-varying reference signals for the system output while keeping all other closed-loop signals bounded. It is of interest to note that the developed control approach can be applied to the speed tracking control of the fan speed control system. The simulation results demonstrate its effectiveness.

  11. Light extraction in planar light-emitting diode with nonuniform current injection: model and simulation.

    PubMed

    Khmyrova, Irina; Watanabe, Norikazu; Kholopova, Julia; Kovalchuk, Anatoly; Shapoval, Sergei

    2014-07-20

    We develop an analytical and numerical model for simulating light extraction through the planar output interface of light-emitting diodes (LEDs) with nonuniform current injection. Spatial nonuniformity of the injected current is a characteristic feature of LEDs in which the top metal electrode is patterned as a mesh to enhance the output power of light extracted through the top surface. Basic features of the model are a bi-plane computation domain with numerical-grid (NG) cells related between the two planes, representation of the light-generating layer by an ensemble of point light sources, numerical "collection" of photons from the area bounded by an acceptance circle, and adjustment of NG-cell areas in the computation procedure by an angle-tuned aperture function. The model and procedure are used to simulate spatial distributions of the output optical power, as well as the total output power, at different mesh pitches. The proposed model and simulation strategy can be efficient for evaluating the output optical performance of LEDs with periodic or symmetric electrode configurations.
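
    A loose geometric-optics sketch in the spirit of the model (point sources, an escape cone bounded by the critical angle, and mesh shading); the refractive index, finger width, and current-spreading law are all invented for illustration and are not the authors' values.

```python
import numpy as np

n_semi = 3.5                                  # assumed semiconductor refractive index
theta_c = np.arcsin(1.0 / n_semi)             # critical angle of the escape cone

def extraction_map(pitch_um, finger_um=10.0, n_cells=200, size_um=400.0):
    """Fraction of generated photons escaping through the planar top surface for
    point sources on a grid, with a simple mesh-shadow and current-spreading model."""
    x = np.linspace(0.0, size_um, n_cells)
    xx, yy = np.meshgrid(x, x)
    # Distance to the nearest mesh finger (fingers along both axes, period = pitch)
    dx = np.minimum(xx % pitch_um, pitch_um - xx % pitch_um)
    dy = np.minimum(yy % pitch_um, pitch_um - yy % pitch_um)
    dist = np.minimum(dx, dy)
    # Nonuniform injection: current density decays away from the metal fingers
    current = np.exp(-dist / 50.0)
    # Escape-cone fraction for an isotropic point source under a planar interface
    escape = 0.5 * (1.0 - np.cos(theta_c))
    # Photons emitted directly under a finger are blocked by the metal
    blocked = dist < (finger_um / 2.0)
    out = np.where(blocked, 0.0, escape) * current
    return out.sum() / current.sum()

for pitch in (50.0, 100.0, 200.0):
    print(f"mesh pitch {pitch:5.0f} um -> relative extraction {extraction_map(pitch):.3f}")
```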

  12. Measurements and Modeling of Total Solar Irradiance in X-class Solar Flares

    NASA Technical Reports Server (NTRS)

    Moore, Christopher S.; Chamberlin, Phillip Clyde; Hock, Rachel

    2014-01-01

    The Total Irradiance Monitor (TIM) from NASA's SOlar Radiation and Climate Experiment can detect changes in the total solar irradiance (TSI) to a precision of 2 ppm, allowing observations of variations due to the largest X-class solar flares for the first time. Presented here is a robust algorithm for determining the radiative output in the TIM TSI measurements, in both the impulsive and gradual phases, for the four solar flares presented in Woods et al., as well as an additional flare measured on 2006 December 6. The radiative outputs for both phases of these five flares are then compared to the vacuum ultraviolet (VUV) irradiance output from the Flare Irradiance Spectral Model (FISM) in order to derive an empirical relationship between the FISM VUV model and the TIM TSI data output to estimate the TSI radiative output for eight other X-class flares. This model provides the basis for the bolometric energy estimates for the solar flares analyzed in the Emslie et al. study.

  13. Estimating the Health Effects of Greenhouse Gas Mitigation Strategies: Addressing Parametric, Model, and Valuation Challenges

    PubMed Central

    Hess, Jeremy J.; Ebi, Kristie L.; Markandya, Anil; Balbus, John M.; Wilkinson, Paul; Haines, Andy; Chalabi, Zaid

    2014-01-01

    Background: Policy decisions regarding climate change mitigation are increasingly incorporating the beneficial and adverse health impacts of greenhouse gas emission reduction strategies. Studies of such co-benefits and co-harms involve modeling approaches requiring a range of analytic decisions that affect the model output. Objective: Our objective was to assess analytic decisions regarding model framework, structure, choice of parameters, and handling of uncertainty when modeling health co-benefits, and to make recommendations for improvements that could increase policy uptake. Methods: We describe the assumptions and analytic decisions underlying models of mitigation co-benefits, examining their effects on modeling outputs, and consider tools for quantifying uncertainty. Discussion: There is considerable variation in approaches to valuation metrics, discounting methods, uncertainty characterization and propagation, and assessment of low-probability/high-impact events. There is also variable inclusion of adverse impacts of mitigation policies, and limited extension of modeling domains to include implementation considerations. Going forward, co-benefits modeling efforts should be carried out in collaboration with policy makers; these efforts should include the full range of positive and negative impacts and critical uncertainties, as well as a range of discount rates, and should explicitly characterize uncertainty. We make recommendations to improve the rigor and consistency of modeling of health co-benefits. Conclusion: Modeling health co-benefits requires systematic consideration of the suitability of model assumptions, of what should be included and excluded from the model framework, and how uncertainty should be treated. Increased attention to these and other analytic decisions has the potential to increase the policy relevance and application of co-benefits modeling studies, potentially helping policy makers to maximize mitigation potential while simultaneously improving health. Citation: Remais JV, Hess JJ, Ebi KL, Markandya A, Balbus JM, Wilkinson P, Haines A, Chalabi Z. 2014. Estimating the health effects of greenhouse gas mitigation strategies: addressing parametric, model, and valuation challenges. Environ Health Perspect 122:447–455; http://dx.doi.org/10.1289/ehp.1306744 PMID:24583270

  14. Wind tunnel measurements of the power output variability and unsteady loading in a micro wind farm model

    NASA Astrophysics Data System (ADS)

    Bossuyt, Juliaan; Howland, Michael; Meneveau, Charles; Meyers, Johan

    2015-11-01

    To optimize wind farm layouts for a maximum power output and wind turbine lifetime, mean power output measurements in wind tunnel studies are not sufficient. Instead, detailed temporal information about the power output and unsteady loading from every single wind turbine in the wind farm is needed. A very small porous disc model with a realistic thrust coefficient of 0.75-0.85 was designed. The model is instrumented with a strain gage, allowing measurements of the thrust force, incoming velocity and power output with a frequency response up to the natural frequency of the model. This is shown by reproducing the -5/3 spectrum from the incoming flow. Thanks to its small size and compact instrumentation, the model allows wind tunnel studies of large wind turbine arrays with detailed temporal information from every wind turbine. Translating to field conditions with a length-scale ratio of 1:3,000, the frequencies studied from the data range from 10^-4 Hz up to about 6×10^-2 Hz. The model's capabilities are demonstrated with a large wind farm measurement consisting of close to 100 instrumented models. A high correlation is found between the power outputs of streamwise aligned wind turbines, which is in good agreement with results from prior LES simulations. Work supported by ERC (ActiveWindFarms, grant no. 306471) and by NSF (grants CBET-113380 and IIA-1243482, the WINDINSPIRE project).

  15. A spectral method for spatial downscaling

    EPA Pesticide Factsheets

    Complex computer models play a crucial role in air quality research. These models are used to evaluate potential regulatory impacts of emission control strategies and to estimate air quality in areas without monitoring data. For both of these purposes, it is important to calibrate model output with monitoring data to adjust for model biases and improve spatial prediction. In this paper, we propose a new spectral method to study and exploit complex relationships between model output and monitoring data. Spectral methods allow us to estimate the relationship between model output and monitoring data separately at different spatial scales, and to use model output for prediction only at the appropriate scales. The proposed method is computationally efficient and can be implemented using standard software. We apply the method to compare Community Multiscale Air Quality (CMAQ) model output with ozone measurements in the United States in July, 2005. We find that CMAQ captures large-scale spatial trends, but has low correlation with the monitoring data at small spatial scales. The National Exposure Research Laboratory's (NERL's) Atmospheric Modeling Division (AMAD) conducts research in support of EPA's mission to protect human health and the environment. AMAD's research program is engaged in developing and evaluating predictive atmospheric models on all spatial and temporal scales for forecasting the Nation's air quality and for assessing ch

  16. Atmospheric numerical modeling resource enhancement and model convective parameterization/scale interaction studies

    NASA Technical Reports Server (NTRS)

    Cushman, Paula P.

    1993-01-01

    Research will be undertaken in this contract in the area of Modeling Resource and Facilities Enhancement to include computer, technical and educational support to NASA investigators to facilitate model implementation, execution and analysis of output; to provide facilities linking USRA and the NASA/EADS Computer System as well as resident work stations in ESAD; and to provide a centralized location for documentation, archival and dissemination of modeling information pertaining to NASA's program. Additional research will be undertaken in the area of Numerical Model Scale Interaction/Convective Parameterization Studies to include implementation of the comparison of cloud and rain systems and convective-scale processes between the model simulations and what was observed; and to incorporate the findings of these and related research findings in at least two refereed journal articles.

  17. Efficiency of static core turn-off in a system-on-a-chip with variation

    DOEpatents

    Cher, Chen-Yong; Coteus, Paul W; Gara, Alan; Kursun, Eren; Paulsen, David P; Schuelke, Brian A; Sheets, II, John E; Tian, Shurong

    2013-10-29

    A processor-implemented method for improving efficiency of a static core turn-off in a multi-core processor with variation, the method comprising: conducting via a simulation a turn-off analysis of the multi-core processor at the multi-core processor's design stage, wherein the turn-off analysis of the multi-core processor at the multi-core processor's design stage includes a first output corresponding to a first multi-core processor core to turn off; conducting a turn-off analysis of the multi-core processor at the multi-core processor's testing stage, wherein the turn-off analysis of the multi-core processor at the multi-core processor's testing stage includes a second output corresponding to a second multi-core processor core to turn off; comparing the first output and the second output to determine if the first output is referring to the same core to turn off as the second output; outputting a third output corresponding to the first multi-core processor core if the first output and the second output are both referring to the same core to turn off.

  18. Computer code for off-design performance analysis of radial-inflow turbines with rotor blade sweep

    NASA Technical Reports Server (NTRS)

    Meitner, P. L.; Glassman, A. J.

    1983-01-01

    The analysis procedure of an existing computer program was extended to include rotor blade sweep, to model the flow more accurately at the rotor exit, and to provide more detail to the loss model. The modeling changes are described and all analysis equations and procedures are presented. Program input and output are described and are illustrated by an example problem. Results obtained from this program and from a previous program are compared with experimental data.

  19. An analytical procedure and automated computer code used to design model nozzles which meet MSFC base pressure similarity parameter criteria. [space shuttle

    NASA Technical Reports Server (NTRS)

    Sulyma, P. R.

    1980-01-01

    Fundamental equations and similarity definition and application are described as well as the computational steps of a computer program developed to design model nozzles for wind tunnel tests conducted to define power-on aerodynamic characteristics of the space shuttle over a range of ascent trajectory conditions. The computer code capabilities, a user's guide for the model nozzle design program, and the output format are examined. A program listing is included.

  20. Atmospheric model development in support of SEASAT. Volume 2: Analysis models

    NASA Technical Reports Server (NTRS)

    Langland, R. A.

    1977-01-01

    As part of the SEASAT program of NASA, two sets of analysis programs were developed for the Jet Propulsion Laboratory. One set of programs produces 63 x 63 horizontal mesh analyses on a polar stereographic grid. The other set produces 187 x 187 third mesh analyses. The parameters analyzed include sea surface temperature, sea level pressure and twelve levels of upper air temperature, height and wind analyses. The analysis output is used to initialize the primitive equation forecast models.

  1. Identification of quasi-steady compressor characteristics from transient data

    NASA Technical Reports Server (NTRS)

    Nunes, K. B.; Rock, S. M.

    1984-01-01

    The principal goal was to demonstrate that nonlinear compressor map parameters, which govern an in-stall response, can be identified from test data using parameter identification techniques. The tasks included developing and then applying an identification procedure to data generated by NASA LeRC on a hybrid computer. Two levels of model detail were employed. First was a lumped compressor rig model; second was a simplified turbofan model. The main outputs are the tools and procedures generated to accomplish the identification.

  2. Evaluating the effectiveness of intercultural teachers.

    PubMed

    Cox, Kathleen

    2011-01-01

    With globalization and major immigration flows, intercultural teaching encounters are likely to increase, along with the need to assure intercultural teaching effectiveness. Thus, the purpose of this article is to present a conceptual framework for nurse educators to consider when anticipating an intercultural teaching experience. Kirkpatrick's and Bushnell's models provide a basis for the conceptual framework. Major concepts of the model include input, process, output, and outcome. The model may possibly be used to guide future research to determine which variables are most influential in explaining intercultural teaching effectiveness.

  3. User's guide for a computer program to analyze the LRC 16 ft transonic dynamics tunnel cable mount system

    NASA Technical Reports Server (NTRS)

    Barbero, P.; Chin, J.

    1973-01-01

    The theoretical derivation of the set of equations applicable to modeling the dynamic characteristics of aeroelastically scaled models flown on the two-cable mount system in the 16 ft transonic dynamics tunnel is discussed. The computer program provided for the analysis is also described. The program calculates model trim conditions as well as 3 DOF longitudinal and lateral/directional dynamic conditions for various flying cable and snubber cable configurations. Sample input and output are included.

  4. SnopViz, an interactive snow profile visualization tool

    NASA Astrophysics Data System (ADS)

    Fierz, Charles; Egger, Thomas; Gerber, Matthias; Bavay, Mathias; Techel, Frank

    2016-04-01

    SnopViz is a visualization tool for both simulation outputs of the snow-cover model SNOWPACK and observed snow profiles. It has been designed to fulfil the needs of operational services (Swiss Avalanche Warning Service, Avalanche Canada) as well as offer the flexibility required to satisfy the specific needs of researchers. This JavaScript application runs on any modern browser and does not require an active Internet connection. The open source code is available for download from models.slf.ch where examples can also be run. Both the SnopViz library and the SnopViz User Interface will become a full replacement of the current research visualization tool SN_GUI for SNOWPACK. The SnopViz library is a stand-alone application that parses the provided input files, for example, a single snow profile (CAAML file format) or multiple snow profiles as output by SNOWPACK (PRO file format). A plugin architecture allows for handling JSON objects (JavaScript Object Notation) as well and plugins for other file formats may be added easily. The outputs are provided either as vector graphics (SVG) or JSON objects. The SnopViz User Interface (UI) is a browser based stand-alone interface. It runs in every modern browser, including IE, and allows user interaction with the graphs. SVG, the XML based standard for vector graphics, was chosen because of its easy interaction with JS and a good software support (Adobe Illustrator, Inkscape) to manipulate graphs outside SnopViz for publication purposes. SnopViz provides new visualization for SNOWPACK timeline output as well as time series input and output. The actual output format for SNOWPACK timelines was retained while time series are read from SMET files, a file format used in conjunction with the open source data handling code MeteoIO. Finally, SnopViz is able to render single snow profiles, either observed or modelled, that are provided as CAAML-file. This file format (caaml.org/Schemas/V5.0/Profiles/SnowProfileIACS) is an international standard to exchange snow profile data. It is supported by the International Association of Cryospheric Sciences (IACS) and was developed in collaboration with practitioners (Avalanche Canada).

  5. Approximate Optimal Control as a Model for Motor Learning

    ERIC Educational Resources Information Center

    Berthier, Neil E.; Rosenstein, Michael T.; Barto, Andrew G.

    2005-01-01

    Current models of psychological development rely heavily on connectionist models that use supervised learning. These models adapt network weights when the network output does not match the target outputs computed by some agent. The authors present a model of motor learning in which the child uses exploration to discover appropriate ways of…

  6. Use of Advanced Meteorological Model Output for Coastal Ocean Modeling in Puget Sound

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Zhaoqing; Khangaonkar, Tarang; Wang, Taiping

    2011-06-01

    It is a great challenge to specify meteorological forcing in estuarine and coastal circulation modeling using observed data because of the lack of complete datasets. As a result of this limitation, water temperature is often not simulated in estuarine and coastal modeling, with the assumption that density-induced currents are generally dominated by salinity gradients. However, in many situations, temperature gradients could be sufficiently large to influence the baroclinic motion. In this paper, we present an approach to simulate water temperature using outputs from advanced meteorological models. This modeling approach was applied to simulate annual variations of water temperatures of Puget Sound, a fjordal estuary in the Pacific Northwest of USA. Meteorological parameters from North American Regional Reanalysis (NARR) model outputs were evaluated with comparisons to observed data at real-time meteorological stations. Model results demonstrated that NARR outputs can be used to drive coastal ocean models for realistic simulations of long-term water-temperature distributions in Puget Sound. Model results indicated that the net flux from NARR can be further improved with the additional information from real-time observations.

  7. Simulation of deleterious processes in a static-cell diode pumped alkali laser

    NASA Astrophysics Data System (ADS)

    Oliker, Benjamin Q.; Haiducek, John D.; Hostutler, David A.; Pitz, Greg A.; Rudolph, Wolfgang; Madden, Timothy J.

    2014-02-01

    The complex interactions in a diode pumped alkali laser (DPAL) gain cell provide opportunities for multiple deleterious processes to occur. Effects that may be attributable to deleterious processes have been observed experimentally in a cesium static-cell DPAL at the United States Air Force Academy [B.V. Zhdanov, J. Sell, R.J. Knize, "Multiple laser diode array pumped Cs laser with 48 W output power," Electronics Letters, 44, 9 (2008)]. The power output in the experiment was seen to go through a "roll-over"; the maximum power output was obtained with about 70 W of pump power, then power output decreased as the pump power was increased beyond this point. Research to determine the deleterious processes that caused this result has been done at the Air Force Research Laboratory utilizing physically detailed simulation. The simulations utilized coupled computational fluid dynamics (CFD) and optics solvers, which were three-dimensional and time-dependent. The CFD code used a cell-centered, conservative, finite-volume discretization of the integral form of the Navier-Stokes equations. It included thermal energy transport and mass conservation, which accounted for chemical reactions and state kinetics. Optical models included pumping, lasing, and fluorescence. The deleterious effects investigated were: alkali number density decrease in high temperature regions, convective flow, pressure broadening and shifting of the absorption lineshape including hyperfine structure, radiative decay, quenching, energy pooling, off-resonant absorption, Penning ionization, photoionization, radiative recombination, three-body recombination due to free electron and buffer gas collisions, ambipolar diffusion, thermal aberration, dissociative recombination, multi-photon ionization, alkali-hydrocarbon reactions, and electron impact ionization.

  8. Using quantum theory to simplify input-output processes

    NASA Astrophysics Data System (ADS)

    Thompson, Jayne; Garner, Andrew J. P.; Vedral, Vlatko; Gu, Mile

    2017-02-01

    All natural things process and transform information. They receive environmental information as input, and transform it into appropriate output responses. Much of science is dedicated to building models of such systems-algorithmic abstractions of their input-output behavior that allow us to simulate how such systems can behave in the future, conditioned on what has transpired in the past. Here, we show that classical models cannot avoid inefficiency-storing past information that is unnecessary for correct future simulation. We construct quantum models that mitigate this waste, whenever it is physically possible to do so. This suggests that the complexity of general input-output processes depends fundamentally on what sort of information theory we use to describe them.

  9. Input-output model for MACCS nuclear accident impacts estimation¹

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Outkin, Alexander V.; Bixler, Nathan E.; Vargas, Vanessa N

    Since the original economic model for MACCS was developed, better quality economic data (as well as the tools to gather and process it) and better computational capabilities have become available. The update of the economic impacts component of the MACCS legacy model will provide improved estimates of business disruptions through the use of Input-Output based economic impact estimation. This paper presents an updated MACCS model, based on Input-Output methodology, in which economic impacts are calculated using the Regional Economic Accounting analysis tool (REAcct) created at Sandia National Laboratories. This new GDP-based model allows quick and consistent estimation of gross domestic product (GDP) losses due to nuclear power plant accidents. This paper outlines the steps taken to combine the REAcct Input-Output-based model with the MACCS code, describes the GDP loss calculation, and discusses the parameters and modeling assumptions necessary for the estimation of long-term effects of nuclear power plant accidents.
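
    For orientation, the snippet below shows the demand-side Leontief calculation that underlies this style of input-output impact estimate; the sector table, disruption fractions, and value-added ratio are invented for illustration and are not REAcct data.

```python
import numpy as np

# Illustrative 3-sector regional economy (agriculture, manufacturing, services):
# A[i, j] = dollars of input from sector i needed per dollar of output of sector j.
A = np.array([
    [0.10, 0.20, 0.05],
    [0.15, 0.25, 0.10],
    [0.10, 0.15, 0.20],
])
final_demand = np.array([50.0, 120.0, 200.0])      # $M, pre-accident

# Leontief model: total output x satisfies x = A x + d  =>  x = (I - A)^-1 d
leontief_inv = np.linalg.inv(np.eye(3) - A)
baseline_output = leontief_inv @ final_demand

# Accident scenario: sectors inside the affected area lose part of their demand
# for the duration of the disruption (fractions are illustrative).
disruption = np.array([0.60, 0.30, 0.10])          # fraction of final demand lost
disrupted_output = leontief_inv @ (final_demand * (1.0 - disruption))

value_added_ratio = 0.45                           # GDP share of gross output (assumed)
gdp_loss = value_added_ratio * (baseline_output - disrupted_output).sum()
print(f"gross output loss: {(baseline_output - disrupted_output).sum():.1f} $M")
print(f"approximate GDP loss: {gdp_loss:.1f} $M")
```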

  10. Surrogate modelling for the prediction of spatial fields based on simultaneous dimensionality reduction of high-dimensional input/output spaces.

    PubMed

    Crevillén-García, D

    2018-04-01

    Time-consuming numerical simulators for solving groundwater flow and dissolution models of physico-chemical processes in deep aquifers normally require some of the model inputs to be defined in high-dimensional spaces in order to return realistic results. Sometimes, the outputs of interest are spatial fields leading to high-dimensional output spaces. Although Gaussian process emulation has been satisfactorily used for computing faithful and inexpensive approximations of complex simulators, these have been mostly applied to problems defined in low-dimensional input spaces. In this paper, we propose a method for simultaneously reducing the dimensionality of very high-dimensional input and output spaces in Gaussian process emulators for stochastic partial differential equation models while retaining the qualitative features of the original models. This allows us to build a surrogate model for the prediction of spatial fields in such time-consuming simulators. We apply the methodology to a model of convection and dissolution processes occurring during carbon capture and storage.

  11. Object-oriented biomedical system modelling--the language.

    PubMed

    Hakman, M; Groth, T

    1999-11-01

    The paper describes a new object-oriented biomedical continuous system modelling language (OOBSML). It is fully object-oriented and supports model inheritance, encapsulation, and model component instantiation and behaviour polymorphism. Besides the traditional differential and algebraic equation expressions the language includes also formal expressions for documenting models and defining model quantity types and quantity units. It supports explicit definition of model input-, output- and state quantities, model components and component connections. The OOBSML model compiler produces self-contained, independent, executable model components that can be instantiated and used within other OOBSML models and/or stored within model and model component libraries. In this way complex models can be structured as multilevel, multi-component model hierarchies. Technically the model components produced by the OOBSML compiler are executable computer code objects based on distributed object and object request broker technology. This paper includes both the language tutorial and the formal language syntax and semantic description.

  12. Emulation: A fast stochastic Bayesian method to eliminate model space

    NASA Astrophysics Data System (ADS)

    Roberts, Alan; Hobbs, Richard; Goldstein, Michael

    2010-05-01

    Joint inversion of large 3D datasets has been the goal of geophysicists ever since the datasets first started to be produced. There are two broad approaches to this kind of problem: traditional deterministic inversion schemes and more recently developed Bayesian search methods, such as MCMC (Markov Chain Monte Carlo). However, both of these kinds of scheme have proved prohibitively expensive, both in computing power and time cost, due to the normally very large model space which needs to be searched using forward model simulators which take considerable time to run. At the heart of strategies aimed at accomplishing this kind of inversion is the question of how to reliably and practicably reduce the size of the model space in which the inversion is to be carried out. Here we present a practical Bayesian method, known as emulation, which can address this issue. Emulation is a Bayesian technique used with considerable success in a number of technical fields, such as in astronomy, where the evolution of the universe has been modelled using this technique, and in the petroleum industry, where history matching of hydrocarbon reservoirs is carried out. The method of emulation involves building a fast-to-compute uncertainty-calibrated approximation to a forward model simulator. We do this by modelling the output data from a number of forward simulator runs by a computationally cheap function, and then fitting the coefficients defining this function to the model parameters. By calibrating the error of the emulator output with respect to the full simulator output, we can use this to screen out large areas of model space which contain only implausible models. For example, starting with what may be considered a geologically reasonable prior model space of 10000 models, using the emulator we can quickly show that only models which lie within 10% of that model space actually produce output data which is plausibly similar in character to an observed dataset. We can thus much more tightly constrain the input model space for a deterministic inversion or MCMC method. By using this technique jointly on several datasets (specifically seismic, gravity, and magnetotelluric (MT) data describing the same region), we can include in our modelling uncertainties in the data measurements, the relationships between the various physical parameters involved, as well as the model representation uncertainty, and at the same time further reduce the range of plausible models to several percent of the original model space. Being stochastic in nature, the output posterior parameter distributions also allow our understanding of, and beliefs about, a geological region to be objectively updated, with full assessment of uncertainties, and so the emulator is also an inversion-type tool in its own right, with the advantage (as with any Bayesian method) that our uncertainties from all sources (both data and model) can be fully evaluated.
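
    A toy version of the screening idea: fit a cheap emulator to a handful of simulator runs, calibrate its error on held-out runs, and use an implausibility measure to discard regions of model space. The stand-in simulator, thresholds, and tolerances are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(7)

def simulator(x):
    """Expensive forward model stand-in (e.g. a geophysical response vs. one parameter)."""
    return np.sin(2.0 * x) + 0.3 * x

# A modest number of simulator runs spanning the prior model space
x_runs = np.linspace(0.0, 4.0, 15)
y_runs = simulator(x_runs)

# Emulator: a cheap-to-evaluate polynomial fitted to half the runs, with its error
# calibrated against the held-out runs
coeffs = np.polyfit(x_runs[::2], y_runs[::2], deg=5)
emulate = lambda x: np.polyval(coeffs, x)
emu_var = np.var(y_runs[1::2] - emulate(x_runs[1::2]))

# Observed datum with measurement uncertainty
x_true = 2.3
obs, obs_var = simulator(x_true) + rng.normal(0.0, 0.05), 0.05**2

# Implausibility screening over a dense sweep of the prior model space
x_grid = np.linspace(0.0, 4.0, 2001)
implausibility = np.abs(obs - emulate(x_grid)) / np.sqrt(emu_var + obs_var)
plausible = x_grid[implausibility < 3.0]
print(f"plausible fraction of model space: {plausible.size / x_grid.size:.1%}")
print("true parameter survives screening:",
      bool(np.any(np.abs(plausible - x_true) < 0.01)))
```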

  13. National Centers for Environmental Prediction

    Science.gov Websites

    /NDAS Output Fields (contents, format, grid specs, output frequency, archive): the NWP model; the horizontal output grid; the vertical grid; access to fields; anonymous FTP access; permanent tape archive.

  14. W3MAMCAT: a world wide web based tool for mammillary and catenary compartmental modeling and expert system distinguishability.

    PubMed

    Russell, Solomon; Distefano, Joseph J

    2006-07-01

    W3MAMCAT is a new web-based and interactive system for building and quantifying the parameters or parameter ranges of n-compartment mammillary and catenary model structures, with input and output in the first compartment, from unstructured multiexponential (sum-of-n-exponentials) models. It handles unidentifiable as well as identifiable models and, as such, provides finite parameter interval solutions for unidentifiable models, whereas direct parameter search programs typically do not. It also tutorially develops the theory of model distinguishability for same order mammillary versus catenary models, as did its desktop application predecessor MAMCAT+. This includes expert system analysis for distinguishing mammillary from catenary structures, given input and output in similarly numbered compartments. W3MAMCAT provides for universal deployment via the internet and enhanced application error checking. It uses supported Microsoft technologies to form an extensible application framework for maintaining a stable and easily updatable application. Most important, anybody, anywhere, is welcome to access it using Internet Explorer 6.0 over the internet for their teaching or research needs. It is available on the Biocybernetics Laboratory website at UCLA: www.biocyb.cs.ucla.edu.

  15. OSSOS: X. How to use a Survey Simulator: Statistical Testing of Dynamical Models Against the Real Kuiper Belt

    NASA Astrophysics Data System (ADS)

    Lawler, Samantha M.; Kavelaars, J. J.; Alexandersen, Mike; Bannister, Michele T.; Gladman, Brett; Petit, Jean-Marc; Shankman, Cory

    2018-05-01

    All surveys include observational biases, which makes it impossible to directly compare properties of discovered trans-Neptunian Objects (TNOs) with dynamical models. However, by carefully keeping track of survey pointings on the sky, detection limits, tracking fractions, and rate cuts, the biases from a survey can be modelled in Survey Simulator software. A Survey Simulator takes an intrinsic orbital model (from, for example, the output of a dynamical Kuiper belt emplacement simulation) and applies the survey biases, so that the biased simulated objects can be directly compared with real discoveries. This methodology has been used with great success in the Outer Solar System Origins Survey (OSSOS) and its predecessor surveys. In this chapter, we give four examples of ways to use the OSSOS Survey Simulator to gain knowledge about the true structure of the Kuiper Belt. We demonstrate how to statistically compare different dynamical model outputs with real TNO discoveries, how to quantify detection biases within a TNO population, how to measure intrinsic population sizes, and how to use upper limits from non-detections. We hope this will provide a framework for dynamical modellers to statistically test the validity of their models.
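
    The forward-biasing idea can be sketched in a few lines. The magnitude-dependent detection efficiency and the mock discovery sample below are invented for illustration; the real OSSOS Survey Simulator applies the survey's actual pointings, depths, tracking fractions, and rate cuts.

    ```python
    # Conceptual sketch: bias an intrinsic population model by a survey's
    # detection efficiency, then compare the biased sample with discoveries.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)

    # Intrinsic model: apparent magnitudes drawn from an assumed distribution.
    intrinsic_mag = rng.uniform(22.0, 26.5, size=200_000)

    def detection_efficiency(mag, mag_50=24.5, width=0.3):
        """Illustrative smooth efficiency curve (1 when bright, 0 when faint)."""
        return 1.0 / (1.0 + np.exp((mag - mag_50) / width))

    # Apply the survey bias: keep each simulated object with probability eta(mag).
    keep = rng.random(intrinsic_mag.size) < detection_efficiency(intrinsic_mag)
    detected = intrinsic_mag[keep]

    # Compare the biased simulation with (mock) real discoveries.
    real_discoveries = rng.normal(24.0, 0.6, size=80)   # stand-in for survey detections
    stat, p_value = stats.ks_2samp(detected, real_discoveries)
    print(f"KS statistic = {stat:.3f}, p = {p_value:.3f} (reject the model if p is small)")
    ```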

  16. Evaluating significance in linear mixed-effects models in R.

    PubMed

    Luke, Steven G

    2017-08-01

    Mixed-effects models are being used ever more frequently in the analysis of experimental data. However, in the lme4 package in R the standards for evaluating significance of fixed effects in these models (i.e., obtaining p-values) are somewhat vague. There are good reasons for this, but as researchers who are using these models are required in many cases to report p-values, some method for evaluating the significance of the model output is needed. This paper reports the results of simulations showing that the two most common methods for evaluating significance, using likelihood ratio tests and applying the z distribution to the Wald t values from the model output (t-as-z), are somewhat anti-conservative, especially for smaller sample sizes. Other methods for evaluating significance, including parametric bootstrapping and the Kenward-Roger and Satterthwaite approximations for degrees of freedom, were also evaluated. The results of these simulations suggest that Type 1 error rates are closest to .05 when models are fitted using REML and p-values are derived using the Kenward-Roger or Satterthwaite approximations, as these approximations both produced acceptable Type 1 error rates even for smaller samples.
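
    Of the methods compared, the parametric bootstrap is the easiest to sketch outside of R. The toy below uses ordinary least squares rather than a mixed model purely for brevity; with lme4 one would refit the full and reduced mixed models to data simulated from the reduced fit in exactly the same way.

    ```python
    # Sketch of the parametric-bootstrap idea for a likelihood-ratio test (OLS toy).
    import numpy as np

    rng = np.random.default_rng(2)

    def loglik(y, X):
        """Profile log-likelihood of a Gaussian linear model fitted by OLS."""
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        resid = y - X @ beta
        n = y.size
        sigma2 = resid @ resid / n
        return -0.5 * n * (np.log(2 * np.pi * sigma2) + 1.0)

    n = 40
    x = rng.normal(size=n)
    y = 1.0 + 0.4 * x + rng.normal(scale=1.0, size=n)
    X_full = np.column_stack([np.ones(n), x])     # intercept + slope
    X_red = np.ones((n, 1))                       # intercept only (null model)

    lrt_obs = 2.0 * (loglik(y, X_full) - loglik(y, X_red))

    # Simulate new responses from the *reduced* fit and recompute the LRT each time.
    beta0, *_ = np.linalg.lstsq(X_red, y, rcond=None)
    sigma = np.sqrt(np.mean((y - X_red @ beta0) ** 2))
    lrt_boot = []
    for _ in range(2000):
        y_sim = X_red @ beta0 + rng.normal(scale=sigma, size=n)
        lrt_boot.append(2.0 * (loglik(y_sim, X_full) - loglik(y_sim, X_red)))

    p_boot = np.mean(np.array(lrt_boot) >= lrt_obs)
    print(f"bootstrap p-value for the slope: {p_boot:.3f}")
    ```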

  17. The effects of carbon tax on the Oregon economy and state greenhouse gas emissions

    NASA Astrophysics Data System (ADS)

    Rice, A. L.; Butenhoff, C. L.; Renfro, J.; Liu, J.

    2014-12-01

    Of the numerous mechanisms to mitigate greenhouse gas emissions on statewide, regional or national scales in the United States, a tax on carbon is perhaps one of the simplest. By taxing emissions directly, the costs of carbon emissions are incorporated into the decision-making processes of market actors including consumers, energy suppliers and policy makers. A carbon tax also internalizes the social costs of climate impacts. By structuring carbon tax revenues to reduce corporate and personal income taxes, the negative incentives created by distortionary income taxes can be reduced or offset entirely. In 2008, the first carbon tax in North America across economic sectors was implemented in British Columbia through such a revenue-neutral program. In this work, we investigate the economic and environmental effects of a carbon tax in the state of Oregon with the goal of informing the state legislature, stakeholders and the public. The study investigates 70 different economic sectors in the Oregon economy and six geographical regions of the state. The economic model is built upon the Carbon Tax Analysis Model (C-TAM), which provides price changes in fuel using data from the Energy Information Administration National Energy Modeling System (EIA-NEMS) Pacific Region Module, which provides Oregon-specific energy forecasts, and fuel price increases imposed at different carbon fees based on fuel-specific carbon content and current and projected region-specific electricity fuel mixes. C-TAM output is incorporated into the Regional Economic Model (REMI), which is used to dynamically forecast economic impacts by region and industry sector, including economic output, employment, wages, fiscal effects and equity. Based on changes in economic output and fuel demand, we further project changes in greenhouse gas emissions resulting from economic activity and calculate the revenue generated through a carbon fee. Here, we present results of this modeling effort under different scenarios of carbon fee and avenues for revenue repatriation.
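
    The fuel-price step in this chain is simple arithmetic: a carbon fee in dollars per tonne of CO2 times a fuel's carbon content gives the price increase per unit of fuel. The sketch below uses round, approximate emission factors for illustration only; it is not C-TAM output or data from this study.

    ```python
    # Back-of-the-envelope fuel-price increase implied by a carbon fee.
    # Emission factors are approximate round numbers for illustration.
    EMISSION_FACTORS = {                      # kg CO2 per unit of fuel
        "gasoline (gal)": 8.9,
        "diesel (gal)": 10.2,
        "natural gas (therm)": 5.3,
        "coal-fired electricity (kWh)": 1.0,
    }

    def price_increase(fee_usd_per_tonne: float) -> dict:
        """Price increase per fuel unit implied by a carbon fee in $/tCO2."""
        return {fuel: fee_usd_per_tonne * kg / 1000.0
                for fuel, kg in EMISSION_FACTORS.items()}

    for fee in (10, 30, 60):                  # candidate carbon fees in $/tCO2
        increases = price_increase(fee)
        print(f"fee = ${fee}/tCO2: " +
              ", ".join(f"{fuel} +${dp:.2f}" for fuel, dp in increases.items()))
    ```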

  18. Numerical investigation of impact of relative humidity on droplet accumulation and film cooling on compressor blades

    NASA Astrophysics Data System (ADS)

    Bugarin, Luz Irene

    During the summer, high inlet temperatures reduce the power output of gas turbine systems. Evaporative coolers have gained popularity as an inlet cooling method for these systems. Wet compression has been one of the common evaporative cooling methods implemented to increase the power output of gas turbine systems due to its simple installation and low cost. This process involves injection of water droplets into the continuous phase of the compressor to reduce the temperature of the flow entering the compressor and in turn increase the power output of the whole gas turbine system. This study focused on a single-stage rotor-stator compressor model with inlet temperature varying between 300 K and 320 K and relative humidity between 0% and 100%. The simulations were carried out using the commercial CFD tool ANSYS FLUENT. The study modeled the interaction between the two phases, including mass and heat transfer, under different inlet relative humidity (RH) and temperature conditions. The Reynolds-Averaged Navier-Stokes (RANS) equations with the k-epsilon turbulence model were applied, and droplet coalescence and droplet breakup models were considered in the simulation. Sliding mesh theory was implemented to simulate the compressor movement in 2-D. The interaction between the blade and droplets was modeled to address all possible interactions (stick, spread, splash, or rebound) and compared to an interaction of reflection only. The goal of this study is to quantify the relation between RH, inlet temperature, the overall heat transfer coefficient, and the heat transferred from the droplets to the blade surface. The results of this study provide further evidence that wet compression yields higher pressure ratios and lower temperatures in the domain under all of the cases. Additionally, the droplet-wall interaction has a notable effect on the heat transfer coefficient at the compressor blades.

  19. SMP: A solid modeling program version 2.0

    NASA Technical Reports Server (NTRS)

    Randall, D. P.; Jones, K. H.; Vonofenheim, W. H.; Gates, R. L.; Matthews, C. G.

    1986-01-01

    The Solid Modeling Program (SMP) provides the capability to model complex solid objects through the composition of primitive geometric entities. In addition to the construction of solid models, SMP has extensive facilities for model editing, display, and analysis. The geometric model produced by the software system can be output in a format compatible with existing analysis programs such as PATRAN-G. The present version of the SMP software supports six primitives: boxes, cones, spheres, paraboloids, tori, and trusses. The details for creating each of the major primitive types are presented. The analysis capabilities of SMP, including interfaces to existing analysis programs, are discussed.

  20. Design of double fuzzy clustering-driven context neural networks.

    PubMed

    Kim, Eun-Hu; Oh, Sung-Kwun; Pedrycz, Witold

    2018-08-01

    In this study, we introduce a novel category of double fuzzy clustering-driven context neural networks (DFCCNNs). The study is focused on the development of advanced design methodologies for redesigning the structure of conventional fuzzy clustering-based neural networks. The conventional fuzzy clustering-based neural networks typically focus on dividing the input space into several local spaces (implied by clusters). In contrast, the proposed DFCCNNs take into account two distinct local spaces called context and cluster spaces, respectively. Cluster space refers to the local space positioned in the input space whereas context space concerns a local space formed in the output space. Through partitioning the output space into several local spaces, each context space is used as the desired (target) local output to construct local models. To accomplish this, the proposed network includes a new context layer for reasoning about context space in the output space. In this sense, Fuzzy C-Means (FCM) clustering is used to form local spaces in both the input and output spaces. The first is used to form clusters and train weights positioned between the input and hidden layer, whereas the second is applied to the output space to form context spaces. The key features of the proposed DFCCNNs can be enumerated as follows: (i) the parameters between the input layer and hidden layer are built through FCM clustering. The connections (weights) are specified as constant terms that are in fact the centers of the clusters. The membership functions (represented through the partition matrix) produced by the FCM are used as activation functions located at the hidden layer of the "conventional" neural networks. (ii) Following the hidden layer, a context layer is formed to approximate the context space of the output variable, and each node in the context layer corresponds to an individual local model. The outputs of the context layer are specified as a combination of weights, formed as a linear function, and the outputs of the hidden layer. The weights are updated using the least square estimation (LSE)-based method. (iii) At the output layer, the outputs of the context layer are decoded to produce the corresponding numeric output. At this time, the weighted average is used and the weights are also adjusted with the use of the LSE scheme. From the viewpoint of performance improvement, the proposed design methodologies are discussed and evaluated experimentally with the aid of benchmark machine learning datasets. Through the experiments, it is shown that the generalization abilities of the proposed DFCCNNs are better than those of the conventional FCNNs reported in the literature. Copyright © 2018 Elsevier Ltd. All rights reserved.
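
    As a compact illustration of feature (i) only, the sketch below uses plain NumPy to compute FCM memberships of the input space, treats them as hidden-layer activations, and fits the layer above them by least-squares estimation. The fuzzifier m = 2, the cluster count, and the toy data are assumptions; the output-space clustering and context layer of the full DFCCNN are omitted.

    ```python
    # Minimal sketch of an FCM-driven hidden layer followed by an LSE-fitted
    # linear layer (first stage only; the context layer is not implemented).
    import numpy as np

    rng = np.random.default_rng(3)

    def memberships(X, centers, m=2.0):
        """FCM partition matrix: u[i, k] = membership of sample i in cluster k."""
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        p = 2.0 / (m - 1.0)
        return 1.0 / (d ** p * np.sum(d ** (-p), axis=1, keepdims=True))

    def fcm(X, c, m=2.0, iters=100):
        """Plain fuzzy c-means; returns cluster centers and the partition matrix."""
        U = rng.dirichlet(np.ones(c), size=X.shape[0])    # random initial partition
        for _ in range(iters):
            W = U ** m
            centers = (W.T @ X) / W.sum(axis=0)[:, None]  # weighted cluster centers
            U = memberships(X, centers, m)
        return centers, U

    # Toy regression data.
    X = rng.uniform(-2, 2, size=(300, 2))
    y = np.sin(X[:, 0]) + 0.5 * X[:, 1] + 0.05 * rng.normal(size=300)

    centers, U = fcm(X, c=6)                              # hidden layer from input-space FCM
    H = np.column_stack([U, np.ones(len(X))])             # activations + bias term
    w, *_ = np.linalg.lstsq(H, y, rcond=None)             # LSE fit of the linear layer

    H_eval = np.column_stack([memberships(X, centers), np.ones(len(X))])
    print("training RMSE:", np.sqrt(np.mean((H_eval @ w - y) ** 2)))
    ```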

  1. TUTORIAL: Validating biorobotic models

    NASA Astrophysics Data System (ADS)

    Webb, Barbara

    2006-09-01

    Some issues in neuroscience can be addressed by building robot models of biological sensorimotor systems. What we can conclude from building models or simulations, however, is determined by a number of factors in addition to the central hypothesis we intend to test. These include the way in which the hypothesis is represented and implemented in simulation, how the simulation output is interpreted, how it is compared to the behaviour of the biological system, and the conditions under which it is tested. These issues will be illustrated by discussing a series of robot models of cricket phonotaxis behaviour.

  2. System, method and apparatus for conducting a keyterm search

    NASA Technical Reports Server (NTRS)

    McGreevy, Michael W. (Inventor)

    2004-01-01

    A keyterm search is a method of searching a database for subsets of the database that are relevant to an input query. First, a number of relational models of subsets of a database are provided. A query is then input. The query can include one or more keyterms. Next, a gleaning model of the query is created. The gleaning model of the query is then compared to each one of the relational models of subsets of the database. The identifiers of the relevant subsets are then output.

  3. System, method and apparatus for conducting a phrase search

    NASA Technical Reports Server (NTRS)

    McGreevy, Michael W. (Inventor)

    2004-01-01

    A phrase search is a method of searching a database for subsets of the database that are relevant to an input query. First, a number of relational models of subsets of a database are provided. A query is then input. The query can include one or more sequences of terms. Next, a relational model of the query is created. The relational model of the query is then compared to each one of the relational models of subsets of the database. The identifiers of the relevant subsets are then output.

  4. System level analysis and control of manufacturing process variation

    DOEpatents

    Hamada, Michael S.; Martz, Harry F.; Eleswarpu, Jay K.; Preissler, Michael J.

    2005-05-31

    A computer-implemented method is provided for determining the variability of a manufacturing system having a plurality of subsystems. Each subsystem of the plurality of subsystems is characterized by signal factors, noise factors, control factors, and an output response, all having mean and variance values. Response models are then fitted to each subsystem to determine the unknown coefficients that characterize the relationship between the signal factors, noise factors, and control factors and the corresponding output response mean and variance values. The response models for each subsystem are coupled to model the output of the manufacturing system as a whole. The coefficients of the fitted response models are randomly varied to propagate variances through the plurality of subsystems, and values of signal factors and control factors are found that optimize the output of the manufacturing system to meet a specified criterion.
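
    The coefficient-randomization step can be illustrated with a minimal Monte Carlo sketch. The two linear response models, their coefficient covariances, and the operating point below are invented for illustration, not the patent's actual subsystem models.

    ```python
    # Sketch: propagate fitted-coefficient uncertainty through two coupled
    # subsystem response models by randomly varying the coefficients.
    import numpy as np

    rng = np.random.default_rng(4)

    # Subsystem 1: y1 = b0 + b1*signal + b2*control
    beta1_hat = np.array([2.0, 0.8, -0.3])
    beta1_cov = np.diag([0.05, 0.01, 0.02]) ** 2       # coefficient uncertainty from the fit

    # Subsystem 2 takes subsystem 1's output as its input: y2 = g0 + g1*y1
    beta2_hat = np.array([1.0, 1.5])
    beta2_cov = np.diag([0.04, 0.03]) ** 2

    signal, control = 1.2, 0.5                         # candidate operating point

    draws = []
    for _ in range(20_000):
        b1 = rng.multivariate_normal(beta1_hat, beta1_cov)   # random coefficient draw
        b2 = rng.multivariate_normal(beta2_hat, beta2_cov)
        y1 = b1 @ np.array([1.0, signal, control])
        y2 = b2 @ np.array([1.0, y1])
        draws.append(y2)

    draws = np.array(draws)
    print(f"system output: mean = {draws.mean():.3f}, variance = {draws.var():.4f}")
    ```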

  5. Real-time implementation of biofidelic SA1 model for tactile feedback.

    PubMed

    Russell, A F; Armiger, R S; Vogelstein, R J; Bensmaia, S J; Etienne-Cummings, R

    2009-01-01

    In order for the functionality of an upper-limb prosthesis to approach that of a real limb, it must be able to convey sensory feedback to the limb user accurately and intuitively. This paper presents results of the real-time implementation of a 'biofidelic' model that describes mechanotransduction in Slowly Adapting Type 1 (SA1) afferent fibers. The model accurately predicts the timing of action potentials for arbitrary force or displacement stimuli, and its output can be used as stimulation times for peripheral nerve stimulation by a neuroprosthetic device. The model performance was verified by comparing the predicted action potential (or spike) outputs against measured spike outputs for different vibratory stimuli. Furthermore, experiments were conducted to show that, like real SA1 fibers, the model's spike rate varies according to input pressure and that a periodic 'tapping' stimulus evokes periodic spike outputs.

  6. An integrated prediction and optimization model of biogas production system at a wastewater treatment facility.

    PubMed

    Akbaş, Halil; Bilgen, Bilge; Turhan, Aykut Melih

    2015-11-01

    This study proposes an integrated prediction and optimization model using multi-layer perceptron neural network and particle swarm optimization techniques. Three different objective functions are formulated. The first one is the maximization of methane percentage with a single output. The second one is the maximization of biogas production with a single output. The last one is the maximization of biogas quality and biogas production with two outputs. Methane percentage, carbon dioxide percentage, and other contents' percentage are used as the biogas quality criteria. Based on the formulated models and data from a wastewater treatment facility, optimal values of input variables and their corresponding maximum output values are found for each model. It is expected that the application of the integrated prediction and optimization models increases the biogas production and biogas quality, and contributes to the quantity of electricity produced at the wastewater treatment facility. Copyright © 2015 Elsevier Ltd. All rights reserved.
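
    The predict-then-optimize pattern can be sketched as follows: a neural-network surrogate is trained on plant data, then a particle swarm searches the input space for settings that maximize the surrogate's prediction. The data, network size, and swarm parameters below are synthetic assumptions, not the facility model from the study.

    ```python
    # Sketch of an MLP surrogate combined with a minimal particle swarm optimizer.
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(5)

    # Synthetic "plant" data: 3 operating inputs -> biogas yield.
    X = rng.uniform(0.0, 1.0, size=(500, 3))
    y = 1.0 - ((X - [0.6, 0.3, 0.8]) ** 2).sum(axis=1) + 0.02 * rng.normal(size=500)

    surrogate = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=3000, random_state=0)
    surrogate.fit(X, y)

    # Minimal particle swarm maximizing the surrogate's prediction in [0, 1]^3.
    n_particles, dims, iters = 30, 3, 100
    pos = rng.uniform(0.0, 1.0, size=(n_particles, dims))
    vel = np.zeros_like(pos)
    pbest, pbest_val = pos.copy(), surrogate.predict(pos)
    gbest = pbest[np.argmax(pbest_val)]

    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dims))
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, 0.0, 1.0)           # keep particles inside the bounds
        vals = surrogate.predict(pos)
        improved = vals > pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
        gbest = pbest[np.argmax(pbest_val)]

    print("best inputs:", np.round(gbest, 3), "predicted output:", round(pbest_val.max(), 3))
    ```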

  7. Hemodynamic response to exercise and head-up tilt of patients implanted with a rotary blood pump: a computational modeling study.

    PubMed

    Lim, Einly; Salamonsen, Robert Francis; Mansouri, Mahdi; Gaddum, Nicholas; Mason, David Glen; Timms, Daniel L; Stevens, Michael Charles; Fraser, John; Akmeliawati, Rini; Lovell, Nigel Hamilton

    2015-02-01

    The present study investigates the response of implantable rotary blood pump (IRBP)-assisted patients to exercise and head-up tilt (HUT), as well as the effect of alterations in the model parameter values on this response, using validated numerical models. Furthermore, we comparatively evaluate the performance of a number of previously proposed physiologically responsive controllers, including constant speed, constant flow pulsatility index (PI), constant average pressure difference between the aorta and the left atrium, constant average differential pump pressure, constant ratio between mean pump flow and pump flow pulsatility (ratioPI, or linear Starling-like control), as well as constant mean left atrial pressure (Pla) control, with regard to their ability to increase cardiac output during exercise while maintaining circulatory stability upon HUT. Although native cardiac output increases automatically during exercise, increasing pump speed was able to further improve total cardiac output and reduce elevated filling pressures. At the same time, reduced venous return associated with upright posture was not shown to induce left ventricular (LV) suction. Although Pla control outperformed other control modes in its ability to increase cardiac output during exercise, it caused a fall in the mean arterial pressure upon HUT, which may cause postural hypotension or patient discomfort. To the contrary, maintaining constant average pressure difference between the aorta and the left atrium demonstrated superior performance in both exercise and HUT scenarios. Due to their strong dependence on the pump operating point, PI and ratioPI control performed poorly during exercise and HUT. Our simulation results also highlighted the importance of the baroreflex mechanism in determining the response of the IRBP-assisted patients to exercise and postural changes, where desensitized reflex response attenuated the percentage increase in cardiac output during exercise and substantially reduced the arterial pressure upon HUT. Copyright © 2014 International Center for Artificial Organs and Transplantation and Wiley Periodicals, Inc.

  8. User's manual for rocket combustor interactive design (ROCCID) and analysis computer program. Volume 2: Appendixes A-K

    NASA Technical Reports Server (NTRS)

    Muss, J. A.; Nguyen, T. V.; Johnson, C. W.

    1991-01-01

    The appendices A-K to the user's manual for the rocket combustor interactive design (ROCCID) computer program are presented. This includes installation instructions, flow charts, subroutine model documentation, and sample output files. The ROCCID program, written in Fortran 77, provides a standardized methodology using state-of-the-art codes and procedures for the analysis of a liquid rocket engine combustor's steady state combustion performance and combustion stability. The ROCCID is currently capable of analyzing mixed element injector patterns containing impinging like-doublet or unlike-triplet, showerhead, shear coaxial and swirl coaxial elements as long as only one element type exists in each injector core, baffle, or barrier zone. Real propellant properties of oxygen, hydrogen, methane, propane, and RP-1 are included in ROCCID. The properties of other propellants can be easily added. The analysis models in ROCCID can account for the influences of acoustic cavities, Helmholtz resonators, and radial thrust chamber baffles on combustion stability. ROCCID also contains the logic to interactively create a combustor design which meets input performance and stability goals. A preliminary design results from the application of historical correlations to the input design requirements. The steady state performance and combustion stability of this design are evaluated using the analysis models, and ROCCID guides the user as to the design changes required to satisfy the user's performance and stability goals, including the design of stability aids. Output from ROCCID includes a formatted input file for the standardized JANNAF engine performance prediction procedure.

  9. A reporting protocol for thermochronologic modeling illustrated with data from the Grand Canyon

    NASA Astrophysics Data System (ADS)

    Flowers, Rebecca M.; Farley, Kenneth A.; Ketcham, Richard A.

    2015-12-01

    Apatite (U-Th)/He and fission-track dates, as well as 4He/3He and fission-track length data, provide rich thermal history information. However, numerous choices and assumptions are required on the long road from raw data and observations to potentially complex geologic interpretations. This paper outlines a conceptual framework for this path, with the aim of promoting a broader understanding of how thermochronologic conclusions are derived. The tiered structure consists of thermal history model inputs at Level 1, thermal history model outputs at Level 2, and geologic interpretations at Level 3. Because inverse thermal history modeling is at the heart of converting thermochronologic data to interpretation, for others to evaluate and reproduce conclusions derived from thermochronologic results it is necessary to publish all data required for modeling, report all model inputs, and clearly and completely depict model outputs. Here we suggest a generalized template for a model input table with which to arrange, report and explain the choice of inputs to thermal history models. Model inputs include the thermochronologic data, additional geologic information, and system- and model-specific parameters. As an example we show how the origin of discrepant thermochronologic interpretations in the Grand Canyon can be better understood by using this disciplined approach.

  10. Estimating economic impacts of timber-based industry expansion in northeastern Minnesota.

    Treesearch

    Daniel L. Erkkila; Dietmar W. Rose; Allen L. Lundgren

    1982-01-01

    Analysis of current and projected timber supplies in northeastern Minnesota indicates that expanded timber-based industrial activity could be supported. The impacts of a hypothetical industrial development scenario, including construction of waferboard plants and a wood-fueled power plant, were estimated using an input-output model. Development had noticeable impacts...
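
    Impact estimates of this kind rest on the standard Leontief input-output relation, total output change x = (I - A)^-1 Δd. The 3-sector technical-coefficient matrix and demand shock below are invented for illustration; they are not values from this study.

    ```python
    # Toy Leontief input-output calculation of direct-plus-indirect impacts.
    import numpy as np

    # A[i, j] = dollars of sector i's output needed per dollar of sector j's output
    A = np.array([
        [0.10, 0.25, 0.05],   # forestry
        [0.15, 0.10, 0.20],   # wood products (e.g., waferboard)
        [0.05, 0.10, 0.05],   # energy
    ])

    delta_demand = np.array([0.0, 50.0, 10.0])   # $M of new final demand (new plants)

    leontief_inverse = np.linalg.inv(np.eye(3) - A)
    delta_output = leontief_inverse @ delta_demand

    for name, dx in zip(["forestry", "wood products", "energy"], delta_output):
        print(f"{name}: +${dx:.1f}M total output (direct + indirect)")
    ```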

  11. TES L2 Lite Standard Products

    Atmospheric Science Data Center

    2015-07-21

    The TES Lite products are intended to simplify TES data usage, including data/model and data/data comparisons. This product can be used for science analysis ... PGE corrected a date range issue in the originally delivered standard output. An updated set of TES L2 Lite standard products was ...

  12. Center for the Study of Rhythmic Processes

    DTIC Science & Technology

    1990-12-01

    Keywords: mathematical modeling; neuromodulators; sensory feedback. The Center for ... activation and movement, and the ability of the network to regenerate. Work on the STG included results on neuromodulators that change the output of the

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bush, Brian W; Brunhart-Lupo, Nicholas J; Gruchalla, Kenny M

    This brochure describes a system dynamics simulation (SD) framework that supports an end-to-end analysis workflow that is optimized for deployment on ESIF facilities (Peregrine and the Insight Center). It includes (i) parallel and distributed simulation of SD models, (ii) real-time 3D visualization of running simulations, and (iii) comprehensive database-oriented persistence of simulation metadata, inputs, and outputs.

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bush, Brian W; Brunhart-Lupo, Nicholas J; Gruchalla, Kenny M

    This presentation describes a system dynamics simulation (SD) framework that supports an end-to-end analysis workflow that is optimized for deployment on ESIF facilities (Peregrine and the Insight Center). It includes (i) parallel and distributed simulation of SD models, (ii) real-time 3D visualization of running simulations, and (iii) comprehensive database-oriented persistence of simulation metadata, inputs, and outputs.

  15. A computational model for the prediction of jet entrainment in the vicinity of nozzle boattails (The BOAT code)

    NASA Technical Reports Server (NTRS)

    Dash, S. M.; Pergament, H. S.

    1978-01-01

    The basic code structure is discussed, including the overall program flow and a brief description of all subroutines. Instructions on the preparation of input data, definitions of key FORTRAN variables, sample input and output, and a complete listing of the code are presented.

  16. Method for analyzing the chemical composition of liquid effluent from a direct contact condenser

    DOEpatents

    Bharathan, Desikan; Parent, Yves; Hassani, A. Vahab

    2001-01-01

    A computational modeling method for predicting the chemical, physical, and thermodynamic performance of a condenser using calculations based on equations of physics for heat, momentum and mass transfer and equations of equilibrium thermodynamics to determine steady state profiles of parameters throughout the condenser. The method includes providing a set of input values relating to a condenser including liquid loading, vapor loading, and geometric characteristics of the contact medium in the condenser. The geometric and packing characteristics of the contact medium include the dimensions and orientation of a channel in the contact medium. The method further includes simulating performance of the condenser using the set of input values to determine a related set of output values such as outlet liquid temperature, outlet flow rates, pressures, and the concentration(s) of one or more dissolved noncondensable gas species in the outlet liquid. The method may also include iteratively performing the above computation steps using a plurality of sets of input values and then determining whether each of the resulting output values and performance profiles satisfies acceptance criteria.

  17. Washington Play Fairway Analysis Geothermal GIS Data

    DOE Data Explorer

    Corina Forson

    2015-12-15

    This file contains file geodatabases of the Mount St. Helens seismic zone (MSHSZ), Wind River valley (WRV) and Mount Baker (MB) geothermal play-fairway sites in the Washington Cascades. The geodatabases include input data (feature classes) and output rasters (generated from modeling and interpolation) from the geothermal play-fairway in Washington State, USA. These data were gathered and modeled to provide an estimate of the heat and permeability potential within the play-fairways based on mapped volcanic vents, hot springs and fumaroles, geothermometry, intrusive rocks, temperature-gradient wells, slip tendency, dilation tendency, displacement, displacement gradient, maximum Coulomb shear stress, sigma 3, maximum shear strain rate, and dilational strain rate at 200 m and 3 km depth. In addition, this file contains layer files for each of the output rasters. For details on the areas of interest, please see the 'WA_State_Play_Fairway_Phase_1_Technical_Report' in the download package. This submission also includes a file with the geothermal favorability of the Washington Cascade Range based on an earlier statewide assessment. Additionally, within this file there are the maximum shear and dilational strain rate rasters for all of Washington State.

  18. ORACLS: A system for linear-quadratic-Gaussian control law design

    NASA Technical Reports Server (NTRS)

    Armstrong, E. S.

    1978-01-01

    A modern control theory design package (ORACLS) for constructing controllers and optimal filters for systems modeled by linear time-invariant differential or difference equations is described. Numerical linear-algebra procedures are used to implement the linear-quadratic-Gaussian (LQG) methodology of modern control theory. Algorithms are included for computing eigensystems of real matrices, the relative stability of a matrix, factored forms for nonnegative definite matrices, the solutions and least squares approximations to the solutions of certain linear matrix algebraic equations, the controllability properties of a linear time-invariant system, and the steady state covariance matrix of an open-loop stable system forced by white noise. Subroutines are provided for solving both the continuous and discrete optimal linear regulator problems with noise free measurements and the sampled-data optimal linear regulator problem. For measurement noise, duality theory and the optimal regulator algorithms are used to solve the continuous and discrete Kalman-Bucy filter problems. Subroutines are also included which give control laws causing the output of a system to track the output of a prescribed model.
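
    A minimal example of the discrete-time optimal regulator problem that ORACLS solves in Fortran can be written with SciPy's Riccati solver. The system and weighting matrices below are arbitrary illustrative values, not a system from the report.

    ```python
    # Minimal discrete-time LQR solve of the kind provided by an LQG design package.
    import numpy as np
    from scipy.linalg import solve_discrete_are

    A = np.array([[1.0, 0.1],
                  [0.0, 0.95]])       # discrete state matrix
    B = np.array([[0.0],
                  [0.1]])             # input matrix
    Q = np.diag([1.0, 0.1])           # state weighting
    R = np.array([[0.01]])            # control weighting

    P = solve_discrete_are(A, B, Q, R)                     # steady-state Riccati solution
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)      # optimal feedback gain
    print("LQR gain K =", K)

    # Closed-loop check: eigenvalues of (A - B K) should lie inside the unit circle.
    print("closed-loop eigenvalues:", np.linalg.eigvals(A - B @ K))
    ```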

  19. A comparison of two multi-variable integrator windup protection schemes

    NASA Technical Reports Server (NTRS)

    Mattern, Duane

    1993-01-01

    Two methods are examined for limit and integrator wind-up protection for multi-input, multi-output linear controllers subject to actuator constraints. The methods begin with an existing linear controller that satisfies the specifications for the nominal, small perturbation, linear model of the plant. The controllers are formulated to include an additional contribution to the state derivative calculations. The first method to be examined is the multi-variable version of the single-input, single-output, high gain, Conventional Anti-Windup (CAW) scheme. Except for the actuator limits, the CAW scheme is linear. The second scheme to be examined, denoted the Modified Anti-Windup (MAW) scheme, uses a scalar to modify the magnitude of the controller output vector while maintaining the vector direction. The calculation of the scalar modifier is a nonlinear function of the controller outputs and the actuator limits. In both cases the constrained actuator is tracked. These two integrator windup protection methods are demonstrated on a turbofan engine control system with five measurements, four control variables, and four actuators. The closed-loop responses of the two schemes are compared and contrasted during limit operation. The issue of maintaining the direction of the controller output vector using the Modified Anti-Windup scheme is discussed and the advantages and disadvantages of both of the IWP methods are presented.
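
    The direction-preserving limit logic described for the Modified Anti-Windup scheme can be sketched as a single scalar that shrinks the whole command vector until every actuator command fits its limit. The commands and limits below are illustrative, and the sketch assumes each actuator's limits bracket zero.

    ```python
    # Sketch of scaling a controller output vector by one scalar so that no
    # actuator limit is exceeded while the vector's direction is preserved.
    import numpy as np

    def direction_preserving_limit(u, u_min, u_max):
        """Scale the command vector u toward zero until it fits inside the limits.

        Assumes u_min < 0 < u_max for every actuator.
        """
        scale = 1.0
        for ui, lo, hi in zip(u, u_min, u_max):
            if ui > hi:
                scale = min(scale, hi / ui)
            elif ui < lo:
                scale = min(scale, lo / ui)
        return scale * u, scale

    u_cmd = np.array([0.8, -2.4, 1.5, 0.2])          # raw controller outputs
    u_min = np.array([-1.0, -1.0, -1.0, -1.0])
    u_max = np.array([1.0, 1.0, 1.0, 1.0])

    u_limited, k = direction_preserving_limit(u_cmd, u_min, u_max)
    print("scale factor:", round(k, 3), "limited command:", u_limited)
    # In an anti-windup implementation the limited command (or the scalar) is
    # also fed back into the controller state derivatives so the integrators
    # do not wind up while an actuator is saturated.
    ```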

  20. TEA CO 2 Laser Simulator: A software tool to predict the output pulse characteristics of TEA CO 2 laser

    NASA Astrophysics Data System (ADS)

    Abdul Ghani, B.

    2005-09-01

    "TEA CO 2 Laser Simulator" has been designed to simulate the dynamic emission processes of the TEA CO 2 laser based on the six-temperature model. The program predicts the behavior of the laser output pulse (power, energy, pulse duration, delay time, FWHM, etc.) depending on the physical and geometrical input parameters (pressure ratio of gas mixture, reflecting area of the output mirror, media length, losses, filling and decay factors, etc.). Program summaryTitle of program: TEA_CO2 Catalogue identifier: ADVW Program summary URL:http://cpc.cs.qub.ac.uk/summaries/ADVW Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland Computer: P.IV DELL PC Setup: Atomic Energy Commission of Syria, Scientific Services Department, Mathematics and Informatics Division Operating system: MS-Windows 9x, 2000, XP Programming language: Delphi 6.0 No. of lines in distributed program, including test data, etc.: 47 315 No. of bytes in distributed program, including test data, etc.:7 681 109 Distribution format:tar.gz Classification: 15 Laser Physics Nature of the physical problem: "TEA CO 2 Laser Simulator" is a program that predicts the behavior of the laser output pulse by studying the effect of the physical and geometrical input parameters on the characteristics of the output laser pulse. The laser active medium consists of a CO 2-N 2-He gas mixture. Method of solution: Six-temperature model, for the dynamics emission of TEA CO 2 laser, has been adapted in order to predict the parameters of laser output pulses. A simulation of the laser electrical pumping was carried out using two approaches; empirical function equation (8) and differential equation (9). Typical running time: The program's running time mainly depends on both integration interval and step; for a 4 μs period of time and 0.001 μs integration step (defaults values used in the program), the running time will be about 4 seconds. Restrictions on the complexity: Using a very small integration step might leads to stop the program run due to the huge number of calculating points and to a small paging file size of the MS-Windows virtual memory. In such case, it is recommended to enlarge the paging file size to the appropriate size, or to use a bigger value of integration step.

  1. Dynamic output feedback control of a flexible air-breathing hypersonic vehicle via T-S fuzzy approach

    NASA Astrophysics Data System (ADS)

    Hu, Xiaoxiang; Wu, Ligang; Hu, Changhua; Wang, Zhaoqiang; Gao, Huijun

    2014-08-01

    By utilising the Takagi-Sugeno (T-S) fuzzy set approach, this paper addresses the robust H∞ dynamic output feedback control for the non-linear longitudinal model of flexible air-breathing hypersonic vehicles (FAHVs). The flight control of FAHVs is highly challenging due to the unique dynamic characteristics, the intricate couplings between the engine and flight dynamics, and external disturbance. Because of the dynamics' enormous complexity, currently only the longitudinal dynamics models of FAHVs have been used for controller design. In this work, the T-S fuzzy modelling technique is utilised to approximate the non-linear dynamics of FAHVs, and a fuzzy model is then developed for the output tracking problem of FAHVs. The fuzzy model contains parameter uncertainties and disturbance, and so can approximate the non-linear dynamics of FAHVs more accurately. The flexible modes of FAHVs are difficult to measure because of the complex dynamics and the strong couplings; thus a full-order dynamic output feedback controller is designed for the fuzzy model. A robust H∞ controller is designed for the obtained closed-loop system. By utilising the Lyapunov functional approach, sufficient solvability conditions for such controllers are established in terms of linear matrix inequalities. Finally, the effectiveness of the proposed T-S fuzzy dynamic output feedback control method is demonstrated by numerical simulations.

  2. Statistical Downscaling and Bias Correction of Climate Model Outputs for Climate Change Impact Assessment in the U.S. Northeast

    NASA Technical Reports Server (NTRS)

    Ahmed, Kazi Farzan; Wang, Guiling; Silander, John; Wilson, Adam M.; Allen, Jenica M.; Horton, Radley; Anyah, Richard

    2013-01-01

    Statistical downscaling can be used to efficiently downscale a large number of General Circulation Model (GCM) outputs to a fine temporal and spatial scale. To facilitate regional impact assessments, this study statistically downscales (to 1/8deg spatial resolution) and corrects the bias of daily maximum and minimum temperature and daily precipitation data from six GCMs and four Regional Climate Models (RCMs) for the northeast United States (US) using the Statistical Downscaling and Bias Correction (SDBC) approach. Based on these downscaled data from multiple models, five extreme indices were analyzed for the future climate to quantify future changes of climate extremes. For a subset of models and indices, results based on raw and bias corrected model outputs for the present-day climate were compared with observations, which demonstrated that bias correction is important not only for GCM outputs, but also for RCM outputs. For future climate, bias correction led to a higher level of agreements among the models in predicting the magnitude and capturing the spatial pattern of the extreme climate indices. We found that the incorporation of dynamical downscaling as an intermediate step does not lead to considerable differences in the results of statistical downscaling for the study domain.
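
    One common form of the bias-correction step is empirical quantile mapping: each model value is mapped to the observed value at the same quantile of the training-period distributions. The sketch below illustrates that generic idea with synthetic data; it is not necessarily the exact SDBC algorithm used in the study.

    ```python
    # Generic empirical quantile-mapping bias correction (illustrative only).
    import numpy as np

    rng = np.random.default_rng(6)

    obs_hist = rng.gamma(shape=2.0, scale=3.0, size=3000)        # observed daily precip
    mod_hist = rng.gamma(shape=2.0, scale=4.0, size=3000)        # biased model, same period
    mod_future = rng.gamma(shape=2.0, scale=4.5, size=3000)      # model projection

    quantiles = np.linspace(0.01, 0.99, 99)
    mod_q = np.quantile(mod_hist, quantiles)
    obs_q = np.quantile(obs_hist, quantiles)

    # Bias-correct the projection by interpolating along the quantile mapping.
    mod_future_bc = np.interp(mod_future, mod_q, obs_q)

    print("raw model mean:      ", round(mod_future.mean(), 2))
    print("bias-corrected mean: ", round(mod_future_bc.mean(), 2))
    print("observed hist. mean: ", round(obs_hist.mean(), 2))
    ```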

  3. Replacing Fortran Namelists with JSON

    NASA Astrophysics Data System (ADS)

    Robinson, T. E., Jr.

    2017-12-01

    Maintaining a log of input parameters for a climate model is very important to understanding potential causes for answer changes during the development stages. Additionally, since modern Fortran is now interoperable with C, a more modern approach to software infrastructure to include code written in C is necessary. Merging these two separate facets of climate modeling requires a quality control for monitoring changes to input parameters and model defaults that can work with both Fortran and C. JSON will soon replace namelists as the preferred key/value pair input in the GFDL model. By adding a JSON parser written in C into the model, the input can be used by all functions and subroutines in the model, errors can be handled by the model instead of by the internal namelist parser, and the values can be output into a single file that is easily parsable by readily available tools. Input JSON files can handle all of the functionality of a namelist while being portable between C and Fortran. Fortran wrappers using unlimited polymorphism are crucial to allow for simple and compact code which avoids the need for many subroutines contained in an interface. Errors can be handled with more detail by providing information about location of syntax errors or typos. The output JSON provides a ground truth for values that the model actually uses by providing not only the values loaded through the input JSON, but also any default values that were not included. This kind of quality control on model input is crucial for maintaining reproducibility and understanding any answer changes resulting from changes in the input.
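
    The pattern described above (read a JSON input, merge it over the model's defaults, and echo the values actually used) is language-agnostic; the sketch below illustrates it in Python with invented key names. The GFDL implementation does this from Fortran via a C parser, not Python.

    ```python
    # Illustration of the merge-defaults-and-echo pattern for JSON model input.
    import json

    DEFAULTS = {
        "dynamics": {"dt_seconds": 1200, "advection_scheme": "semi_lagrangian"},
        "physics": {"use_new_microphysics": False, "cloud_fraction_min": 0.0},
    }

    def merge(defaults, overrides):
        """Recursively overlay user-supplied values on the defaults."""
        out = dict(defaults)
        for key, value in overrides.items():
            if isinstance(value, dict) and isinstance(out.get(key), dict):
                out[key] = merge(out[key], value)
            else:
                out[key] = value
        return out

    # User-supplied input (the namelist replacement); inline here for the sketch.
    user_input = json.loads(
        '{"dynamics": {"dt_seconds": 600}, "physics": {"use_new_microphysics": true}}'
    )

    config = merge(DEFAULTS, user_input)

    # Echo the ground truth of values actually used, defaults included.
    print(json.dumps(config, indent=2))
    ```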

  4. User assessment of smoke-dispersion models for wildland biomass burning.

    Treesearch

    Steve Breyfogle; Sue A. Ferguson

    1996-01-01

    Several smoke-dispersion models, which currently are available for modeling smoke from biomass burns, were evaluated for ease of use, availability of input data, and output data format. The input and output components of all models are listed, and differences in model physics are discussed. Each model was installed and run on a personal computer with a simple-case...

  5. Using multi-criteria analysis of simulation models to understand complex biological systems

    Treesearch

    Maureen C. Kennedy; E. David Ford

    2011-01-01

    Scientists frequently use computer-simulation models to help solve complex biological problems. Typically, such models are highly integrated, they produce multiple outputs, and standard methods of model analysis are ill suited for evaluating them. We show how multi-criteria optimization with Pareto optimality allows for model outputs to be compared to multiple system...
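
    The core of multi-criteria comparison with Pareto optimality is non-dominated filtering: keep the parameter sets whose vector of criterion errors is not worse in every criterion than some other set's. The error table below is invented for illustration.

    ```python
    # Minimal Pareto-optimal (non-dominated) filtering for multi-criteria assessment.
    import numpy as np

    rng = np.random.default_rng(7)
    errors = rng.random((200, 3))   # 200 parameter sets x 3 output criteria (lower = better)

    def pareto_front(costs):
        """Boolean mask of non-dominated rows (minimization in every column)."""
        keep = np.ones(costs.shape[0], dtype=bool)
        for i in range(costs.shape[0]):
            if keep[i]:
                dominated = (np.all(costs >= costs[i], axis=1) &
                             np.any(costs > costs[i], axis=1))
                keep &= ~dominated
        return keep

    front = errors[pareto_front(errors)]
    print(f"{front.shape[0]} of {errors.shape[0]} parameter sets are Pareto-optimal")
    ```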

  6. Artificial neural network based modelling approach for municipal solid waste gasification in a fluidized bed reactor.

    PubMed

    Pandey, Daya Shankar; Das, Saptarshi; Pan, Indranil; Leahy, James J; Kwapinski, Witold

    2016-12-01

    In this paper, multi-layer feed-forward neural networks are used to predict the lower heating value of gas (LHV), the lower heating value of gasification products including tars and entrained char (LHVp), and syngas yield during gasification of municipal solid waste (MSW) in a fluidized bed reactor. These artificial neural networks (ANNs) with different architectures are trained using the Levenberg-Marquardt (LM) back-propagation algorithm, and cross-validation is also performed to ensure that the results generalise to other unseen datasets. A rigorous study is carried out on optimally choosing the number of hidden layers, number of neurons in the hidden layer and activation function in a network using multiple Monte Carlo runs. Nine input and three output parameters are used to train and test various neural network architectures in both multiple output and single output prediction paradigms using the available experimental datasets. The model selection procedure is carried out to ascertain the best network architecture in terms of predictive accuracy. The simulation results show that the ANN based methodology is a viable alternative which can be used to predict the performance of a fluidized bed gasifier. Copyright © 2016 Elsevier Ltd. All rights reserved.

  7. Grid-coordinate generation program

    USGS Publications Warehouse

    Cosner, Oliver J.; Horwich, Esther

    1974-01-01

    This program description of the grid-coordinate generation program is written for computer users who are familiar with digital aquifer models. The program computes the coordinates for a variable grid used in the 'Pinder Model' (a finite-difference aquifer simulator) for input to the CalComp GPCP (general purpose contouring program). The program adjusts the y-value by a user-supplied constant in order to transpose the origin of the model grid from the upper left-hand corner to the lower left-hand corner of the grid. The user has the options of (1) choosing the boundaries of the plot; (2) adjusting the z-values (altitudes) by a constant; (3) deleting superfluous z-values; and (4) subtracting the simulated surfaces from each other to obtain the decline. Output of this program includes the fixed format CNTL data cards and the other data cards required for input to GPCP. The output from GPCP then is used to produce a potentiometric map or a decline map by means of the CalComp plotter.

  8. Control of wind turbine generators connected to power systems

    NASA Technical Reports Server (NTRS)

    Hwang, H. H.; Mozeico, H. V.; Gilbert, L. J.

    1978-01-01

    A unique simulation model based on the Mod-0 wind turbine is developed for simulating both speed and power control. An analytical representation for a wind turbine that employs blade pitch angle feedback control is presented, and a mathematical model is formulated. With Mod-0 serving as a practical case study, results of a computer simulation of the model as applied to the problems of synchronization and dynamic stability are provided. It is shown that the speed and output of a wind turbine can be satisfactorily controlled within reasonable limits by employing the existing blade pitch control system under specified conditions. For power control, an additional excitation control is required so that the terminal voltage, output power factor, and armature current can be held within narrow limits. As a result, the variation of torque angle is limited even if speed control is not implemented simultaneously with power control. Design features of the ERDA/NASA 100-kW Mod-0 wind turbine are included.

  9. Integrating predictive information into an agro-economic model to guide agricultural management

    NASA Astrophysics Data System (ADS)

    Zhang, Y.; Block, P.

    2016-12-01

    Skillful season-ahead climate predictions linked with responsive agricultural planning and management have the potential to reduce losses, if adopted by farmers, particularly for rainfed-dominated agriculture such as in Ethiopia. Precipitation predictions during the growing season in major agricultural regions of Ethiopia are used to generate predicted climate yield factors, which reflect the influence of precipitation amounts on crop yields and serve as inputs into an agro-economic model. The adapted model, originally developed by the International Food Policy Research Institute, produces outputs of economic indices (GDP, poverty rates, etc.) at zonal and national levels. Forecast-based approaches, in which farmers' actions are in response to forecasted conditions, are compared with no-forecast approaches in which farmers follow business as usual practices, expecting "average" climate conditions. The effects of farmer adoption rates, including the potential for reduced uptake due to poor predictions, and increasing forecast lead-time on economic outputs are also explored. Preliminary results indicate superior gains under forecast-based approaches.

  10. Methodologies for optimal resource allocation to the national space program and new space utilizations. Volume 1: Technical description

    NASA Technical Reports Server (NTRS)

    1971-01-01

    The optimal allocation of resources to the national space program over an extended time period requires the solution of a large combinatorial problem in which the program elements are interdependent. The computer model uses an accelerated search technique to solve this problem. The model contains a large number of options selectable by the user to provide flexible input and a broad range of output for use in sensitivity analyses of all entering elements. Examples of these options are budget smoothing under varied appropriation levels, entry of inflation and discount effects, and probabilistic output which provides quantified degrees of certainty that program costs will remain within planned budget. Criteria and related analytic procedures were established for identifying potential new space program directions. Used in combination with the optimal resource allocation model, new space applications can be analyzed in realistic perspective, including the advantage gain from existing space program plant and on-going programs such as the space transportation system.

  11. Effects of range-wide variation in climate and isolation on floral traits and reproductive output of Clarkia pulchella.

    PubMed

    Bontrager, Megan; Angert, Amy L

    2016-01-01

    Plant mating systems and geographic range limits are conceptually linked by shared underlying drivers, including landscape-level heterogeneity in climate and in species' abundance. Studies of how geography and climate interact to affect plant traits that influence mating system and population dynamics can lend insight to ecological and evolutionary processes shaping ranges. Here, we examined how spatiotemporal variation in climate affects reproductive output of a mixed-mating annual, Clarkia pulchella. We also tested the effects of population isolation and climate on mating-system-related floral traits across the range. We measured reproductive output and floral traits on herbarium specimens collected across the range of C. pulchella. We extracted climate data associated with specimens and derived a population isolation metric from a species distribution model. We then examined how predictors of reproductive output and floral traits vary among populations of increasing distance from the range center. Finally, we tested whether reproductive output and floral traits vary with increasing distance from the center of the range. Reproductive output decreased as summer precipitation decreased, and low precipitation may contribute to limiting the southern and western range edges of C. pulchella. High spring and summer temperatures are correlated with low herkogamy, but these climatic factors show contrasting spatial patterns in different quadrants of the range. Limiting factors differ among different parts of the range. Due to the partial decoupling of geography and environment, examining relationships between climate, reproductive output, and mating-system-related floral traits reveals spatial patterns that might be missed when focusing solely on geographic position. © 2016 Botanical Society of America.

  12. Numerical modeling of the hydrodynamics of the Northeastern Corridor Reserve in Puerto Rico

    NASA Astrophysics Data System (ADS)

    Salgado-Domínguez, G.; Canals, M.

    2016-02-01

    To develop an appropriate management plan for the marine section of the Northeast Corridor Reserve (NECR) of Puerto Rico, it is necessary to understand the hydrodynamic connectivity between the different regions within the NECR. The USACE CMS Flow model has been implemented for the NECR using very high resolution telescoping grids, with a special focus on the complex coral reef areas of the La Cordillera Reefs Natural Reserve, established by the Department of Natural and Environmental Resources of Puerto Rico. To ensure correct application of boundary conditions and realistic representation of the tidal elevation within the NECR, water elevation model output data was compared with the Fajardo tide gauge, while the ocean current model output was compared with the depth-integrated observed currents at the CariCOOS Vieques Sound buoy. Comparison of model performance with buoy and tide gauge data has shown good agreement; however, further model tuning is necessary to optimize model performance. Further improvement of our models depends largely on obtaining more accurate boundary conditions as well as better wind forcing. We are currently implementing the USACE Particle Tracking Model (PTM) to characterize particle dispersion within the NECR. In the long-term, full 3D hydrodynamic models including riverine forcing hold the key to a complete understanding of larvae and sediment dispersion within the NECR.

  13. Modeling habitat distribution from organism occurrences and environmental data: Case study using anemonefishes and their sea anemone hosts

    USGS Publications Warehouse

    Guinotte, J.M.; Bartley, J.D.; Iqbal, A.; Fautin, D.G.; Buddemeier, R.W.

    2006-01-01

    We demonstrate the KGSMapper (Kansas Geological Survey Mapper), a straightforward, web-based biogeographic tool that uses environmental conditions of places where members of a taxon are known to occur to find other places containing suitable habitat for them. Using occurrence data for anemonefishes or their host sea anemones, and data for environmental parameters, we generated maps of suitable habitat for the organisms. The fact that the fishes are obligate symbionts of the anemones allowed us to validate the KGSMapper output: we were able to compare the inferred occurrence of the organism to the actual occurrence of its symbiont. Characterizing suitable habitat for these organisms in the Indo-West Pacific, the region where they naturally occur, can guide conservation efforts, field work, etc.; defining suitable habitat for them in the Atlantic and eastern Pacific is relevant to identifying areas vulnerable to biological invasions. We advocate distinguishing between these 2 sorts of model output, terming the former maps of realized habitat and the latter maps of potential habitat. Creation of a niche model requires adding biotic data to the environmental data used for habitat maps: we included data on fish occurrences to infer anemone distribution and vice versa. Altering the selection of environmental variables allowed us to investigate which variables may exert the most influence on organism distribution. Adding variables does not necessarily improve precision of the model output. KGSMapper output distinguishes areas that fall within 1 standard deviation (SD) of the mean environmental variable values for places where members of the taxon occur, within 2 SD, and within the entire range of values; eliminating outliers or data known to be imprecise or inaccurate improved output precision mainly in the 2 SD range and beyond. Thus, KGSMapper is robust in the face of questionable data, offering the user a way to recognize and clean such data. It also functions well with sparse datasets. These features make it useful for biogeographic meta-analyses with the diverse, distributed datasets that are typical for marine organisms lacking direct commercial value. © Inter-Research 2006.
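
    The 1 SD / 2 SD / full-range envelope classification described above is simple to illustrate. The environmental variables, occurrence values, and candidate cells below are synthetic; the sketch only shows the classification logic, not KGSMapper itself.

    ```python
    # Simplified envelope classification of candidate cells against the
    # environmental values observed at known occurrence points.
    import numpy as np

    rng = np.random.default_rng(8)

    # Environmental values (e.g., SST, salinity, depth) at known occurrence points.
    occ_env = rng.normal(loc=[27.0, 34.5, 15.0], scale=[1.0, 0.4, 8.0], size=(120, 3))
    mean, sd = occ_env.mean(axis=0), occ_env.std(axis=0)
    lo, hi = occ_env.min(axis=0), occ_env.max(axis=0)

    # Environmental values at candidate map cells to be classified.
    cells = rng.normal(loc=[26.0, 34.3, 25.0], scale=[2.0, 0.8, 20.0], size=(10_000, 3))

    within_1sd = np.all(np.abs(cells - mean) <= 1 * sd, axis=1)
    within_2sd = np.all(np.abs(cells - mean) <= 2 * sd, axis=1)
    within_range = np.all((cells >= lo) & (cells <= hi), axis=1)

    # First matching class wins: 3 = within 1 SD, 2 = within 2 SD, 1 = within range.
    suitability = np.select([within_1sd, within_2sd, within_range], [3, 2, 1], default=0)
    for label, code in [("within 1 SD", 3), ("within 2 SD", 2), ("within range", 1)]:
        print(f"{label}: {(suitability == code).sum()} cells")
    ```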

  14. Development of an Integrated Hydrologic Modeling System for Rainfall-Runoff Simulation

    NASA Astrophysics Data System (ADS)

    Lu, B.; Piasecki, M.

    2008-12-01

    This paper aims to present the development of an integrated hydrological model which involves functionalities of digital watershed processing, online data retrieval, hydrologic simulation and post-event analysis. The proposed system is intended to work as a back end to the CUAHSI HIS cyberinfrastructure developments. As a first step in developing this system, a physics-based distributed hydrologic model, PIHM (Penn State Integrated Hydrologic Model), is wrapped into the OpenMI (Open Modeling Interface and Environment) environment so as to seamlessly interact with OpenMI-compliant meteorological models. The graphical user interface is being developed from the open GIS application MapWindow, which permits functionality expansion through the addition of plug-ins. Modules required to be set up through the GUI workboard include those for retrieving meteorological data from existing databases or meteorological prediction models, obtaining geospatial data from the output of digital watershed processing, and importing initial and boundary conditions. They are connected to the OpenMI-compliant PIHM to simulate rainfall-runoff processes, and a module automatically displays output after the simulation. Online databases are accessed through the WaterOneFlow web services, and the retrieved data are stored either in an observation database (OD) following the schema of the Observation Data Model (ODM), in the case of time-series support, or in a grid-based storage facility, which may be a format like netCDF or a grid-based database schema. Specific development steps include the creation of a bridge to overcome the interoperability issue between PIHM and the ODM, as well as the embedding of TauDEM (Terrain Analysis Using Digital Elevation Models) into the model. This module is responsible for delineating the watershed and stream network using digital elevation models. Visualizing and editing geospatial data is achieved through the use of MapWinGIS, an ActiveX control developed by the MapWindow team. After the model is applied to a practical watershed, its performance can be tested with the post-event analysis module.

  15. Light adaptation alters inner retinal inhibition to shape OFF retinal pathway signaling

    PubMed Central

    Mazade, Reece E.

    2016-01-01

    The retina adjusts its signaling gain over a wide range of light levels. A functional result of this is increased visual acuity at brighter luminance levels (light adaptation), due to shifts in the excitatory center-inhibitory surround receptive field parameters of ganglion cells that increase their sensitivity to smaller light stimuli. Recent work supports the idea that changes in ganglion cell spatial sensitivity with background luminance are due in part to inner retinal mechanisms, possibly including modulation of inhibition onto bipolar cells. To determine how the receptive fields of OFF cone bipolar cells may contribute to changes in ganglion cell resolution, the spatial extent and magnitude of inhibitory and excitatory inputs were measured from OFF bipolar cells under dark- and light-adapted conditions. There was no change in the OFF bipolar cell excitatory input with light adaptation; however, the spatial distributions of inhibitory inputs, including both glycinergic and GABAergic sources, became significantly narrower, smaller, and more transient. The magnitude and size of the OFF bipolar cell center-surround receptive fields, as well as light-adapted changes in resting membrane potential, were incorporated into a spatial model of OFF bipolar cell output to the downstream ganglion cells, which predicted an increase in signal output strength with light adaptation. We show a prominent role for inner retinal spatial signals in modulating the modeled strength of bipolar cell output, which may in turn contribute to ganglion cell visual sensitivity and acuity. PMID:26912599
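    The center-surround receptive fields referred to above are conventionally described with a difference-of-Gaussians profile, and the reported light-adapted narrowing and weakening of inhibition can be illustrated by shrinking the surround term. The sketch below is a generic textbook formulation with invented parameter values, not the authors' fitted model.

    ```python
    import numpy as np

    def difference_of_gaussians(x, center_amp, center_sigma, surround_amp, surround_sigma):
        """Classic center-surround receptive field profile (generic sketch;
        amplitudes and widths here are illustrative, not fitted values)."""
        center = center_amp * np.exp(-x**2 / (2 * center_sigma**2))
        surround = surround_amp * np.exp(-x**2 / (2 * surround_sigma**2))
        return center - surround

    # Distance from the receptive-field center (arbitrary spatial units)
    x = np.linspace(-200, 200, 401)
    dark  = difference_of_gaussians(x, 1.0, 40, 0.30, 120)
    light = difference_of_gaussians(x, 1.0, 40, 0.15, 80)   # weaker, narrower inhibition

    # Integrated output increases when the inhibitory surround shrinks with light adaptation
    print(dark.sum(), light.sum())
    ```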

  16. Effect of Topology Structure on the Output Performance of an Automobile Exhaust Thermoelectric Generator

    NASA Astrophysics Data System (ADS)

    Fang, W.; Quan, S. H.; Xie, C. J.; Ran, B.; Li, X. L.; Wang, L.; Jiao, Y. T.; Xu, T. W.

    2017-05-01

    The majority of the thermal energy released in an automotive internal combustion cycle is exhausted as waste heat through the tail pipe. This paper describes an automobile exhaust thermoelectric generator (AETEG) designed to recycle automobile waste heat. A model of the output characteristics of each thermoelectric device was established by measuring its open-circuit voltage and internal resistance, and these individual characteristics were combined to describe the generator as a whole. To better describe the interconnections, the physical model was transformed into a topological model in which a connection matrix describes the relationship between any two thermoelectric devices. Different topological structures produce different power outputs; the output power was maximized by using an iterative algorithm to optimize the series-parallel electrical topology. The experimental results show that the output power of the optimal topology is 18.18% and 29.35% higher than that of a purely series or purely parallel topology, respectively, and 10.08% higher than that of a manually defined structure based on user experience. The thermoelectric conversion device increased energy efficiency by 40% compared with a traditional car.
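    Evaluating one candidate series-parallel topology amounts to combining the per-device open-circuit voltages and internal resistances into an equivalent source and computing the power delivered to a matched load. The sketch below illustrates that calculation for two example groupings; the device values are invented, and the paper's connection-matrix representation and iterative search are not reproduced here.

    ```python
    def group_equivalent(devices):
        """Series-combine devices within a branch: voltages and resistances add."""
        v = sum(v_oc for v_oc, _ in devices)
        r = sum(r_int for _, r_int in devices)
        return v, r

    def matched_load_power(branches):
        """Parallel-combine series branches and return the maximum power into a matched load."""
        g_total = sum(1.0 / r for _, r in branches)      # total conductance
        i_short = sum(v / r for v, r in branches)        # Norton short-circuit current
        v_eq = i_short / g_total
        r_eq = 1.0 / g_total
        return v_eq**2 / (4.0 * r_eq)                    # maximum power transfer

    # Four devices: (open-circuit voltage in V, internal resistance in ohms), illustrative values
    devices = [(2.0, 1.0), (1.8, 1.2), (2.1, 0.9), (1.6, 1.1)]
    all_series = [group_equivalent(devices)]
    two_by_two = [group_equivalent(devices[:2]), group_equivalent(devices[2:])]
    print(matched_load_power(all_series), matched_load_power(two_by_two))
    ```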

  17. Magnetic Field Satellite (Magsat) data processing system specifications

    NASA Technical Reports Server (NTRS)

    Berman, D.; Gomez, R.; Miller, A.

    1980-01-01

    The software specifications for the MAGSAT data processing system (MDPS) are presented. The MDPS is divided functionally into preprocessing of primary input data, data management, chronicle processing, and postprocessing. Data organization and validity, and checks of spacecraft and instrumentation, are discussed. Output products of the MDPS, including various plots and data tapes, are described, and formats for the important tapes are presented. Discussions and mathematical formulations for coordinate transformations and field model coefficients are included.
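    For context, the field model coefficients mentioned above are conventionally the Gauss coefficients of a spherical harmonic expansion of the internal geomagnetic potential. The standard form (quoted here as general background, not from the MDPS specification itself) is

    ```latex
    V(r,\theta,\phi) = a \sum_{n=1}^{N} \left(\frac{a}{r}\right)^{n+1}
      \sum_{m=0}^{n} \left[ g_n^m \cos(m\phi) + h_n^m \sin(m\phi) \right] P_n^m(\cos\theta),
    \qquad \mathbf{B} = -\nabla V,
    ```

    where a is the reference Earth radius, P_n^m are the Schmidt semi-normalized associated Legendre functions, and g_n^m, h_n^m are the field model coefficients.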

  18. Modeling of materials supply, demand and prices

    NASA Technical Reports Server (NTRS)

    1982-01-01

    The societal, economic, and policy tradeoffs associated with materials processing and utilization are discussed. The materials system provides the materials engineer with the system analysis required to formulate sound materials processing, utilization, and resource development policies and strategies. The materials system simulation and modeling research program, including assessments of materials substitution dynamics, public policy implications, and materials process economics, was expanded. This effort includes several collaborative programs with materials engineers, economists, and policy analysts. The technical and socioeconomic issues of materials recycling, input-output analysis, and technological change and productivity are examined. The major thrust areas in materials systems research are outlined.
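    The input-output analysis mentioned above typically refers to the Leontief model, in which total sectoral output x must cover both inter-industry use Ax and final demand d, so x = (I - A)^{-1} d. A minimal sketch with invented coefficients:

    ```python
    import numpy as np

    # Leontief input-output model (generic textbook form; coefficients are illustrative).
    A = np.array([[0.2, 0.3],     # technical coefficients: inputs per unit of output
                  [0.1, 0.4]])
    d = np.array([100.0, 50.0])   # final demand for each sector's output

    # Total output x satisfies x = A @ x + d, i.e. x = (I - A)^{-1} d
    x = np.linalg.solve(np.eye(2) - A, d)
    print(x)
    ```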

  19. Evaluation of Supply Chain Efficiency Based on a Novel Network of Data Envelopment Analysis Model

    NASA Astrophysics Data System (ADS)

    Fu, Li Fang; Meng, Jun; Liu, Ying

    2015-12-01

    Performance evaluation of a supply chain (SC) is a vital topic in SC management and an inherently complex problem involving multilayered internal linkages and the activities of multiple entities. Recently, various network data envelopment analysis (NDEA) models, which open the “black box” of conventional DEA, have been developed and applied to evaluate complex SCs with a multilayer network structure. However, most of them are input- or output-oriented models that cannot account for nonproportional changes of inputs and outputs simultaneously. This paper extends the slack-based measure (SBM) model to a nonradial, nonoriented network model, named U-NSBM, that handles undesirable outputs in the SC. A numerical example demonstrates the applicability of the model in quantifying efficiency and ranking supply chain performance. Comparison with the CCR and U-SBM models shows that the proposed model has higher discriminating ability and gives feasible solutions in the presence of undesirable outputs. It also provides more insight for decision makers into the sources of inefficiency and guidance for improving SC performance.
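    For reference, the slack-based measure that the U-NSBM model extends is, in its standard (nonradial, nonoriented) form,

    ```latex
    \rho^{*} = \min_{\lambda,\, s^{-},\, s^{+}}
      \frac{1 - \tfrac{1}{m}\sum_{i=1}^{m} s_i^{-}/x_{io}}
           {1 + \tfrac{1}{s}\sum_{r=1}^{s} s_r^{+}/y_{ro}}
    \quad \text{s.t.} \quad
    x_{o} = X\lambda + s^{-}, \;\;
    y_{o} = Y\lambda - s^{+}, \;\;
    \lambda \ge 0, \; s^{-} \ge 0, \; s^{+} \ge 0,
    ```

    where a decision-making unit is efficient when \rho^{*} = 1 (all slacks zero). The network structure and undesirable-output handling described in the paper modify the output term and link the stages; those extensions are not reproduced here.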

  20. Shot-by-shot Spectrum Model for Rod-pinch, Pulsed Radiography Machines

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wood, William Monford

    A simplified model of bremsstrahlung production is developed for determining the x-ray spectrum output of a rod-pinch radiography machine on a shot-by-shot basis, using the measured voltage, V(t), and current, I(t). The motivation for this model is the need for an agile means of providing shot-by-shot spectrum prediction, from a laptop or desktop computer, for quantitative radiographic analysis. Simplifying assumptions are discussed, and the model is applied to the Cygnus rod-pinch machine. Output is compared to wedge transmission data for a series of radiographs from shots with identical target objects. The resulting model allows parameters to be varied in real time, enabling rapid optimization across many shots. Goodness of fit is compared with output from the LSP particle-in-cell code and from MCNPX (Monte Carlo N-Particle eXtended), and the model is shown to provide an excellent predictive representation of the spectral output of the Cygnus machine. In conclusion, improvements to the model, specifically for application to other geometries, are discussed.
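    The shot-by-shot approach described above can be illustrated with a Kramers-like thick-target approximation, in which the instantaneous endpoint energy tracks V(t) and the intensity scales with I(t). The sketch below is a generic illustration of that idea with arbitrary normalization and invented waveforms; it is not the paper's actual model or the Cygnus data.

    ```python
    import numpy as np

    def shot_spectrum(time_s, voltage_MV, current_kA, energy_bins_MeV):
        """Integrate a Kramers-like bremsstrahlung kernel over measured V(t), I(t)
        waveforms to estimate a shot's photon spectrum (illustrative sketch only,
        arbitrary normalization)."""
        spectrum = np.zeros_like(energy_bins_MeV)
        dt = np.gradient(time_s)
        for w, v, i in zip(dt, voltage_MV, current_kA):
            mask = energy_bins_MeV < v              # photons only up to the endpoint energy
            # Kramers approximation: d2N/(dE dt) ~ I(t) * (V(t) - E) / E
            spectrum[mask] += w * i * (v - energy_bins_MeV[mask]) / energy_bins_MeV[mask]
        return spectrum

    # Invented waveforms: a 50 ns pulse peaking at 2.25 MV and 60 kA (not Cygnus data)
    t = np.linspace(0.0, 50e-9, 200)
    V = 2.25 * np.sin(np.pi * t / t[-1]) ** 2
    I = 60.0 * np.sin(np.pi * t / t[-1]) ** 2
    E = np.linspace(0.05, 2.5, 100)
    spec = shot_spectrum(t, V, I, E)
    ```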
