Sample records for model output variables

  1. A proposed Kalman filter algorithm for estimation of unmeasured output variables for an F100 turbofan engine

    NASA Technical Reports Server (NTRS)

    Alag, Gurbux S.; Gilyard, Glenn B.

    1990-01-01

    To develop advanced control systems for optimizing aircraft engine performance, unmeasurable output variables must be estimated. The estimation has to be done in an uncertain environment and be adaptable to varying degrees of modeling errors and other variations in engine behavior over its operational life cycle. This paper represented an approach to estimate unmeasured output variables by explicitly modeling the effects of off-nominal engine behavior as biases on the measurable output variables. A state variable model accommodating off-nominal behavior is developed for the engine, and Kalman filter concepts are used to estimate the required variables. Results are presented from nonlinear engine simulation studies as well as the application of the estimation algorithm on actual flight data. The formulation presented has a wide range of application since it is not restricted or tailored to the particular application described.

  2. Multi-level emulation of complex climate model responses to boundary forcing data

    NASA Astrophysics Data System (ADS)

    Tran, Giang T.; Oliver, Kevin I. C.; Holden, Philip B.; Edwards, Neil R.; Sóbester, András; Challenor, Peter

    2018-04-01

    Climate model components involve both high-dimensional input and output fields. It is desirable to efficiently generate spatio-temporal outputs of these models for applications in integrated assessment modelling or to assess the statistical relationship between such sets of inputs and outputs, for example, uncertainty analysis. However, the need for efficiency often compromises the fidelity of output through the use of low complexity models. Here, we develop a technique which combines statistical emulation with a dimensionality reduction technique to emulate a wide range of outputs from an atmospheric general circulation model, PLASIM, as functions of the boundary forcing prescribed by the ocean component of a lower complexity climate model, GENIE-1. Although accurate and detailed spatial information on atmospheric variables such as precipitation and wind speed is well beyond the capability of GENIE-1's energy-moisture balance model of the atmosphere, this study demonstrates that the output of this model is useful in predicting PLASIM's spatio-temporal fields through multi-level emulation. Meaningful information from the fast model, GENIE-1 was extracted by utilising the correlation between variables of the same type in the two models and between variables of different types in PLASIM. We present here the construction and validation of several PLASIM variable emulators and discuss their potential use in developing a hybrid model with statistical components.

  3. Alpha1 LASSO data bundles Lamont, OK

    DOE Data Explorer

    Gustafson, William Jr; Vogelmann, Andrew; Endo, Satoshi; Toto, Tami; Xiao, Heng; Li, Zhijin; Cheng, Xiaoping; Krishna, Bhargavi (ORCID:000000018828528X)

    2016-08-03

    A data bundle is a unified package consisting of LASSO LES input and output, observations, evaluation diagnostics, and model skill scores. LES input includes model configuration information and forcing data. LES output includes profile statistics and full domain fields of cloud and environmental variables. Model evaluation data consists of LES output and ARM observations co-registered on the same grid and sampling frequency. Model performance is quantified by skill scores and diagnostics in terms of cloud and environmental variables.

  4. Including long-range dependence in integrate-and-fire models of the high interspike-interval variability of cortical neurons.

    PubMed

    Jackson, B Scott

    2004-10-01

    Many different types of integrate-and-fire models have been designed in order to explain how it is possible for a cortical neuron to integrate over many independent inputs while still producing highly variable spike trains. Within this context, the variability of spike trains has been almost exclusively measured using the coefficient of variation of interspike intervals. However, another important statistical property that has been found in cortical spike trains and is closely associated with their high firing variability is long-range dependence. We investigate the conditions, if any, under which such models produce output spike trains with both interspike-interval variability and long-range dependence similar to those that have previously been measured from actual cortical neurons. We first show analytically that a large class of high-variability integrate-and-fire models is incapable of producing such outputs based on the fact that their output spike trains are always mathematically equivalent to renewal processes. This class of models subsumes a majority of previously published models, including those that use excitation-inhibition balance, correlated inputs, partial reset, or nonlinear leakage to produce outputs with high variability. Next, we study integrate-and-fire models that have (nonPoissonian) renewal point process inputs instead of the Poisson point process inputs used in the preceding class of models. The confluence of our analytical and simulation results implies that the renewal-input model is capable of producing high variability and long-range dependence comparable to that seen in spike trains recorded from cortical neurons, but only if the interspike intervals of the inputs have infinite variance, a physiologically unrealistic condition. Finally, we suggest a new integrate-and-fire model that does not suffer any of the previously mentioned shortcomings. By analyzing simulation results for this model, we show that it is capable of producing output spike trains with interspike-interval variability and long-range dependence that match empirical data from cortical spike trains. This model is similar to the other models in this study, except that its inputs are fractional-gaussian-noise-driven Poisson processes rather than renewal point processes. In addition to this model's success in producing realistic output spike trains, its inputs have long-range dependence similar to that found in most subcortical neurons in sensory pathways, including the inputs to cortex. Analysis of output spike trains from simulations of this model also shows that a tight balance between the amounts of excitation and inhibition at the inputs to cortical neurons is not necessary for high interspike-interval variability at their outputs. Furthermore, in our analysis of this model, we show that the superposition of many fractional-gaussian-noise-driven Poisson processes does not approximate a Poisson process, which challenges the common assumption that the total effect of a large number of inputs on a neuron is well represented by a Poisson process.

  5. Variable camber wing based on pneumatic artificial muscles

    NASA Astrophysics Data System (ADS)

    Yin, Weilong; Liu, Libo; Chen, Yijin; Leng, Jinsong

    2009-07-01

    As a novel bionic actuator, pneumatic artificial muscle has high power to weight ratio. In this paper, a variable camber wing with the pneumatic artificial muscle is developed. Firstly, the experimental setup to measure the static output force of pneumatic artificial muscle is designed. The relationship between the static output force and the air pressure is investigated. Experimental result shows the static output force of pneumatic artificial muscle decreases nonlinearly with increasing contraction ratio. Secondly, the finite element model of the variable camber wing is developed. Numerical results show that the tip displacement of the trailing-edge increases linearly with increasing external load and limited with the maximum static output force of pneumatic artificial muscles. Finally, the variable camber wing model is manufactured to validate the variable camber concept. Experimental result shows that the wing camber increases with increasing air pressure and that it compare very well with the FEM result.

  6. A Model for Optimizing the Combination of Solar Electricity Generation, Supply Curtailment, Transmission and Storage

    NASA Astrophysics Data System (ADS)

    Perez, Marc J. R.

    With extraordinary recent growth of the solar photovoltaic industry, it is paramount to address the biggest barrier to its high-penetration across global electrical grids: the inherent variability of the solar resource. This resource variability arises from largely unpredictable meteorological phenomena and from the predictable rotation of the earth around the sun and about its own axis. To achieve very high photovoltaic penetration, the imbalance between the variable supply of sunlight and demand must be alleviated. The research detailed herein consists of the development of a computational model which seeks to optimize the combination of 3 supply-side solutions to solar variability that minimizes the aggregate cost of electricity generated therefrom: Storage (where excess solar generation is stored when it exceeds demand for utilization when it does not meet demand), interconnection (where solar generation is spread across a large geographic area and electrically interconnected to smooth overall regional output) and smart curtailment (where solar capacity is oversized and excess generation is curtailed at key times to minimize the need for storage.). This model leverages a database created in the context of this doctoral work of satellite-derived photovoltaic output spanning 10 years at a daily interval for 64,000 unique geographic points across the globe. Underpinning the model's design and results, the database was used to further the understanding of solar resource variability at timescales greater than 1-day. It is shown that--as at shorter timescales--cloud/weather-induced solar variability decreases with geographic extent and that the geographic extent at which variability is mitigated increases with timescale and is modulated by the prevailing speed of clouds/weather systems. Unpredictable solar variability up to the timescale of 30 days is shown to be mitigated across a geographic extent of only 1500km if that geographic extent is oriented in a north/south bearing. Using technical and economic data reflecting today's real costs for solar generation technology, storage and electric transmission in combination with this model, we determined the minimum cost combination of these solutions to transform the variable output from solar plants into 3 distinct output profiles: A constant output equivalent to a baseload power plant, a well-defined seasonally-variable output with no weather-induced variability and a variable output but one that is 100% predictable on a multi-day ahead basis. In order to do this, over 14,000 model runs were performed by varying the desired output profile, the amount of energy curtailment, the penetration of solar energy and the geographic region across the continental United States. Despite the cost of supplementary electric transmission, geographic interconnection has the potential to reduce the levelized cost of electricity when meeting any of the studied output profiles by over 65% compared to when only storage is used. Energy curtailment, despite the cost of underutilizing solar energy capacity, has the potential to reduce the total cost of electricity when meeting any of the studied output profiles by over 75% compared to when only storage is used. The three variability mitigation strategies are thankfully not mutually exclusive. When combined at their ideal levels, each of the regions studied saw a reduction in cost of electricity of over 80% compared to when only energy storage is used to meet a specified output profile. 
When including current costs for solar generation, transmission and energy storage, an optimum configuration can conservatively provide guaranteed baseload power generation with solar across the entire continental United States (equivalent to a nuclear power plant with no down time) for less than 0.19 per kilowatt-hour. If solar is preferentially clustered in the southwest instead of evenly spread throughout the United States, and we adopt future expected costs for solar generation of 1 per watt, optimal model results show that meeting a 100% predictable output target with solar will cost no more than $0.08 per kilowatt-hour.

  7. A downscaling scheme for atmospheric variables to drive soil-vegetation-atmosphere transfer models

    NASA Astrophysics Data System (ADS)

    Schomburg, A.; Venema, V.; Lindau, R.; Ament, F.; Simmer, C.

    2010-09-01

    For driving soil-vegetation-transfer models or hydrological models, high-resolution atmospheric forcing data is needed. For most applications the resolution of atmospheric model output is too coarse. To avoid biases due to the non-linear processes, a downscaling system should predict the unresolved variability of the atmospheric forcing. For this purpose we derived a disaggregation system consisting of three steps: (1) a bi-quadratic spline-interpolation of the low-resolution data, (2) a so-called `deterministic' part, based on statistical rules between high-resolution surface variables and the desired atmospheric near-surface variables and (3) an autoregressive noise-generation step. The disaggregation system has been developed and tested based on high-resolution model output (400m horizontal grid spacing). A novel automatic search-algorithm has been developed for deriving the deterministic downscaling rules of step 2. When applied to the atmospheric variables of the lowest layer of the atmospheric COSMO-model, the disaggregation is able to adequately reconstruct the reference fields. Applying downscaling step 1 and 2, root mean square errors are decreased. Step 3 finally leads to a close match of the subgrid variability and temporal autocorrelation with the reference fields. The scheme can be applied to the output of atmospheric models, both for stand-alone offline simulations, and a fully coupled model system.

  8. Alpha 2 LASSO Data Bundles

    DOE Data Explorer

    Gustafson, William Jr; Vogelmann, Andrew; Endo, Satoshi; Toto, Tami; Xiao, Heng; Li, Zhijin; Cheng, Xiaoping; Kim, Jinwon; Krishna, Bhargavi

    2015-08-31

    The Alpha 2 release is the second release from the LASSO Pilot Phase that builds upon the Alpha 1 release. Alpha 2 contains additional diagnostics in the data bundles and focuses on cases from spring-summer 2016. A data bundle is a unified package consisting of LASSO LES input and output, observations, evaluation diagnostics, and model skill scores. LES input include model configuration information and forcing data. LES output includes profile statistics and full domain fields of cloud and environmental variables. Model evaluation data consists of LES output and ARM observations co-registered on the same grid and sampling frequency. Model performance is quantified by skill scores and diagnostics in terms of cloud and environmental variables.

  9. The extraction of simple relationships in growth factor-specific multiple-input and multiple-output systems in cell-fate decisions by backward elimination PLS regression.

    PubMed

    Akimoto, Yuki; Yugi, Katsuyuki; Uda, Shinsuke; Kudo, Takamasa; Komori, Yasunori; Kubota, Hiroyuki; Kuroda, Shinya

    2013-01-01

    Cells use common signaling molecules for the selective control of downstream gene expression and cell-fate decisions. The relationship between signaling molecules and downstream gene expression and cellular phenotypes is a multiple-input and multiple-output (MIMO) system and is difficult to understand due to its complexity. For example, it has been reported that, in PC12 cells, different types of growth factors activate MAP kinases (MAPKs) including ERK, JNK, and p38, and CREB, for selective protein expression of immediate early genes (IEGs) such as c-FOS, c-JUN, EGR1, JUNB, and FOSB, leading to cell differentiation, proliferation and cell death; however, how multiple-inputs such as MAPKs and CREB regulate multiple-outputs such as expression of the IEGs and cellular phenotypes remains unclear. To address this issue, we employed a statistical method called partial least squares (PLS) regression, which involves a reduction of the dimensionality of the inputs and outputs into latent variables and a linear regression between these latent variables. We measured 1,200 data points for MAPKs and CREB as the inputs and 1,900 data points for IEGs and cellular phenotypes as the outputs, and we constructed the PLS model from these data. The PLS model highlighted the complexity of the MIMO system and growth factor-specific input-output relationships of cell-fate decisions in PC12 cells. Furthermore, to reduce the complexity, we applied a backward elimination method to the PLS regression, in which 60 input variables were reduced to 5 variables, including the phosphorylation of ERK at 10 min, CREB at 5 min and 60 min, AKT at 5 min and JNK at 30 min. The simple PLS model with only 5 input variables demonstrated a predictive ability comparable to that of the full PLS model. The 5 input variables effectively extracted the growth factor-specific simple relationships within the MIMO system in cell-fate decisions in PC12 cells.

  10. A mathematical model for Vertical Attitude Takeoff and Landing (VATOL) aircraft simulation. Volume 2: Model equations and base aircraft data

    NASA Technical Reports Server (NTRS)

    Fortenbaugh, R. L.

    1980-01-01

    Equations incorporated in a VATOL six degree of freedom off-line digital simulation program and data for the Vought SF-121 VATOL aircraft concept which served as the baseline for the development of this program are presented. The equations and data are intended to facilitate the development of a piloted VATOL simulation. The equation presentation format is to state the equations which define a particular model segment. Listings of constants required to quantify the model segment, input variables required to exercise the model segment, and output variables required by other model segments are included. In several instances a series of input or output variables are followed by a section number in parentheses which identifies the model segment of origination or termination of those variables.

  11. Unitary Response Regression Models

    ERIC Educational Resources Information Center

    Lipovetsky, S.

    2007-01-01

    The dependent variable in a regular linear regression is a numerical variable, and in a logistic regression it is a binary or categorical variable. In these models the dependent variable has varying values. However, there are problems yielding an identity output of a constant value which can also be modelled in a linear or logistic regression with…

  12. Simulation of streamflows and basin-wide hydrologic variables over several climate-change scenarios, Methow River basin, Washington

    USGS Publications Warehouse

    Voss, Frank D.; Mastin, Mark C.

    2012-01-01

    A database was developed to automate model execution and to provide users with Internet access to voluminous data products ranging from summary figures to model output timeseries. Database-enabled Internet tools were developed to allow users to create interactive graphs of output results based on their analysis needs. For example, users were able to create graphs by selecting time intervals, greenhouse gas emission scenarios, general circulation models, and specific hydrologic variables.

  13. Enabling intelligent copernicus services for carbon and water balance modeling of boreal forest ecosystems - North State

    NASA Astrophysics Data System (ADS)

    Häme, Tuomas; Mutanen, Teemu; Rauste, Yrjö; Antropov, Oleg; Molinier, Matthieu; Quegan, Shaun; Kantzas, Euripides; Mäkelä, Annikki; Minunno, Francesco; Atli Benediktsson, Jon; Falco, Nicola; Arnason, Kolbeinn; Storvold, Rune; Haarpaintner, Jörg; Elsakov, Vladimir; Rasinmäki, Jussi

    2015-04-01

    The objective of project North State, funded by Framework Program 7 of the European Union, is to develop innovative data fusion methods that exploit the new generation of multi-source data from Sentinels and other satellites in an intelligent, self-learning framework. The remote sensing outputs are interfaced with state-of-the-art carbon and water flux models for monitoring the fluxes over boreal Europe to reduce current large uncertainties. This will provide a paradigm for the development of products for future Copernicus services. The models to be interfaced are a dynamic vegetation model and a light use efficiency model. We have identified four groups of variables that will be estimated with remote sensed data: land cover variables, forest characteristics, vegetation activity, and hydrological variables. The estimates will be used as model inputs and to validate the model outputs. The earth observation variables are computed as automatically as possible, with an objective to completely automatic estimation. North State has two sites for intensive studies in southern and northern Finland, respectively, one in Iceland and one in state Komi of Russia. Additionally, the model input variables will be estimated and models applied over European boreal and sub-arctic region from Ural Mountains to Iceland. The accuracy assessment of the earth observation variables will follow statistical sampling design. Model output predictions are compared to earth observation variables. Also flux tower measurements are applied in the model assessment. In the paper, results of hyperspectral, Sentinel-1, and Landsat data and their use in the models is presented. Also an example of a completely automatic land cover class prediction is reported.

  14. Assessment of reservoir system variable forecasts

    NASA Astrophysics Data System (ADS)

    Kistenmacher, Martin; Georgakakos, Aris P.

    2015-05-01

    Forecast ensembles are a convenient means to model water resources uncertainties and to inform planning and management processes. For multipurpose reservoir systems, forecast types include (i) forecasts of upcoming inflows and (ii) forecasts of system variables and outputs such as reservoir levels, releases, flood damage risks, hydropower production, water supply withdrawals, water quality conditions, navigation opportunities, and environmental flows, among others. Forecasts of system variables and outputs are conditional on forecasted inflows as well as on specific management policies and can provide useful information for decision-making processes. Unlike inflow forecasts (in ensemble or other forms), which have been the subject of many previous studies, reservoir system variable and output forecasts are not formally assessed in water resources management theory or practice. This article addresses this gap and develops methods to rectify potential reservoir system forecast inconsistencies and improve the quality of management-relevant information provided to stakeholders and managers. The overarching conclusion is that system variable and output forecast consistency is critical for robust reservoir management and needs to be routinely assessed for any management model used to inform planning and management processes. The above are demonstrated through an application from the Sacramento-American-San Joaquin reservoir system in northern California.

  15. Large scale air pollution estimation method combining land use regression and chemical transport modeling in a geostatistical framework.

    PubMed

    Akita, Yasuyuki; Baldasano, Jose M; Beelen, Rob; Cirach, Marta; de Hoogh, Kees; Hoek, Gerard; Nieuwenhuijsen, Mark; Serre, Marc L; de Nazelle, Audrey

    2014-04-15

    In recognition that intraurban exposure gradients may be as large as between-city variations, recent air pollution epidemiologic studies have become increasingly interested in capturing within-city exposure gradients. In addition, because of the rapidly accumulating health data, recent studies also need to handle large study populations distributed over large geographic domains. Even though several modeling approaches have been introduced, a consistent modeling framework capturing within-city exposure variability and applicable to large geographic domains is still missing. To address these needs, we proposed a modeling framework based on the Bayesian Maximum Entropy method that integrates monitoring data and outputs from existing air quality models based on Land Use Regression (LUR) and Chemical Transport Models (CTM). The framework was applied to estimate the yearly average NO2 concentrations over the region of Catalunya in Spain. By jointly accounting for the global scale variability in the concentration from the output of CTM and the intraurban scale variability through LUR model output, the proposed framework outperformed more conventional approaches.

  16. The effect of signal variability on the histograms of anthropomorphic channel outputs: factors resulting in non-normally distributed data

    NASA Astrophysics Data System (ADS)

    Elshahaby, Fatma E. A.; Ghaly, Michael; Jha, Abhinav K.; Frey, Eric C.

    2015-03-01

    Model Observers are widely used in medical imaging for the optimization and evaluation of instrumentation, acquisition parameters and image reconstruction and processing methods. The channelized Hotelling observer (CHO) is a commonly used model observer in nuclear medicine and has seen increasing use in other modalities. An anthropmorphic CHO consists of a set of channels that model some aspects of the human visual system and the Hotelling Observer, which is the optimal linear discriminant. The optimality of the CHO is based on the assumption that the channel outputs for data with and without the signal present have a multivariate normal distribution with equal class covariance matrices. The channel outputs result from the dot product of channel templates with input images and are thus the sum of a large number of random variables. The central limit theorem is thus often used to justify the assumption that the channel outputs are normally distributed. In this work, we aim to examine this assumption for realistically simulated nuclear medicine images when various types of signal variability are present.

  17. Mathematical models of the simplest fuzzy PI/PD controllers with skewed input and output fuzzy sets.

    PubMed

    Mohan, B M; Sinha, Arpita

    2008-07-01

    This paper unveils mathematical models for fuzzy PI/PD controllers which employ two skewed fuzzy sets for each of the two-input variables and three skewed fuzzy sets for the output variable. The basic constituents of these models are Gamma-type and L-type membership functions for each input, trapezoidal/triangular membership functions for output, intersection/algebraic product triangular norm, maximum/drastic sum triangular conorm, Mamdani minimum/Larsen product/drastic product inference method, and center of sums defuzzification method. The existing simplest fuzzy PI/PD controller structures derived via symmetrical fuzzy sets become special cases of the mathematical models revealed in this paper. Finally, a numerical example along with its simulation results are included to demonstrate the effectiveness of the simplest fuzzy PI controllers.

  18. Method and system to estimate variables in an integrated gasification combined cycle (IGCC) plant

    DOEpatents

    Kumar, Aditya; Shi, Ruijie; Dokucu, Mustafa

    2013-09-17

    System and method to estimate variables in an integrated gasification combined cycle (IGCC) plant are provided. The system includes a sensor suite to measure respective plant input and output variables. An extended Kalman filter (EKF) receives sensed plant input variables and includes a dynamic model to generate a plurality of plant state estimates and a covariance matrix for the state estimates. A preemptive-constraining processor is configured to preemptively constrain the state estimates and covariance matrix to be free of constraint violations. A measurement-correction processor may be configured to correct constrained state estimates and a constrained covariance matrix based on processing of sensed plant output variables. The measurement-correction processor is coupled to update the dynamic model with corrected state estimates and a corrected covariance matrix. The updated dynamic model may be configured to estimate values for at least one plant variable not originally sensed by the sensor suite.

  19. Hydrologic Implications of Dynamical and Statistical Approaches to Downscaling Climate Model Outputs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wood, Andrew W; Leung, Lai R; Sridhar, V

    Six approaches for downscaling climate model outputs for use in hydrologic simulation were evaluated, with particular emphasis on each method's ability to produce precipitation and other variables used to drive a macroscale hydrology model applied at much higher spatial resolution than the climate model. Comparisons were made on the basis of a twenty-year retrospective (1975–1995) climate simulation produced by the NCAR-DOE Parallel Climate Model (PCM), and the implications of the comparison for a future (2040–2060) PCM climate scenario were also explored. The six approaches were made up of three relatively simple statistical downscaling methods – linear interpolation (LI), spatial disaggregationmore » (SD), and bias-correction and spatial disaggregation (BCSD) – each applied to both PCM output directly (at T42 spatial resolution), and after dynamical downscaling via a Regional Climate Model (RCM – at ½-degree spatial resolution), for downscaling the climate model outputs to the 1/8-degree spatial resolution of the hydrological model. For the retrospective climate simulation, results were compared to an observed gridded climatology of temperature and precipitation, and gridded hydrologic variables resulting from forcing the hydrologic model with observations. The most significant findings are that the BCSD method was successful in reproducing the main features of the observed hydrometeorology from the retrospective climate simulation, when applied to both PCM and RCM outputs. Linear interpolation produced better results using RCM output than PCM output, but both methods (PCM-LI and RCM-LI) lead to unacceptably biased hydrologic simulations. Spatial disaggregation of the PCM output produced results similar to those achieved with the RCM interpolated output; nonetheless, neither PCM nor RCM output was useful for hydrologic simulation purposes without a bias-correction step. For the future climate scenario, only the BCSD-method (using PCM or RCM) was able to produce hydrologically plausible results. With the BCSD method, the RCM-derived hydrology was more sensitive to climate change than the PCM-derived hydrology.« less

  20. Predicting Time Series Outputs and Time-to-Failure for an Aircraft Controller Using Bayesian Modeling

    NASA Technical Reports Server (NTRS)

    He, Yuning

    2015-01-01

    Safety of unmanned aerial systems (UAS) is paramount, but the large number of dynamically changing controller parameters makes it hard to determine if the system is currently stable, and the time before loss of control if not. We propose a hierarchical statistical model using Treed Gaussian Processes to predict (i) whether a flight will be stable (success) or become unstable (failure), (ii) the time-to-failure if unstable, and (iii) time series outputs for flight variables. We first classify the current flight input into success or failure types, and then use separate models for each class to predict the time-to-failure and time series outputs. As different inputs may cause failures at different times, we have to model variable length output curves. We use a basis representation for curves and learn the mappings from input to basis coefficients. We demonstrate the effectiveness of our prediction methods on a NASA neuro-adaptive flight control system.

  1. Multi-model analysis of terrestrial carbon cycles in Japan: limitations and implications of model calibration using eddy flux observations

    NASA Astrophysics Data System (ADS)

    Ichii, K.; Suzuki, T.; Kato, T.; Ito, A.; Hajima, T.; Ueyama, M.; Sasai, T.; Hirata, R.; Saigusa, N.; Ohtani, Y.; Takagi, K.

    2010-07-01

    Terrestrial biosphere models show large differences when simulating carbon and water cycles, and reducing these differences is a priority for developing more accurate estimates of the condition of terrestrial ecosystems and future climate change. To reduce uncertainties and improve the understanding of their carbon budgets, we investigated the utility of the eddy flux datasets to improve model simulations and reduce variabilities among multi-model outputs of terrestrial biosphere models in Japan. Using 9 terrestrial biosphere models (Support Vector Machine - based regressions, TOPS, CASA, VISIT, Biome-BGC, DAYCENT, SEIB, LPJ, and TRIFFID), we conducted two simulations: (1) point simulations at four eddy flux sites in Japan and (2) spatial simulations for Japan with a default model (based on original settings) and a modified model (based on model parameter tuning using eddy flux data). Generally, models using default model settings showed large deviations in model outputs from observation with large model-by-model variability. However, after we calibrated the model parameters using eddy flux data (GPP, RE and NEP), most models successfully simulated seasonal variations in the carbon cycle, with less variability among models. We also found that interannual variations in the carbon cycle are mostly consistent among models and observations. Spatial analysis also showed a large reduction in the variability among model outputs. This study demonstrated that careful validation and calibration of models with available eddy flux data reduced model-by-model differences. Yet, site history, analysis of model structure changes, and more objective procedure of model calibration should be included in the further analysis.

  2. Application of variable-gain output feedback for high-alpha control

    NASA Technical Reports Server (NTRS)

    Ostroff, Aaron J.

    1990-01-01

    A variable-gain, optimal, discrete, output feedback design approach that is applied to a nonlinear flight regime is described. The flight regime covers a wide angle-of-attack range that includes stall and post stall. The paper includes brief descriptions of the variable-gain formulation, the discrete-control structure and flight equations used to apply the design approach, and the high performance airplane model used in the application. Both linear and nonlinear analysis are shown for a longitudinal four-model design case with angles of attack of 5, 15, 35, and 60 deg. Linear and nonlinear simulations are compared for a single-point longitudinal design at 60 deg angle of attack. Nonlinear simulations for the four-model, multi-mode, variable-gain design include a longitudinal pitch-up and pitch-down maneuver and high angle-of-attack regulation during a lateral maneuver.

  3. Evaluation of Data-Driven Models for Predicting Solar Photovoltaics Power Output

    DOE PAGES

    Moslehi, Salim; Reddy, T. Agami; Katipamula, Srinivas

    2017-09-10

    This research was undertaken to evaluate different inverse models for predicting power output of solar photovoltaic (PV) systems under different practical scenarios. In particular, we have investigated whether PV power output prediction accuracy can be improved if module/cell temperature was measured in addition to climatic variables, and also the extent to which prediction accuracy degrades if solar irradiation is not measured on the plane of array but only on a horizontal surface. We have also investigated the significance of different independent or regressor variables, such as wind velocity and incident angle modifier in predicting PV power output and cell temperature.more » The inverse regression model forms have been evaluated both in terms of their goodness-of-fit, and their accuracy and robustness in terms of their predictive performance. Given the accuracy of the measurements, expected CV-RMSE of hourly power output prediction over the year varies between 3.2% and 8.6% when only climatic data are used. Depending on what type of measured climatic and PV performance data is available, different scenarios have been identified and the corresponding appropriate modeling pathways have been proposed. The corresponding models are to be implemented on a controller platform for optimum operational planning of microgrids and integrated energy systems.« less

  4. Laser system refinements to reduce variability in infarct size in the rat photothrombotic stroke model

    PubMed Central

    Alaverdashvili, Mariam; Paterson, Phyllis G.; Bradley, Michael P.

    2015-01-01

    Background The rat photothrombotic stroke model can induce brain infarcts with reasonable biological variability. Nevertheless, we observed unexplained high inter-individual variability despite using a rigorous protocol. Of the three major determinants of infarct volume, photosensitive dye concentration and illumination period were strictly controlled, whereas undetected fluctuation in laser power output was suspected to account for the variability. New method The frequently utilized Diode Pumped Solid State (DPSS) lasers emitting 532 nm (green) light can exhibit fluctuations in output power due to temperature and input power alterations. The polarization properties of the Nd:YAG and Nd:YVO4 crystals commonly used in these lasers are another potential source of fluctuation, since one means of controlling output power uses a polarizer with a variable transmission axis. Thus, the properties of DPSS lasers and the relationship between power output and infarct size were explored. Results DPSS laser beam intensity showed considerable variation. Either a polarizer or a variable neutral density filter allowed adjustment of a polarized laser beam to the desired intensity. When the beam was unpolarized, the experimenter was restricted to using a variable neutral density filter. Comparison with existing method(s) Our refined approach includes continuous monitoring of DPSS laser intensity via beam sampling using a pellicle beamsplitter and photodiode sensor. This guarantees the desired beam intensity at the targeted brain area during stroke induction, with the intensity controlled either through a polarizer or variable neutral density filter. Conclusions Continuous monitoring and control of laser beam intensity is critical for ensuring consistent infarct size. PMID:25840363

  5. Probabilistic Physics-Based Risk Tools Used to Analyze the International Space Station Electrical Power System Output

    NASA Technical Reports Server (NTRS)

    Patel, Bhogila M.; Hoge, Peter A.; Nagpal, Vinod K.; Hojnicki, Jeffrey S.; Rusick, Jeffrey J.

    2004-01-01

    This paper describes the methods employed to apply probabilistic modeling techniques to the International Space Station (ISS) power system. These techniques were used to quantify the probabilistic variation in the power output, also called the response variable, due to variations (uncertainties) associated with knowledge of the influencing factors called the random variables. These uncertainties can be due to unknown environmental conditions, variation in the performance of electrical power system components or sensor tolerances. Uncertainties in these variables, cause corresponding variations in the power output, but the magnitude of that effect varies with the ISS operating conditions, e.g. whether or not the solar panels are actively tracking the sun. Therefore, it is important to quantify the influence of these uncertainties on the power output for optimizing the power available for experiments.

  6. Heart Performance Determination by Visualization in Larval Fishes: Influence of Alternative Models for Heart Shape and Volume

    PubMed Central

    Perrichon, Prescilla; Grosell, Martin; Burggren, Warren W.

    2017-01-01

    Understanding cardiac function in developing larval fishes is crucial for assessing their physiological condition and overall health. Cardiac output measurements in transparent fish larvae and other vertebrates have long been made by analyzing videos of the beating heart, and modeling this structure using a conventional simple prolate spheroid shape model. However, the larval fish heart changes shape during early development and subsequent maturation, but no consideration has been made of the effect of different heart geometries on cardiac output estimation. The present study assessed the validity of three different heart models (the “standard” prolate spheroid model as well as a cylinder and cone tip + cylinder model) applied to digital images of complete cardiac cycles in larval mahi-mahi and red drum. The inherent error of each model was determined to allow for more precise calculation of stroke volume and cardiac output. The conventional prolate spheroid and cone tip + cylinder models yielded significantly different stroke volume values at 56 hpf in red drum and from 56 to 104 hpf in mahi. End-diastolic and stroke volumes modeled by just a simple cylinder shape were 30–50% higher compared to the conventional prolate spheroid. However, when these values of stroke volume multiplied by heart rate to calculate cardiac output, no significant differences between models emerged because of considerable variability in heart rate. Essentially, the conventional prolate spheroid shape model provides the simplest measurement with lowest variability of stroke volume and cardiac output. However, assessment of heart function—especially if stroke volume is the focus of the study—should consider larval heart shape, with different models being applied on a species-by-species and developmental stage-by-stage basis for best estimation of cardiac output. PMID:28725199

  7. Fusion of Hard and Soft Information in Nonparametric Density Estimation

    DTIC Science & Technology

    2015-06-10

    and stochastic optimization models, in analysis of simulation output, and when instantiating probability models. We adopt a constrained maximum...particular, density estimation is needed for generation of input densities to simulation and stochastic optimization models, in analysis of simulation output...an essential step in simulation analysis and stochastic optimization is the generation of probability densities for input random variables; see for

  8. Distinguishing the Forest from the Trees: Synthesizing IHRMP Research

    Treesearch

    Gregory B. Greenwood

    1991-01-01

    A conceptual model of hardwood rangelands as multi-output resource system is developed and used to achieve a synthesis of Integrated Hardwood Range Management Program (IHRMP) research. The model requires the definition of state variables which characterize the system at any time, processes that move the system to different states, outputs...

  9. Spatiotemporal variability of snow depletion curves derived from SNODAS for the conterminous United States, 2004-2013

    USGS Publications Warehouse

    Driscoll, Jessica; Hay, Lauren E.; Bock, Andrew R.

    2017-01-01

    Assessment of water resources at a national scale is critical for understanding their vulnerability to future change in policy and climate. Representation of the spatiotemporal variability in snowmelt processes in continental-scale hydrologic models is critical for assessment of water resource response to continued climate change. Continental-extent hydrologic models such as the U.S. Geological Survey National Hydrologic Model (NHM) represent snowmelt processes through the application of snow depletion curves (SDCs). SDCs relate normalized snow water equivalent (SWE) to normalized snow covered area (SCA) over a snowmelt season for a given modeling unit. SDCs were derived using output from the operational Snow Data Assimilation System (SNODAS) snow model as daily 1-km gridded SWE over the conterminous United States. Daily SNODAS output were aggregated to a predefined watershed-scale geospatial fabric and used to also calculate SCA from October 1, 2004 to September 30, 2013. The spatiotemporal variability in SNODAS output at the watershed scale was evaluated through the spatial distribution of the median and standard deviation for the time period. Representative SDCs for each watershed-scale modeling unit over the conterminous United States (n = 54,104) were selected using a consistent methodology and used to create categories of snowmelt based on SDC shape. The relation of SDC categories to the topographic and climatic variables allow for national-scale categorization of snowmelt processes.

  10. Study of hydrological extremes - floods and droughts in global river basins using satellite data and model output

    NASA Astrophysics Data System (ADS)

    Lakshmi, V.; Fayne, J.; Bolten, J. D.

    2016-12-01

    We will use satellite data from TRMM (Tropical Rainfall Measurement Mission), AMSR (Advanced Microwave Scanning Radiometer), GRACE (Gravity Recovery and Climate Experiment) and MODIS (Moderate Resolution Spectroradiometer) and model output from NASA GLDAS (Global Land Data Assimilation System) to understand the linkages between hydrological variables. These hydrological variables include precipitation soil moisture vegetation index surface temperature ET and total water. We will present results for major river basins such as Amazon, Colorado, Mississippi, California, Danube, Nile, Congo, Yangtze Mekong, Murray-Darling and Ganga-Brahmaputra.The major floods and droughts in these watersheds will be mapped in time and space using the satellite data and model outputs mentioned above. We will analyze the various hydrological variables and conduct a synergistic study during times of flood and droughts. In order to compare hydrological variables between river basins with vastly different climate and land use we construct an index that is scaled by the climatology. This allows us to compare across different climate, topography, soils and land use regimes. The analysis shows that the hydrological variables derived from satellite data and NASA models clearly reflect the hydrological extremes. This is especially true when data from different sensors are analyzed together - for example rainfall data from TRMM and total water data from GRACE. Such analyses will help to construct prediction tools for water resources applications.

  11. A new open-loop fiber optic gyro error compensation method based on angular velocity error modeling.

    PubMed

    Zhang, Yanshun; Guo, Yajing; Li, Chunyu; Wang, Yixin; Wang, Zhanqing

    2015-02-27

    With the open-loop fiber optic gyro (OFOG) model, output voltage and angular velocity can effectively compensate OFOG errors. However, the model cannot reflect the characteristics of OFOG errors well when it comes to pretty large dynamic angular velocities. This paper puts forward a modeling scheme with OFOG output voltage u and temperature T as the input variables and angular velocity error Δω as the output variable. Firstly, the angular velocity error Δω is extracted from OFOG output signals, and then the output voltage u, temperature T and angular velocity error Δω are used as the learning samples to train a Radial-Basis-Function (RBF) neural network model. Then the nonlinear mapping model over T, u and Δω is established and thus Δω can be calculated automatically to compensate OFOG errors according to T and u. The results of the experiments show that the established model can be used to compensate the nonlinear OFOG errors. The maximum, the minimum and the mean square error of OFOG angular velocity are decreased by 97.0%, 97.1% and 96.5% relative to their initial values, respectively. Compared with the direct modeling of gyro angular velocity, which we researched before, the experimental results of the compensating method proposed in this paper are further reduced by 1.6%, 1.4% and 1.42%, respectively, so the performance of this method is better than that of the direct modeling for gyro angular velocity.

  12. A New Open-Loop Fiber Optic Gyro Error Compensation Method Based on Angular Velocity Error Modeling

    PubMed Central

    Zhang, Yanshun; Guo, Yajing; Li, Chunyu; Wang, Yixin; Wang, Zhanqing

    2015-01-01

    With the open-loop fiber optic gyro (OFOG) model, output voltage and angular velocity can effectively compensate OFOG errors. However, the model cannot reflect the characteristics of OFOG errors well when it comes to pretty large dynamic angular velocities. This paper puts forward a modeling scheme with OFOG output voltage u and temperature T as the input variables and angular velocity error Δω as the output variable. Firstly, the angular velocity error Δω is extracted from OFOG output signals, and then the output voltage u, temperature T and angular velocity error Δω are used as the learning samples to train a Radial-Basis-Function (RBF) neural network model. Then the nonlinear mapping model over T, u and Δω is established and thus Δω can be calculated automatically to compensate OFOG errors according to T and u. The results of the experiments show that the established model can be used to compensate the nonlinear OFOG errors. The maximum, the minimum and the mean square error of OFOG angular velocity are decreased by 97.0%, 97.1% and 96.5% relative to their initial values, respectively. Compared with the direct modeling of gyro angular velocity, which we researched before, the experimental results of the compensating method proposed in this paper are further reduced by 1.6%, 1.4% and 1.2%, respectively, so the performance of this method is better than that of the direct modeling for gyro angular velocity. PMID:25734642

  13. Sub-daily Statistical Downscaling of Meteorological Variables Using Neural Networks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kumar, Jitendra; Brooks, Bjørn-Gustaf J.; Thornton, Peter E

    2012-01-01

    A new open source neural network temporal downscaling model is described and tested using CRU-NCEP reanal ysis and CCSM3 climate model output. We downscaled multiple meteorological variables in tandem from monthly to sub-daily time steps while also retaining consistent correlations between variables. We found that our feed forward, error backpropagation approach produced synthetic 6 hourly meteorology with biases no greater than 0.6% across all variables and variance that was accurate within 1% for all variables except atmospheric pressure, wind speed, and precipitation. Correlations between downscaled output and the expected (original) monthly means exceeded 0.99 for all variables, which indicates thatmore » this approach would work well for generating atmospheric forcing data consistent with mass and energy conserved GCM output. Our neural network approach performed well for variables that had correlations to other variables of about 0.3 and better and its skill was increased by downscaling multiple correlated variables together. Poor replication of precipitation intensity however required further post-processing in order to obtain the expected probability distribution. The concurrence of precipitation events with expected changes in sub ordinate variables (e.g., less incident shortwave radiation during precipitation events) were nearly as consistent in the downscaled data as in the training data with probabilities that differed by no more than 6%. Our downscaling approach requires training data at the target time step and relies on a weak assumption that climate variability in the extrapolated data is similar to variability in the training data.« less

  14. Hydrological responses to dynamically and statistically downscaled climate model output

    USGS Publications Warehouse

    Wilby, R.L.; Hay, L.E.; Gutowski, W.J.; Arritt, R.W.; Takle, E.S.; Pan, Z.; Leavesley, G.H.; Clark, M.P.

    2000-01-01

    Daily rainfall and surface temperature series were simulated for the Animas River basin, Colorado using dynamically and statistically downscaled output from the National Center for Environmental Prediction/National Center for Atmospheric Research (NCEP/NCAR) re-analysis. A distributed hydrological model was then applied to the downscaled data. Relative to raw NCEP output, downscaled climate variables provided more realistic stimulations of basin scale hydrology. However, the results highlight the sensitivity of modeled processes to the choice of downscaling technique, and point to the need for caution when interpreting future hydrological scenarios.

  15. A Method for Evaluation of Model-Generated Vertical Profiles of Meteorological Variables

    DTIC Science & Technology

    2016-03-01

    3 2.1 RAOB Soundings and WRF Output for Profile Generation 3 2.2 Height-Based Profiles 5 2.3 Pressure-Based Profiles 5 3. Comparisons 8 4...downward arrow. The blue lines represent sublayers with sublayer means indicated by red triangles. Circles indicate the observations or WRF output...9 Table 3 Sample of differences in listed variables derived from WRF and RAOB data

  16. Impacts of correcting the inter-variable correlation of climate model outputs on hydrological modeling

    NASA Astrophysics Data System (ADS)

    Chen, Jie; Li, Chao; Brissette, François P.; Chen, Hua; Wang, Mingna; Essou, Gilles R. C.

    2018-05-01

    Bias correction is usually implemented prior to using climate model outputs for impact studies. However, bias correction methods that are commonly used treat climate variables independently and often ignore inter-variable dependencies. The effects of ignoring such dependencies on impact studies need to be investigated. This study aims to assess the impacts of correcting the inter-variable correlation of climate model outputs on hydrological modeling. To this end, a joint bias correction (JBC) method which corrects the joint distribution of two variables as a whole is compared with an independent bias correction (IBC) method; this is considered in terms of correcting simulations of precipitation and temperature from 26 climate models for hydrological modeling over 12 watersheds located in various climate regimes. The results show that the simulated precipitation and temperature are considerably biased not only in the individual distributions, but also in their correlations, which in turn result in biased hydrological simulations. In addition to reducing the biases of the individual characteristics of precipitation and temperature, the JBC method can also reduce the bias in precipitation-temperature (P-T) correlations. In terms of hydrological modeling, the JBC method performs significantly better than the IBC method for 11 out of the 12 watersheds over the calibration period. For the validation period, the advantages of the JBC method are greatly reduced as the performance becomes dependent on the watershed, GCM and hydrological metric considered. For arid/tropical and snowfall-rainfall-mixed watersheds, JBC performs better than IBC. For snowfall- or rainfall-dominated watersheds, however, the two methods behave similarly, with IBC performing somewhat better than JBC. Overall, the results emphasize the advantages of correcting the P-T correlation when using climate model-simulated precipitation and temperature to assess the impact of climate change on watershed hydrology. However, a thorough validation and a comparison with other methods are recommended before using the JBC method, since it may perform worse than the IBC method for some cases due to bias nonstationarity of climate model outputs.

  17. Time-response shaping using output to input saturation transformation

    NASA Astrophysics Data System (ADS)

    Chambon, E.; Burlion, L.; Apkarian, P.

    2018-03-01

    For linear systems, the control law design is often performed so that the resulting closed loop meets specific frequency-domain requirements. However, in many cases, it may be observed that the obtained controller does not enforce time-domain requirements amongst which the objective of keeping a scalar output variable in a given interval. In this article, a transformation is proposed to convert prescribed bounds on an output variable into time-varying saturations on the synthesised linear scalar control law. This transformation uses some well-chosen time-varying coefficients so that the resulting time-varying saturation bounds do not overlap in the presence of disturbances. Using an anti-windup approach, it is obtained that the origin of the resulting closed loop is globally asymptotically stable and that the constrained output variable satisfies the time-domain constraints in the presence of an unknown finite-energy-bounded disturbance. An application to a linear ball and beam model is presented.

  18. Simscape Modeling of a Custom Closed-Volume Tank

    NASA Technical Reports Server (NTRS)

    Fischer, Nathaniel P.

    2015-01-01

    The library for Mathworks Simscape does not currently contain a model for a closed volume fluid tank where the ullage pressure is variable. In order to model a closed-volume variable ullage pressure tank, it was necessary to consider at least two separate cases: a vertical cylinder, and a sphere. Using library components, it was possible to construct a rough model for the cylindrical tank. It was not possible to construct a model for a spherical tank, using library components, due to the variable area. It was decided that, for these cases, it would be preferable to create a custom library component to represent each case, using the Simscape language. Once completed, the components were added to models, where filling and draining the tanks could be simulated. When the models were performing as expected, it was necessary to generate code from the models and run them in Trick (a real-time simulation program). The data output from Trick was then compared to the output from Simscape and found to be within acceptable limits.

  19. The potential of different artificial neural network (ANN) techniques in daily global solar radiation modeling based on meteorological data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Behrang, M.A.; Assareh, E.; Ghanbarzadeh, A.

    2010-08-15

    The main objective of present study is to predict daily global solar radiation (GSR) on a horizontal surface, based on meteorological variables, using different artificial neural network (ANN) techniques. Daily mean air temperature, relative humidity, sunshine hours, evaporation, and wind speed values between 2002 and 2006 for Dezful city in Iran (32 16'N, 48 25'E), are used in this study. In order to consider the effect of each meteorological variable on daily GSR prediction, six following combinations of input variables are considered: (I)Day of the year, daily mean air temperature and relative humidity as inputs and daily GSR as output.more » (II)Day of the year, daily mean air temperature and sunshine hours as inputs and daily GSR as output. (III)Day of the year, daily mean air temperature, relative humidity and sunshine hours as inputs and daily GSR as output. (IV)Day of the year, daily mean air temperature, relative humidity, sunshine hours and evaporation as inputs and daily GSR as output. (V)Day of the year, daily mean air temperature, relative humidity, sunshine hours and wind speed as inputs and daily GSR as output. (VI)Day of the year, daily mean air temperature, relative humidity, sunshine hours, evaporation and wind speed as inputs and daily GSR as output. Multi-layer perceptron (MLP) and radial basis function (RBF) neural networks are applied for daily GSR modeling based on six proposed combinations. The measured data between 2002 and 2005 are used to train the neural networks while the data for 214 days from 2006 are used as testing data. The comparison of obtained results from ANNs and different conventional GSR prediction (CGSRP) models shows very good improvements (i.e. the predicted values of best ANN model (MLP-V) has a mean absolute percentage error (MAPE) about 5.21% versus 10.02% for best CGSRP model (CGSRP 5)). (author)« less

  20. Community models for wildlife impact assessment: a review of concepts and approaches

    USGS Publications Warehouse

    Schroeder, Richard L.

    1987-01-01

    The first two sections of this paper are concerned with defining and bounding communities, and describing those attributes of the community that are quantifiable and suitable for wildlife impact assessment purposes. Prior to the development or use of a community model, it is important to have a clear understanding of the concept of a community and a knowledge of the types of community attributes that can serve as outputs for the development of models. Clearly defined, unambiguous model outputs are essential for three reasons: (1) to ensure that the measured community attributes relate to the wildlife resource objectives of the study; (2) to allow testing of the outputs in experimental studies, to determine accuracy, and to allow for improvements based on such testing; and (3) to enable others to clearly understand the community attribute that has been measured. The third section of this paper described input variables that may be used to predict various community attributes. These input variables do not include direct measures of wildlife populations. Most impact assessments involve projects that result in drastic changes in habitat, such as changes in land use, vegetation, or available area. Therefore, the model input variables described in this section deal primarily with habitat related features. Several existing community models are described in the fourth section of this paper. A general description of each model is provided, including the nature of the input variables and the model output. The logic and assumptions of each model are discussed, along with data requirements needed to use the model. The fifth section provides guidance on the selection and development of community models. Identification of the community attribute that is of concern will determine the type of model most suitable for a particular application. This section provides guidelines on selected an existing model, as well as a discussion of the major steps to be followed in modifying an existing model or developing a new model. Considerations associated with the use of community models with the Habitat Evaluation Procedures are also discussed. The final section of the paper summarizes major findings of interest to field biologists and provides recommendations concerning the implementation of selected concepts in wildlife community analyses.

  1. Comparison of climate envelope models developed using expert-selected variables versus statistical selection

    USGS Publications Warehouse

    Brandt, Laura A.; Benscoter, Allison; Harvey, Rebecca G.; Speroterra, Carolina; Bucklin, David N.; Romañach, Stephanie; Watling, James I.; Mazzotti, Frank J.

    2017-01-01

    Climate envelope models are widely used to describe potential future distribution of species under different climate change scenarios. It is broadly recognized that there are both strengths and limitations to using climate envelope models and that outcomes are sensitive to initial assumptions, inputs, and modeling methods Selection of predictor variables, a central step in modeling, is one of the areas where different techniques can yield varying results. Selection of climate variables to use as predictors is often done using statistical approaches that develop correlations between occurrences and climate data. These approaches have received criticism in that they rely on the statistical properties of the data rather than directly incorporating biological information about species responses to temperature and precipitation. We evaluated and compared models and prediction maps for 15 threatened or endangered species in Florida based on two variable selection techniques: expert opinion and a statistical method. We compared model performance between these two approaches for contemporary predictions, and the spatial correlation, spatial overlap and area predicted for contemporary and future climate predictions. In general, experts identified more variables as being important than the statistical method and there was low overlap in the variable sets (<40%) between the two methods Despite these differences in variable sets (expert versus statistical), models had high performance metrics (>0.9 for area under the curve (AUC) and >0.7 for true skill statistic (TSS). Spatial overlap, which compares the spatial configuration between maps constructed using the different variable selection techniques, was only moderate overall (about 60%), with a great deal of variability across species. Difference in spatial overlap was even greater under future climate projections, indicating additional divergence of model outputs from different variable selection techniques. Our work is in agreement with other studies which have found that for broad-scale species distribution modeling, using statistical methods of variable selection is a useful first step, especially when there is a need to model a large number of species or expert knowledge of the species is limited. Expert input can then be used to refine models that seem unrealistic or for species that experts believe are particularly sensitive to change. It also emphasizes the importance of using multiple models to reduce uncertainty and improve map outputs for conservation planning. Where outputs overlap or show the same direction of change there is greater certainty in the predictions. Areas of disagreement can be used for learning by asking why the models do not agree, and may highlight areas where additional on-the-ground data collection could improve the models.

  2. Dynamics of the Antarctic Circumpolar Current. Evidence for Topographic Effects from Altimeter Data and Numerical Model Output

    NASA Technical Reports Server (NTRS)

    Gille, Sarah T.

    1995-01-01

    Geosat altimeter data and numerical model output are used to examine the circulation and dynamics of the Antarctic Circumpolar Current (ACC). The mean sea surface height across the ACC has been reconstructed from height variability measured by the altimeter, without assuming prior knowledge of the geoid. The results indicate locations for the Subantarctic and Polar Fronts which are consistent with in situ observations and indicate that the fronts are substantially steered by bathymetry. Detailed examination of spatial and temporal variability indicates a spatial decorrelation scale of 85 km and a temporal e-folding scale of 34 days. Empirical Orthogonal Function analysis suggests that the scales of motion are relatively short, occuring on 1000 km length-scales rather than basin or global scales. The momentum balance of the ACC has been investigated using output from the high resolution primitive equation model in combination with altimeter data. In the Semtner-Chervin quarter-degree general circulation model topographic form stress is the dominant process balancing the surface wind forcing. In stream coordinates, the dominant effect transporting momentum across the ACC is bibarmonic friction. Potential vorticity is considered on Montgomery streamlines in the model output and along surface streamlines in model and altimeter data. (AN)

  3. Random Predictor Models for Rigorous Uncertainty Quantification: Part 2

    NASA Technical Reports Server (NTRS)

    Crespo, Luis G.; Kenny, Sean P.; Giesy, Daniel P.

    2015-01-01

    This and a companion paper propose techniques for constructing parametric mathematical models describing key features of the distribution of an output variable given input-output data. By contrast to standard models, which yield a single output value at each value of the input, Random Predictors Models (RPMs) yield a random variable at each value of the input. Optimization-based strategies for calculating RPMs having a polynomial dependency on the input and a linear dependency on the parameters are proposed. These formulations yield RPMs having various levels of fidelity in which the mean, the variance, and the range of the model's parameter, thus of the output, are prescribed. As such they encompass all RPMs conforming to these prescriptions. The RPMs are optimal in the sense that they yield the tightest predictions for which all (or, depending on the formulation, most) of the observations are less than a fixed number of standard deviations from the mean prediction. When the data satisfies mild stochastic assumptions, and the optimization problem(s) used to calculate the RPM is convex (or, when its solution coincides with the solution to an auxiliary convex problem), the model's reliability, which is the probability that a future observation would be within the predicted ranges, is bounded rigorously.

  4. Random Predictor Models for Rigorous Uncertainty Quantification: Part 1

    NASA Technical Reports Server (NTRS)

    Crespo, Luis G.; Kenny, Sean P.; Giesy, Daniel P.

    2015-01-01

    This and a companion paper propose techniques for constructing parametric mathematical models describing key features of the distribution of an output variable given input-output data. By contrast to standard models, which yield a single output value at each value of the input, Random Predictors Models (RPMs) yield a random variable at each value of the input. Optimization-based strategies for calculating RPMs having a polynomial dependency on the input and a linear dependency on the parameters are proposed. These formulations yield RPMs having various levels of fidelity in which the mean and the variance of the model's parameters, thus of the predicted output, are prescribed. As such they encompass all RPMs conforming to these prescriptions. The RPMs are optimal in the sense that they yield the tightest predictions for which all (or, depending on the formulation, most) of the observations are less than a fixed number of standard deviations from the mean prediction. When the data satisfies mild stochastic assumptions, and the optimization problem(s) used to calculate the RPM is convex (or, when its solution coincides with the solution to an auxiliary convex problem), the model's reliability, which is the probability that a future observation would be within the predicted ranges, can be bounded tightly and rigorously.

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moslehi, Salim; Reddy, T. Agami; Katipamula, Srinivas

    This research was undertaken to evaluate different inverse models for predicting power output of solar photovoltaic (PV) systems under different practical scenarios. In particular, we have investigated whether PV power output prediction accuracy can be improved if module/cell temperature was measured in addition to climatic variables, and also the extent to which prediction accuracy degrades if solar irradiation is not measured on the plane of array but only on a horizontal surface. We have also investigated the significance of different independent or regressor variables, such as wind velocity and incident angle modifier in predicting PV power output and cell temperature.more » The inverse regression model forms have been evaluated both in terms of their goodness-of-fit, and their accuracy and robustness in terms of their predictive performance. Given the accuracy of the measurements, expected CV-RMSE of hourly power output prediction over the year varies between 3.2% and 8.6% when only climatic data are used. Depending on what type of measured climatic and PV performance data is available, different scenarios have been identified and the corresponding appropriate modeling pathways have been proposed. The corresponding models are to be implemented on a controller platform for optimum operational planning of microgrids and integrated energy systems.« less

  6. The input and output management of solid waste using DEA models: A case study at Jengka, Pahang

    NASA Astrophysics Data System (ADS)

    Mohamed, Siti Rosiah; Ghazali, Nur Fadzrina Mohd; Mohd, Ainun Hafizah

    2017-08-01

    Data Envelopment Analysis (DEA) as a tool for obtaining performance indices has been used extensively in several of organizations sector. The ways to improve the efficiency of Decision Making Units (DMUs) is impractical because some of inputs and outputs are uncontrollable and in certain situation its produce weak efficiency which often reflect the impact for operating environment. Based on the data from Alam Flora Sdn. Bhd Jengka, the researcher wants to determine the efficiency of solid waste management (SWM) in town Jengka Pahang using CCRI and CCRO model of DEA and duality formulation with vector average input and output. Three input variables (length collection in meter, frequency time per week in hour and number of garbage truck) and 2 outputs variables (frequency collection and the total solid waste collection in kilogram) are analyzed. As a conclusion, it shows only three roads from 23 roads are efficient that achieve efficiency score 1. Meanwhile, 20 other roads are in an inefficient management.

  7. Climate Model Diagnostic Analyzer Web Service System

    NASA Astrophysics Data System (ADS)

    Lee, S.; Pan, L.; Zhai, C.; Tang, B.; Jiang, J. H.

    2014-12-01

    We have developed a cloud-enabled web-service system that empowers physics-based, multi-variable model performance evaluations and diagnoses through the comprehensive and synergistic use of multiple observational data, reanalysis data, and model outputs. We have developed a methodology to transform an existing science application code into a web service using a Python wrapper interface and Python web service frameworks. The web-service system, called Climate Model Diagnostic Analyzer (CMDA), currently supports (1) all the observational datasets from Obs4MIPs and a few ocean datasets from NOAA and Argo, which can serve as observation-based reference data for model evaluation, (2) many of CMIP5 model outputs covering a broad range of atmosphere, ocean, and land variables from the CMIP5 specific historical runs and AMIP runs, and (3) ECMWF reanalysis outputs for several environmental variables in order to supplement observational datasets. Analysis capabilities currently supported by CMDA are (1) the calculation of annual and seasonal means of physical variables, (2) the calculation of time evolution of the means in any specified geographical region, (3) the calculation of correlation between two variables, (4) the calculation of difference between two variables, and (5) the conditional sampling of one physical variable with respect to another variable. A web user interface is chosen for CMDA because it not only lowers the learning curve and removes the adoption barrier of the tool but also enables instantaneous use, avoiding the hassle of local software installation and environment incompatibility. CMDA will be used as an educational tool for the summer school organized by JPL's Center for Climate Science in 2014. In order to support 30+ simultaneous users during the school, we have deployed CMDA to the Amazon cloud environment. The cloud-enabled CMDA will provide each student with a virtual machine while the user interaction with the system will remain the same through web-browser interfaces. The summer school will serve as a valuable testbed for the tool development, preparing CMDA to serve its target community: Earth-science modeling and model-analysis community.

  8. Simulation Framework for Teaching in Modeling and Simulation Areas

    ERIC Educational Resources Information Center

    De Giusti, Marisa Raquel; Lira, Ariel Jorge; Villarreal, Gonzalo Lujan

    2008-01-01

    Simulation is the process of executing a model that describes a system with enough detail; this model has its entities, an internal state, some input and output variables and a list of processes bound to these variables. Teaching a simulation language such as general purpose simulation system (GPSS) is always a challenge, because of the way it…

  9. The Schaake shuffle: A method for reconstructing space-time variability in forecasted precipitation and temperature fields

    USGS Publications Warehouse

    Clark, M.R.; Gangopadhyay, S.; Hay, L.; Rajagopalan, B.; Wilby, R.

    2004-01-01

    A number of statistical methods that are used to provide local-scale ensemble forecasts of precipitation and temperature do not contain realistic spatial covariability between neighboring stations or realistic temporal persistence for subsequent forecast lead times. To demonstrate this point, output from a global-scale numerical weather prediction model is used in a stepwise multiple linear regression approach to downscale precipitation and temperature to individual stations located in and around four study basins in the United States. Output from the forecast model is downscaled for lead times up to 14 days. Residuals in the regression equation are modeled stochastically to provide 100 ensemble forecasts. The precipitation and temperature ensembles from this approach have a poor representation of the spatial variability and temporal persistence. The spatial correlations for downscaled output are considerably lower than observed spatial correlations at short forecast lead times (e.g., less than 5 days) when there is high accuracy in the forecasts. At longer forecast lead times, the downscaled spatial correlations are close to zero. Similarly, the observed temporal persistence is only partly present at short forecast lead times. A method is presented for reordering the ensemble output in order to recover the space-time variability in precipitation and temperature fields. In this approach, the ensemble members for a given forecast day are ranked and matched with the rank of precipitation and temperature data from days randomly selected from similar dates in the historical record. The ensembles are then reordered to correspond to the original order of the selection of historical data. Using this approach, the observed intersite correlations, intervariable correlations, and the observed temporal persistence are almost entirely recovered. This reordering methodology also has applications for recovering the space-time variability in modeled streamflow. ?? 2004 American Meteorological Society.

  10. An effective drift correction for dynamical downscaling of decadal global climate predictions

    NASA Astrophysics Data System (ADS)

    Paeth, Heiko; Li, Jingmin; Pollinger, Felix; Müller, Wolfgang A.; Pohlmann, Holger; Feldmann, Hendrik; Panitz, Hans-Jürgen

    2018-04-01

    Initialized decadal climate predictions with coupled climate models are often marked by substantial climate drifts that emanate from a mismatch between the climatology of the coupled model system and the data set used for initialization. While such drifts may be easily removed from the prediction system when analyzing individual variables, a major problem prevails for multivariate issues and, especially, when the output of the global prediction system shall be used for dynamical downscaling. In this study, we present a statistical approach to remove climate drifts in a multivariate context and demonstrate the effect of this drift correction on regional climate model simulations over the Euro-Atlantic sector. The statistical approach is based on an empirical orthogonal function (EOF) analysis adapted to a very large data matrix. The climate drift emerges as a dramatic cooling trend in North Atlantic sea surface temperatures (SSTs) and is captured by the leading EOF of the multivariate output from the global prediction system, accounting for 7.7% of total variability. The SST cooling pattern also imposes drifts in various atmospheric variables and levels. The removal of the first EOF effectuates the drift correction while retaining other components of intra-annual, inter-annual and decadal variability. In the regional climate model, the multivariate drift correction of the input data removes the cooling trends in most western European land regions and systematically reduces the discrepancy between the output of the regional climate model and observational data. In contrast, removing the drift only in the SST field from the global model has hardly any positive effect on the regional climate model.

  11. Uncertainty and variability in computational and mathematical models of cardiac physiology.

    PubMed

    Mirams, Gary R; Pathmanathan, Pras; Gray, Richard A; Challenor, Peter; Clayton, Richard H

    2016-12-01

    Mathematical and computational models of cardiac physiology have been an integral component of cardiac electrophysiology since its inception, and are collectively known as the Cardiac Physiome. We identify and classify the numerous sources of variability and uncertainty in model formulation, parameters and other inputs that arise from both natural variation in experimental data and lack of knowledge. The impact of uncertainty on the outputs of Cardiac Physiome models is not well understood, and this limits their utility as clinical tools. We argue that incorporating variability and uncertainty should be a high priority for the future of the Cardiac Physiome. We suggest investigating the adoption of approaches developed in other areas of science and engineering while recognising unique challenges for the Cardiac Physiome; it is likely that novel methods will be necessary that require engagement with the mathematics and statistics community. The Cardiac Physiome effort is one of the most mature and successful applications of mathematical and computational modelling for describing and advancing the understanding of physiology. After five decades of development, physiological cardiac models are poised to realise the promise of translational research via clinical applications such as drug development and patient-specific approaches as well as ablation, cardiac resynchronisation and contractility modulation therapies. For models to be included as a vital component of the decision process in safety-critical applications, rigorous assessment of model credibility will be required. This White Paper describes one aspect of this process by identifying and classifying sources of variability and uncertainty in models as well as their implications for the application and development of cardiac models. We stress the need to understand and quantify the sources of variability and uncertainty in model inputs, and the impact of model structure and complexity and their consequences for predictive model outputs. We propose that the future of the Cardiac Physiome should include a probabilistic approach to quantify the relationship of variability and uncertainty of model inputs and outputs. © 2016 The Authors. The Journal of Physiology published by John Wiley & Sons Ltd on behalf of The Physiological Society.

  12. Detection and Attribution of Simulated Climatic Extreme Events and Impacts: High Sensitivity to Bias Correction

    NASA Astrophysics Data System (ADS)

    Sippel, S.; Otto, F. E. L.; Forkel, M.; Allen, M. R.; Guillod, B. P.; Heimann, M.; Reichstein, M.; Seneviratne, S. I.; Kirsten, T.; Mahecha, M. D.

    2015-12-01

    Understanding, quantifying and attributing the impacts of climatic extreme events and variability is crucial for societal adaptation in a changing climate. However, climate model simulations generated for this purpose typically exhibit pronounced biases in their output that hinders any straightforward assessment of impacts. To overcome this issue, various bias correction strategies are routinely used to alleviate climate model deficiencies most of which have been criticized for physical inconsistency and the non-preservation of the multivariate correlation structure. We assess how biases and their correction affect the quantification and attribution of simulated extremes and variability in i) climatological variables and ii) impacts on ecosystem functioning as simulated by a terrestrial biosphere model. Our study demonstrates that assessments of simulated climatic extreme events and impacts in the terrestrial biosphere are highly sensitive to bias correction schemes with major implications for the detection and attribution of these events. We introduce a novel ensemble-based resampling scheme based on a large regional climate model ensemble generated by the distributed weather@home setup[1], which fully preserves the physical consistency and multivariate correlation structure of the model output. We use extreme value statistics to show that this procedure considerably improves the representation of climatic extremes and variability. Subsequently, biosphere-atmosphere carbon fluxes are simulated using a terrestrial ecosystem model (LPJ-GSI) to further demonstrate the sensitivity of ecosystem impacts to the methodology of bias correcting climate model output. We find that uncertainties arising from bias correction schemes are comparable in magnitude to model structural and parameter uncertainties. The present study consists of a first attempt to alleviate climate model biases in a physically consistent way and demonstrates that this yields improved simulations of climate extremes and associated impacts. [1] http://www.climateprediction.net/weatherathome/

  13. The SYSGEN user package

    NASA Technical Reports Server (NTRS)

    Carlson, C. R.

    1981-01-01

    The user documentation of the SYSGEN model and its links with other simulations is described. The SYSGEN is a production costing and reliability model of electric utility systems. Hydroelectric, storage, and time dependent generating units are modeled in addition to conventional generating plants. Input variables, modeling options, output variables, and reports formats are explained. SYSGEN also can be run interactively by using a program called FEPS (Front End Program for SYSGEN). A format for SYSGEN input variables which is designed for use with FEPS is presented.

  14. On the Predictability of Northeast Monsoon Rainfall over South Peninsular India in General Circulation Models

    NASA Astrophysics Data System (ADS)

    Nair, Archana; Acharya, Nachiketa; Singh, Ankita; Mohanty, U. C.; Panda, T. C.

    2013-11-01

    In this study the predictability of northeast monsoon (Oct-Nov-Dec) rainfall over peninsular India by eight general circulation model (GCM) outputs was analyzed. These GCM outputs (forecasts for the whole season issued in September) were compared with high-resolution observed gridded rainfall data obtained from the India Meteorological Department for the period 1982-2010. Rainfall, interannual variability (IAV), correlation coefficients, and index of agreement were examined for the outputs of eight GCMs and compared with observation. It was found that the models are able to reproduce rainfall and IAV to different extents. The predictive power of GCMs was also judged by determining the signal-to-noise ratio and the external error variance; it was noted that the predictive power of the models was usually very low. To examine dominant modes of interannual variability, empirical orthogonal function (EOF) analysis was also conducted. EOF analysis of the models revealed they were capable of representing the observed precipitation variability to some extent. The teleconnection between the sea surface temperature (SST) and northeast monsoon rainfall was also investigated and results suggest that during OND the SST over the equatorial Indian Ocean, the Bay of Bengal, the central Pacific Ocean (over Nino3 region), and the north and south Atlantic Ocean enhances northeast monsoon rainfall. This observed phenomenon is only predicted by the CCM3v6 model.

  15. Method of operating a thermoelectric generator

    DOEpatents

    Reynolds, Michael G; Cowgill, Joshua D

    2013-11-05

    A method for operating a thermoelectric generator supplying a variable-load component includes commanding the variable-load component to operate at a first output and determining a first load current and a first load voltage to the variable-load component while operating at the commanded first output. The method also includes commanding the variable-load component to operate at a second output and determining a second load current and a second load voltage to the variable-load component while operating at the commanded second output. The method includes calculating a maximum power output of the thermoelectric generator from the determined first load current and voltage and the determined second load current and voltage, and commanding the variable-load component to operate at a third output. The commanded third output is configured to draw the calculated maximum power output from the thermoelectric generator.

  16. Analysis of sensitivity and uncertainty in an individual-based model of a threatened wildlife species

    Treesearch

    Bruce G. Marcot; Peter H. Singleton; Nathan H. Schumaker

    2015-01-01

    Sensitivity analysis—determination of how prediction variables affect response variables—of individual-based models (IBMs) are few but important to the interpretation of model output. We present sensitivity analysis of a spatially explicit IBM (HexSim) of a threatened species, the Northern Spotted Owl (NSO; Strix occidentalis caurina) in Washington...

  17. The Effect of Latent Binary Variables on the Uncertainty of the Prediction of a Dichotomous Outcome Using Logistic Regression Based Propensity Score Matching.

    PubMed

    Szekér, Szabolcs; Vathy-Fogarassy, Ágnes

    2018-01-01

    Logistic regression based propensity score matching is a widely used method in case-control studies to select the individuals of the control group. This method creates a suitable control group if all factors affecting the output variable are known. However, if relevant latent variables exist as well, which are not taken into account during the calculations, the quality of the control group is uncertain. In this paper, we present a statistics-based research in which we try to determine the relationship between the accuracy of the logistic regression model and the uncertainty of the dependent variable of the control group defined by propensity score matching. Our analyses show that there is a linear correlation between the fit of the logistic regression model and the uncertainty of the output variable. In certain cases, a latent binary explanatory variable can result in a relative error of up to 70% in the prediction of the outcome variable. The observed phenomenon calls the attention of analysts to an important point, which must be taken into account when deducting conclusions.

  18. Equivalent Sensor Radiance Generation and Remote Sensing from Model Parameters. Part 1; Equivalent Sensor Radiance Formulation

    NASA Technical Reports Server (NTRS)

    Wind, Galina; DaSilva, Arlindo M.; Norris, Peter M.; Platnick, Steven E.

    2013-01-01

    In this paper we describe a general procedure for calculating equivalent sensor radiances from variables output from a global atmospheric forecast model. In order to take proper account of the discrepancies between model resolution and sensor footprint the algorithm takes explicit account of the model subgrid variability, in particular its description of the probably density function of total water (vapor and cloud condensate.) The equivalent sensor radiances are then substituted into an operational remote sensing algorithm processing chain to produce a variety of remote sensing products that would normally be produced from actual sensor output. This output can then be used for a wide variety of purposes such as model parameter verification, remote sensing algorithm validation, testing of new retrieval methods and future sensor studies. We show a specific implementation using the GEOS-5 model, the MODIS instrument and the MODIS Adaptive Processing System (MODAPS) Data Collection 5.1 operational remote sensing cloud algorithm processing chain (including the cloud mask, cloud top properties and cloud optical and microphysical properties products.) We focus on clouds and cloud/aerosol interactions, because they are very important to model development and improvement.

  19. Multi-sensor Cloud Retrieval Simulator and Remote Sensing from Model Parameters . Pt. 1; Synthetic Sensor Radiance Formulation; [Synthetic Sensor Radiance Formulation

    NASA Technical Reports Server (NTRS)

    Wind, G.; DaSilva, A. M.; Norris, P. M.; Platnick, S.

    2013-01-01

    In this paper we describe a general procedure for calculating synthetic sensor radiances from variable output from a global atmospheric forecast model. In order to take proper account of the discrepancies between model resolution and sensor footprint, the algorithm takes explicit account of the model subgrid variability, in particular its description of the probability density function of total water (vapor and cloud condensate.) The simulated sensor radiances are then substituted into an operational remote sensing algorithm processing chain to produce a variety of remote sensing products that would normally be produced from actual sensor output. This output can then be used for a wide variety of purposes such as model parameter verification, remote sensing algorithm validation, testing of new retrieval methods and future sensor studies.We show a specific implementation using the GEOS-5 model, the MODIS instrument and the MODIS Adaptive Processing System (MODAPS) Data Collection 5.1 operational remote sensing cloud algorithm processing chain (including the cloud mask, cloud top properties and cloud optical and microphysical properties products). We focus on clouds because they are very important to model development and improvement.

  20. System level analysis and control of manufacturing process variation

    DOEpatents

    Hamada, Michael S.; Martz, Harry F.; Eleswarpu, Jay K.; Preissler, Michael J.

    2005-05-31

    A computer-implemented method is implemented for determining the variability of a manufacturing system having a plurality of subsystems. Each subsystem of the plurality of subsystems is characterized by signal factors, noise factors, control factors, and an output response, all having mean and variance values. Response models are then fitted to each subsystem to determine unknown coefficients for use in the response models that characterize the relationship between the signal factors, noise factors, control factors, and the corresponding output response having mean and variance values that are related to the signal factors, noise factors, and control factors. The response models for each subsystem are coupled to model the output of the manufacturing system as a whole. The coefficients of the fitted response models are randomly varied to propagate variances through the plurality of subsystems and values of signal factors and control factors are found to optimize the output of the manufacturing system to meet a specified criterion.

  1. An integrated prediction and optimization model of biogas production system at a wastewater treatment facility.

    PubMed

    Akbaş, Halil; Bilgen, Bilge; Turhan, Aykut Melih

    2015-11-01

    This study proposes an integrated prediction and optimization model by using multi-layer perceptron neural network and particle swarm optimization techniques. Three different objective functions are formulated. The first one is the maximization of methane percentage with single output. The second one is the maximization of biogas production with single output. The last one is the maximization of biogas quality and biogas production with two outputs. Methane percentage, carbon dioxide percentage, and other contents' percentage are used as the biogas quality criteria. Based on the formulated models and data from a wastewater treatment facility, optimal values of input variables and their corresponding maximum output values are found out for each model. It is expected that the application of the integrated prediction and optimization models increases the biogas production and biogas quality, and contributes to the quantity of electricity production at the wastewater treatment facility. Copyright © 2015 Elsevier Ltd. All rights reserved.

  2. Estimating Prediction Uncertainty from Geographical Information System Raster Processing: A User's Manual for the Raster Error Propagation Tool (REPTool)

    USGS Publications Warehouse

    Gurdak, Jason J.; Qi, Sharon L.; Geisler, Michael L.

    2009-01-01

    The U.S. Geological Survey Raster Error Propagation Tool (REPTool) is a custom tool for use with the Environmental System Research Institute (ESRI) ArcGIS Desktop application to estimate error propagation and prediction uncertainty in raster processing operations and geospatial modeling. REPTool is designed to introduce concepts of error and uncertainty in geospatial data and modeling and provide users of ArcGIS Desktop a geoprocessing tool and methodology to consider how error affects geospatial model output. Similar to other geoprocessing tools available in ArcGIS Desktop, REPTool can be run from a dialog window, from the ArcMap command line, or from a Python script. REPTool consists of public-domain, Python-based packages that implement Latin Hypercube Sampling within a probabilistic framework to track error propagation in geospatial models and quantitatively estimate the uncertainty of the model output. Users may specify error for each input raster or model coefficient represented in the geospatial model. The error for the input rasters may be specified as either spatially invariant or spatially variable across the spatial domain. Users may specify model output as a distribution of uncertainty for each raster cell. REPTool uses the Relative Variance Contribution method to quantify the relative error contribution from the two primary components in the geospatial model - errors in the model input data and coefficients of the model variables. REPTool is appropriate for many types of geospatial processing operations, modeling applications, and related research questions, including applications that consider spatially invariant or spatially variable error in geospatial data.

  3. The CESM Large Ensemble Project: Inspiring New Ideas and Understanding

    NASA Astrophysics Data System (ADS)

    Kay, J. E.; Deser, C.

    2016-12-01

    While internal climate variability is known to affect climate projections, its influence is often underappreciated and confused with model error. Why? In general, modeling centers contribute a small number of realizations to international climate model assessments [e.g., phase 5 of the Coupled Model Intercomparison Project (CMIP5)]. As a result, model error and internal climate variability are difficult, and at times impossible, to disentangle. In response, the Community Earth System Model (CESM) community designed the CESM Large Ensemble (CESM-LE) with the explicit goal of enabling assessment of climate change in the presence of internal climate variability. All CESM-LE simulations use a single CMIP5 model (CESM with the Community Atmosphere Model, version 5). The core simulations replay the twenty to twenty-first century (1920-2100) 40+ times under historical and representative concentration pathway 8.5 external forcing with small initial condition differences. Two companion 2000+-yr-long preindustrial control simulations (fully coupled, prognostic atmosphere and land only) allow assessment of internal climate variability in the absence of climate change. Comprehensive outputs, including many daily fields, are available as single-variable time series on the Earth System Grid for anyone to use. Examples of scientists and stakeholders that are using the CESM-LE outputs to help interpret the observational record, to understand projection spread and to plan for a range of possible futures influenced by both internal climate variability and forced climate change will be highlighted the presentation.

  4. Spatiotemporal prediction of fine particulate matter during the 2008 northern California wildfires using machine learning.

    PubMed

    Reid, Colleen E; Jerrett, Michael; Petersen, Maya L; Pfister, Gabriele G; Morefield, Philip E; Tager, Ira B; Raffuse, Sean M; Balmes, John R

    2015-03-17

    Estimating population exposure to particulate matter during wildfires can be difficult because of insufficient monitoring data to capture the spatiotemporal variability of smoke plumes. Chemical transport models (CTMs) and satellite retrievals provide spatiotemporal data that may be useful in predicting PM2.5 during wildfires. We estimated PM2.5 concentrations during the 2008 northern California wildfires using 10-fold cross-validation (CV) to select an optimal prediction model from a set of 11 statistical algorithms and 29 predictor variables. The variables included CTM output, three measures of satellite aerosol optical depth, distance to the nearest fires, meteorological data, and land use, traffic, spatial location, and temporal characteristics. The generalized boosting model (GBM) with 29 predictor variables had the lowest CV root mean squared error and a CV-R2 of 0.803. The most important predictor variable was the Geostationary Operational Environmental Satellite Aerosol/Smoke Product (GASP) Aerosol Optical Depth (AOD), followed by the CTM output and distance to the nearest fire cluster. Parsimonious models with various combinations of fewer variables also predicted PM2.5 well. Using machine learning algorithms to combine spatiotemporal data from satellites and CTMs can reliably predict PM2.5 concentrations during a major wildfire event.

  5. Downscaling GCM Output with Genetic Programming Model

    NASA Astrophysics Data System (ADS)

    Shi, X.; Dibike, Y. B.; Coulibaly, P.

    2004-05-01

    Climate change impact studies on watershed hydrology require reliable data at appropriate spatial and temporal resolution. However, the outputs of the current global climate models (GCMs) cannot be used directly because GCM do not provide hourly or daily precipitation and temperature reliable enough for hydrological modeling. Nevertheless, we can get more reliable data corresponding to future climate scenarios derived from GCM outputs using the so called 'downscaling techniques'. This study applies Genetic Programming (GP) based technique to downscale daily precipitation and temperature values at the Chute-du-Diable basin of the Saguenay watershed in Canada. In applying GP downscaling technique, the objective is to find a relationship between the large-scale predictor variables (NCEP data which provide daily information concerning the observed large-scale state of the atmosphere) and the predictand (meteorological data which describes conditions at the site scale). The selection of the most relevant predictor variables is achieved using the Pearson's coefficient of determination ( R2) (between the large-scale predictor variables and the daily meteorological data). In this case, the period (1961 - 2000) is identified to represent the current climate condition. For the forty years of data, the first 30 years (1961-1990) are considered for calibrating the models while the remaining ten years of data (1991-2000) are used to validate those models. In general, the R2 between the predictor variables and each predictand is very low in case of precipitation compared to that of maximum and minimum temperature. Moreover, the strength of individual predictors varies for every month and for each GP grammar. Therefore, the most appropriate combination of predictors has to be chosen by looking at the output analysis of all the twelve months and the different GP grammars. During the calibration of the GP model for precipitation downscaling, in addition to the mean daily precipitation and daily precipitation variability for each month, monthly average dry and wet-spell lengths are also considered as performance criteria. For the cases of Tmax and Tmin, means and variances of these variables corresponding to each month were considered as performance criteria. The GP downscaling results show satisfactory agreement between the observed daily temperature (Tmax and Tmin) and the simulated temperature. However, the downscaling results for the daily precipitation still require some improvement - suggesting further investigation of other grammars. KEY WORDS: Climate change; GP downscaling; GCM.

  6. Integrated controls design optimization

    DOEpatents

    Lou, Xinsheng; Neuschaefer, Carl H.

    2015-09-01

    A control system (207) for optimizing a chemical looping process of a power plant includes an optimizer (420), an income algorithm (230) and a cost algorithm (225) and a chemical looping process models. The process models are used to predict the process outputs from process input variables. Some of the process in puts and output variables are related to the income of the plant; and some others are related to the cost of the plant operations. The income algorithm (230) provides an income input to the optimizer (420) based on a plurality of input parameters (215) of the power plant. The cost algorithm (225) provides a cost input to the optimizer (420) based on a plurality of output parameters (220) of the power plant. The optimizer (420) determines an optimized operating parameter solution based on at least one of the income input and the cost input, and supplies the optimized operating parameter solution to the power plant.

  7. Role of Updraft Velocity in Temporal Variability of Global Cloud Hydrometeor Number

    NASA Technical Reports Server (NTRS)

    Sullivan, Sylvia C.; Lee, Dong Min; Oreopoulos, Lazaros; Nenes, Athanasios

    2016-01-01

    Understanding how dynamical and aerosol inputs affect the temporal variability of hydrometeor formation in climate models will help to explain sources of model diversity in cloud forcing, to provide robust comparisons with data, and, ultimately, to reduce the uncertainty in estimates of the aerosol indirect effect. This variability attribution can be done at various spatial and temporal resolutions with metrics derived from online adjoint sensitivities of droplet and crystal number to relevant inputs. Such metrics are defined and calculated from simulations using the NASA Goddard Earth Observing System Model, Version 5 (GEOS-5) and the National Center for Atmospheric Research Community Atmosphere Model Version 5.1 (CAM5.1). Input updraft velocity fluctuations can explain as much as 48% of temporal variability in output ice crystal number and 61% in droplet number in GEOS-5 and up to 89% of temporal variability in output ice crystal number in CAM5.1. In both models, this vertical velocity attribution depends strongly on altitude. Despite its importance for hydrometeor formation, simulated vertical velocity distributions are rarely evaluated against observations due to the sparsity of relevant data. Coordinated effort by the atmospheric community to develop more consistent, observationally based updraft treatments will help to close this knowledge gap.

  8. Role of updraft velocity in temporal variability of global cloud hydrometeor number

    DOE PAGES

    Sullivan, Sylvia C.; Lee, Dongmin; Oreopoulos, Lazaros; ...

    2016-05-16

    Understanding how dynamical and aerosol inputs affect the temporal variability of hydrometeor formation in climate models will help to explain sources of model diversity in cloud forcing, to provide robust comparisons with data, and, ultimately, to reduce the uncertainty in estimates of the aerosol indirect effect. This variability attribution can be done at various spatial and temporal resolutions with metrics derived from online adjoint sensitivities of droplet and crystal number to relevant inputs. Such metrics are defined and calculated from simulations using the NASA Goddard Earth Observing System Model, Version 5 (GEOS-5) and the National Center for Atmospheric Research Communitymore » Atmosphere Model Version 5.1 (CAM5.1). Input updraft velocity fluctuations can explain as much as 48% of temporal variability in output ice crystal number and 61% in droplet number in GEOS-5 and up to 89% of temporal variability in output ice crystal number in CAM5.1. In both models, this vertical velocity attribution depends strongly on altitude. Despite its importance for hydrometeor formation, simulated vertical velocity distributions are rarely evaluated against observations due to the sparsity of relevant data. Finally, coordinated effort by the atmospheric community to develop more consistent, observationally based updraft treatments will help to close this knowledge gap.« less

  9. Role of updraft velocity in temporal variability of global cloud hydrometeor number

    NASA Astrophysics Data System (ADS)

    Sullivan, Sylvia C.; Lee, Dongmin; Oreopoulos, Lazaros; Nenes, Athanasios

    2016-05-01

    Understanding how dynamical and aerosol inputs affect the temporal variability of hydrometeor formation in climate models will help to explain sources of model diversity in cloud forcing, to provide robust comparisons with data, and, ultimately, to reduce the uncertainty in estimates of the aerosol indirect effect. This variability attribution can be done at various spatial and temporal resolutions with metrics derived from online adjoint sensitivities of droplet and crystal number to relevant inputs. Such metrics are defined and calculated from simulations using the NASA Goddard Earth Observing System Model, Version 5 (GEOS-5) and the National Center for Atmospheric Research Community Atmosphere Model Version 5.1 (CAM5.1). Input updraft velocity fluctuations can explain as much as 48% of temporal variability in output ice crystal number and 61% in droplet number in GEOS-5 and up to 89% of temporal variability in output ice crystal number in CAM5.1. In both models, this vertical velocity attribution depends strongly on altitude. Despite its importance for hydrometeor formation, simulated vertical velocity distributions are rarely evaluated against observations due to the sparsity of relevant data. Coordinated effort by the atmospheric community to develop more consistent, observationally based updraft treatments will help to close this knowledge gap.

  10. Sobol' sensitivity analysis for stressor impacts on honeybee ...

    EPA Pesticide Factsheets

    We employ Monte Carlo simulation and nonlinear sensitivity analysis techniques to describe the dynamics of a bee exposure model, VarroaPop. Daily simulations are performed of hive population trajectories, taking into account queen strength, foraging success, mite impacts, weather, colony resources, population structure, and other important variables. This allows us to test the effects of defined pesticide exposure scenarios versus controlled simulations that lack pesticide exposure. The daily resolution of the model also allows us to conditionally identify sensitivity metrics. We use the variancebased global decomposition sensitivity analysis method, Sobol’, to assess firstand secondorder parameter sensitivities within VarroaPop, allowing us to determine how variance in the output is attributed to each of the input variables across different exposure scenarios. Simulations with VarroaPop indicate queen strength, forager life span and pesticide toxicity parameters are consistent, critical inputs for colony dynamics. Further analysis also reveals that the relative importance of these parameters fluctuates throughout the simulation period according to the status of other inputs. Our preliminary results show that model variability is conditional and can be attributed to different parameters depending on different timescales. By using sensitivity analysis to assess model output and variability, calibrations of simulation models can be better informed to yield more

  11. Specification of ISS Plasma Environment Variability

    NASA Technical Reports Server (NTRS)

    Minow, Joseph I.; Neergaard, Linda F.; Bui, Them H.; Mikatarian, Ronald R.; Barsamian, H.; Koontz, Steven L.

    2004-01-01

    Quantifying spacecraft charging risks and associated hazards for the International Space Station (ISS) requires a plasma environment specification for the natural variability of ionospheric temperature (Te) and density (Ne). Empirical ionospheric specification and forecast models such as the International Reference Ionosphere (IRI) model typically only provide long term (seasonal) mean Te and Ne values for the low Earth orbit environment. This paper describes a statistical analysis of historical ionospheric low Earth orbit plasma measurements from the AE-C, AE-D, and DE-2 satellites used to derive a model of deviations of observed data values from IRI-2001 estimates of Ne, Te parameters for each data point to provide a statistical basis for modeling the deviations of the plasma environment from the IRI model output. Application of the deviation model with the IRI-2001 output yields a method for estimating extreme environments for the ISS spacecraft charging analysis.

  12. Hydrologic response to multimodel climate output using a physically based model of groundwater/surface water interactions

    NASA Astrophysics Data System (ADS)

    Sulis, M.; Paniconi, C.; Marrocu, M.; Huard, D.; Chaumont, D.

    2012-12-01

    General circulation models (GCMs) are the primary instruments for obtaining projections of future global climate change. Outputs from GCMs, aided by dynamical and/or statistical downscaling techniques, have long been used to simulate changes in regional climate systems over wide spatiotemporal scales. Numerous studies have acknowledged the disagreements between the various GCMs and between the different downscaling methods designed to compensate for the mismatch between climate model output and the spatial scale at which hydrological models are applied. Very little is known, however, about the importance of these differences once they have been input or assimilated by a nonlinear hydrological model. This issue is investigated here at the catchment scale using a process-based model of integrated surface and subsurface hydrologic response driven by outputs from 12 members of a multimodel climate ensemble. The data set consists of daily values of precipitation and min/max temperatures obtained by combining four regional climate models and five GCMs. The regional scenarios were downscaled using a quantile scaling bias-correction technique. The hydrologic response was simulated for the 690 km2des Anglais catchment in southwestern Quebec, Canada. The results show that different hydrological components (river discharge, aquifer recharge, and soil moisture storage) respond differently to precipitation and temperature anomalies in the multimodel climate output, with greater variability for annual discharge compared to recharge and soil moisture storage. We also find that runoff generation and extreme event-driven peak hydrograph flows are highly sensitive to any uncertainty in climate data. Finally, the results show the significant impact of changing sequences of rainy days on groundwater recharge fluxes and the influence of longer dry spells in modifying soil moisture spatial variability.

  13. Sensitivity analysis of a short distance atmospheric dispersion model applied to the Fukushima disaster

    NASA Astrophysics Data System (ADS)

    Périllat, Raphaël; Girard, Sylvain; Korsakissok, Irène; Mallet, Vivien

    2015-04-01

    In a previous study, the sensitivity of a long-distance model was analyzed on the Fukushima Daiichi disaster case with the Morris screening method. It showed that a few variables, such as the horizontal diffusion coefficient or cloud thickness, have a weak influence on most of the chosen outputs. The purpose of the present study is to apply a similar methodology to IRSN's operational short-distance atmospheric dispersion model, called pX. Atmospheric dispersion models are very useful in the case of accidental releases of pollutants, both to minimize population exposure during the accident and to obtain an accurate assessment of short- and long-term environmental and sanitary impacts. Long-range models are mostly used for consequence assessment, while short-range models are better adapted to the early phases of the crisis and are used for prognosis. The Morris screening method was used to estimate the sensitivity of a set of outputs and to rank the inputs by their influence. The input ranking is highly dependent on the considered output, but a few variables seem to have a weak influence on most of them. This first step revealed that interactions and nonlinearity are much more pronounced in the short-range model than in the long-range one. Afterward, the Sobol' method was used to obtain more quantitative results on the same set of outputs. Using this method was possible for the short-range model because it is far less computationally demanding than the long-range model. The study also compares two parameterizations, Doury's and Pasquill's, to contrast their behavior. Doury's model seems to inflate the influence of some inputs relative to Pasquill's, such as the altitude of emission and the air stability, which do not play the same role in the two models. The outputs of the long-range model were dominated by only a few inputs; in this study, by contrast, the influence is shared more evenly among the inputs.
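
    For reference, a Morris screening of this general kind can be sketched with the SALib package; the input names, bounds, and toy model below are illustrative, not pX's actual parameters.

    ```python
    # Sketch: Morris elementary-effects screening with SALib.
    # Input names/bounds are illustrative, not the pX model's.
    import numpy as np
    from SALib.sample.morris import sample as morris_sample
    from SALib.analyze import morris

    problem = {
        "num_vars": 3,
        "names": ["release_height", "wind_speed", "diffusion_coeff"],
        "bounds": [[10.0, 150.0], [0.5, 20.0], [0.1, 10.0]],
    }

    def dispersion_toy(x):
        # Toy ground-level concentration proxy.
        return np.exp(-x[0] / 100.0) / (x[1] * np.sqrt(x[2]))

    X = morris_sample(problem, N=100, num_levels=4)
    Y = np.apply_along_axis(dispersion_toy, 1, X)
    res = morris.analyze(problem, X, Y, num_levels=4)
    print(res["mu_star"])  # mean absolute elementary effect per input
    print(res["sigma"])    # spread, indicating interactions/nonlinearity
    ```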

  14. Use of Regional Climate Model Output for Hydrologic Simulations

    NASA Astrophysics Data System (ADS)

    Hay, L. E.; Clark, M. P.; Wilby, R. L.; Gutowski, W. J.; Leavesley, G. H.; Pan, Z.; Arritt, R. W.; Takle, E. S.

    2001-12-01

    Daily precipitation and maximum and minimum temperature time series from a Regional Climate Model (RegCM2) were used as input to a distributed hydrologic model for a rainfall-dominated basin (Alapaha River at Statenville, Georgia) and three snowmelt-dominated basins (Animas River at Durango, Colorado; East Fork of the Carson River near Gardnerville, Nevada; and Cle Elum River near Roslyn, Washington). For comparison purposes, spatially averaged daily data sets of precipitation and maximum and minimum temperature were developed from measured data. These data sets included precipitation and temperature data for all stations located within the area of the RegCM2 model output used for each basin, but excluded station data used to calibrate the hydrologic model. Both the RegCM2 output and station data capture the gross aspects of the seasonal cycles of precipitation and temperature. However, in all four basins, the RegCM2- and station-based simulations of runoff show little skill on a daily basis (Nash-Sutcliffe (NS) values ranging from 0.05 to 0.37 for RegCM2 and from -0.08 to 0.65 for station data). When the precipitation and temperature biases are corrected in the RegCM2 output and station data sets (Bias-RegCM2 and Bias-station, respectively), the accuracy of the daily runoff simulations improves dramatically for the snowmelt-dominated basins. In the rainfall-dominated basin, runoff simulations based on the Bias-RegCM2 output show no skill (NS value of 0.09), whereas Bias-station simulated runoff improves (NS value improved from -0.08 to 0.72). These results indicate that the resolution of the RegCM2 output is appropriate for basin-scale modeling, but the RegCM2 output does not contain the day-to-day variability needed for basin-scale modeling in rainfall-dominated basins. Future work is warranted to identify the causes of systematic biases in RegCM2 simulations, develop methods to remove the biases, and improve RegCM2 simulations of daily variability in local climate.
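
    The Nash-Sutcliffe skill scores quoted above follow the standard definition; a minimal sketch:

    ```python
    # Sketch: Nash-Sutcliffe efficiency (NS) for daily runoff.
    # NS = 1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2);
    # 1 is perfect, 0 is no better than the observed mean.
    import numpy as np

    def nash_sutcliffe(obs, sim):
        obs, sim = np.asarray(obs, float), np.asarray(sim, float)
        return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

    obs = np.array([1.2, 3.4, 2.2, 5.6, 4.1])
    sim = np.array([1.0, 3.0, 2.5, 5.0, 4.4])
    print(nash_sutcliffe(obs, sim))
    ```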

  15. Output variability across animals and levels in a motor system

    PubMed Central

    Norris, Brian J; Günay, Cengiz; Kueh, Daniel

    2018-01-01

    Rhythmic behaviors vary across individuals. We investigated the sources of this output variability across a motor system, from the central pattern generator (CPG) to the motor plant. In the bilaterally symmetric leech heartbeat system, the CPG orchestrates two coordinations in the bilateral hearts with different intersegmental phase relations (Δϕ) and periodic side-to-side switches. Population variability is large. We show that the system is precise within a coordination, that differences in repetitions of a coordination contribute little to population output variability, but that differences between bilaterally homologous cells may contribute to some of this variability. Nevertheless, much output variability is likely associated with genetic and life history differences among individuals. Variability of Δϕ was coordination-specific: similar at all levels in one, but significantly lower for the motor pattern than the CPG pattern in the other. Mechanisms that transform CPG output to motor neurons may limit output variability in the motor pattern. PMID:29345614

  16. Climate Expressions in Cellulose Isotopologues Over the Southeast Asian Monsoon Domain

    NASA Astrophysics Data System (ADS)

    Herzog, M. G.; LeGrande, A. N.; Anchukaitis, K. J.

    2013-12-01

    Southeast Asia experiences a highly variable climate, strongly influenced by the Southeast Asian monsoon. Oxygen isotopes in the alpha-cellulose of tree rings can be used as a proxy measure of climate, but it is not clear which parameter (precipitation, temperature, water vapor, etc.) is the most influential. Earlier forward models using observed meteorological data have been successful in simulating tree ring cellulose oxygen isotopes in the tropics. However, by creating a cellulose oxygen isotope model that uses input data from GISS ModelE climate runs, we are able to reduce model variability and integrate δ18O in tree ring cellulose over the entire monsoon domain for the past millennium. Simulated timescales of δ18O in cellulose show a consistent annual cycle, allowing confidence in the identification of interdecadal and interannual climate variability. By comparing paleoclimate data with General Circulation Model (GCM) outputs and a forward tree cellulose δ18O model, this study explores how δ18O can be used as a proxy measure of the monsoon on both local and regional scales. Simulated δ18O in soil water and δ18O in water vapor were found to explain the most variability in the paleoclimate data. Precipitation amount and temperature held little significance. Our results suggest that δ18O in tree cellulose is most influenced by regional controls directly related to cellulose production. [Figure: top, monthly modeled output for δ18O in cellulose; center, annually averaged model output of δ18O in cellulose; bottom, observed monthly paleoproxy data for δ18O in cellulose.]

  17. Modeling habitat distribution from organism occurrences and environmental data: Case study using anemonefishes and their sea anemone hosts

    USGS Publications Warehouse

    Guinotte, J.M.; Bartley, J.D.; Iqbal, A.; Fautin, D.G.; Buddemeier, R.W.

    2006-01-01

    We demonstrate the KGSMapper (Kansas Geological Survey Mapper), a straightforward, web-based biogeographic tool that uses environmental conditions of places where members of a taxon are known to occur to find other places containing suitable habitat for them. Using occurrence data for anemonefishes or their host sea anemones, and data for environmental parameters, we generated maps of suitable habitat for the organisms. The fact that the fishes are obligate symbionts of the anemones allowed us to validate the KGSMapper output: we were able to compare the inferred occurrence of the organism to that of the actual occurrence of its symbiont. Characterizing suitable habitat for these organisms in the Indo-West Pacific, the region where they naturally occur, can be used to guide conservation efforts, field work, etc.; defining suitable habitat for them in the Atlantic and eastern Pacific is relevant to identifying areas vulnerable to biological invasions. We advocate distinguishing between these 2 sorts of model output, terming the former maps of realized habitat and the latter maps of potential habitat. Creation of a niche model requires adding biotic data to the environmental data used for habitat maps: we included data on fish occurrences to infer anemone distribution and vice versa. Altering the selection of environmental variables allowed us to investigate which variables may exert the most influence on organism distribution. Adding variables does not necessarily improve precision of the model output. KGSMapper output distinguishes areas that fall within 1 standard deviation (SD) of the mean environmental variable values for places where members of the taxon occur, within 2 SD, and within the entire range of values; eliminating outliers or data known to be imprecise or inaccurate improved output precision mainly in the 2 SD range and beyond. Thus, KGSMapper is robust in the face of questionable data, offering the user a way to recognize and clean such data. It also functions well with sparse datasets. These features make it useful for biogeographic meta-analyses with the diverse, distributed datasets that are typical for marine organisms lacking direct commercial value. © Inter-Research 2006.
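
    A minimal sketch of the SD-envelope idea (not the actual KGSMapper implementation): each candidate cell is classed by how far its environmental values lie from the occurrence-site means.

    ```python
    # Sketch: SD-envelope habitat scoring in the spirit of KGSMapper.
    # Illustrative only; not the actual KGSMapper algorithm.
    import numpy as np

    def envelope_class(occ_env, grid_env):
        """occ_env: (n_sites, n_vars) environment at known occurrences.
        grid_env: (n_cells, n_vars) environment at candidate cells.
        Returns 1 if all variables are within 1 SD of the occurrence
        mean, 2 if within 2 SD, 3 if within the observed min-max
        range, else 0."""
        mu, sd = occ_env.mean(0), occ_env.std(0)
        lo, hi = occ_env.min(0), occ_env.max(0)
        z = np.abs((grid_env - mu) / sd)
        cls = np.zeros(len(grid_env), dtype=int)
        cls[((grid_env >= lo) & (grid_env <= hi)).all(1)] = 3
        cls[(z <= 2).all(1)] = 2
        cls[(z <= 1).all(1)] = 1
        return cls

    rng = np.random.default_rng(1)
    occ = rng.normal([26.0, 35.0], [1.0, 0.5], (40, 2))    # e.g., SST, salinity
    grid = rng.normal([25.0, 35.0], [3.0, 1.0], (1000, 2))
    print(np.bincount(envelope_class(occ, grid), minlength=4))
    ```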

  18. Computer program for design analysis of radial-inflow turbines

    NASA Technical Reports Server (NTRS)

    Glassman, A. J.

    1976-01-01

    A computer program written in FORTRAN for the design analysis of radial-inflow turbines is documented. The following information is included: the loss model (estimation of losses), the analysis equations, a description of the input and output data, the FORTRAN program listing and list of variables, and sample cases. The input design requirements include the power, mass flow rate, inlet temperature and pressure, and rotational speed. The program output data include various diameters, efficiencies, temperatures, pressures, velocities, and flow angles for the appropriate calculation stations. The design variables include the stator-exit angle, rotor radius ratios, and rotor-exit tangential velocity distribution. The losses are determined by an internal loss model.

  19. ARCAS (ACACIA Regional Climate-data Access System) -- a Web Access System for Climate Model Data Access, Visualization and Comparison

    NASA Astrophysics Data System (ADS)

    Hakkarinen, C.; Brown, D.; Callahan, J.; hankin, S.; de Koningh, M.; Middleton-Link, D.; Wigley, T.

    2001-05-01

    A Web-based access system to climate model output data sets for intercomparison and analysis has been produced, using the NOAA/PMEL-developed Live Access Server software as the host server and Ferret as the data serving and visualization engine. Called ARCAS ("ACACIA Regional Climate-data Access System"), and publicly accessible at http://dataserver.ucar.edu/arcas, the site currently serves climate model outputs from runs of the NCAR Climate System Model for the 21st century, for Business as Usual and Stabilization of Greenhouse Gas Emission scenarios. Users can select, download, and graphically display single variables or comparisons of two variables from either or both of the CSM model runs, averaged at monthly, seasonal, or annual time resolutions. The time length of the averaging period, and the geographical domain for download and display, are fully selectable by the user. A variety of arithmetic operations on the data variables can be computed "on the fly", as defined by the user. Expansions of the user-selectable options for defining analysis operations, and for accessing other DODS-compatible ("Distributed Oceanographic Data System"-compatible) data sets residing at locations other than the NCAR hardware server on which ARCAS operates, are planned for this year. These expansions are designed to allow users quick and easy-to-operate Web-based access to the largest possible selection of climate model output data sets available throughout the world.

  20. Rotorcraft Optimization Tools: Incorporating Rotorcraft Design Codes into Multi-Disciplinary Design, Analysis, and Optimization

    NASA Technical Reports Server (NTRS)

    Meyn, Larry A.

    2018-01-01

    One of the goals of NASA's Revolutionary Vertical Lift Technology Project (RVLT) is to provide validated tools for multidisciplinary design, analysis and optimization (MDAO) of vertical lift vehicles. As part of this effort, the software package RotorCraft Optimization Tools (RCOTOOLS) is being developed to facilitate incorporating key rotorcraft conceptual design codes into optimizations using the OpenMDAO multi-disciplinary optimization framework written in Python. RCOTOOLS, also written in Python, currently supports the incorporation of the NASA Design and Analysis of RotorCraft (NDARC) vehicle sizing tool and the Comprehensive Analytical Model of Rotorcraft Aerodynamics and Dynamics II (CAMRAD II) analysis tool into OpenMDAO-driven optimizations. Both of these tools use detailed, file-based inputs and outputs, so RCOTOOLS provides software wrappers to update input files with new design variable values, execute these codes, and then extract specific response variable values from the file outputs. These wrappers are designed to be flexible and easy to use. RCOTOOLS also provides several utilities to aid in optimization model development, including Graphical User Interface (GUI) tools for browsing input and output files in order to identify the text strings that mark specific variables as optimization inputs and response variables. This paper provides an overview of RCOTOOLS and its use in OpenMDAO-driven optimizations.
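
    The file-based wrapping pattern described above can be sketched generically: substitute design-variable values into an input template, run the external code, and parse response values from its output. The file names, placeholder syntax, and executable below are hypothetical, not the real NDARC or CAMRAD II interfaces.

    ```python
    # Sketch: generic file-based wrapper around an external analysis
    # code. File names, placeholders, and the executable are
    # hypothetical; the real RCOTOOLS wrappers differ.
    import re
    import subprocess
    from pathlib import Path

    def run_case(design_vars, template="input.tmpl", infile="case.inp",
                 exe="./analysis_code", outfile="case.out"):
        # 1) Fill the template: replace tokens like {rotor_radius}.
        text = Path(template).read_text()
        for name, value in design_vars.items():
            text = text.replace("{" + name + "}", f"{value:.6g}")
        Path(infile).write_text(text)

        # 2) Execute the external code on the generated input file.
        subprocess.run([exe, infile], check=True)

        # 3) Extract a response, e.g. a line "GrossWeight = 1234.5".
        out = Path(outfile).read_text()
        match = re.search(r"GrossWeight\s*=\s*([-\d.Ee+]+)", out)
        return float(match.group(1))

    # run_case({"rotor_radius": 8.2, "disk_loading": 10.5})
    ```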

  1. Nine time steps: ultra-fast statistical consistency testing of the Community Earth System Model (pyCECT v3.0)

    NASA Astrophysics Data System (ADS)

    Milroy, Daniel J.; Baker, Allison H.; Hammerling, Dorit M.; Jessup, Elizabeth R.

    2018-02-01

    The Community Earth System Model Ensemble Consistency Test (CESM-ECT) suite was developed as an alternative to requiring bitwise identical output for quality assurance. This objective test provides a statistical measurement of consistency between an accepted ensemble created by small initial temperature perturbations and a test set of CESM simulations. In this work, we extend the CESM-ECT suite with an inexpensive and robust test for ensemble consistency that is applied to Community Atmospheric Model (CAM) output after only nine model time steps. We demonstrate that adequate ensemble variability is achieved with instantaneous variable values at the ninth step, despite rapid perturbation growth and heterogeneous variable spread. We refer to this new test as the Ultra-Fast CAM Ensemble Consistency Test (UF-CAM-ECT) and demonstrate its effectiveness in practice, including its ability to detect small-scale events and its applicability to the Community Land Model (CLM). The new ultra-fast test facilitates CESM development, porting, and optimization efforts, particularly when used to complement information from the original CESM-ECT suite of tools.

  2. Rice production model based on the concept of ecological footprint

    NASA Astrophysics Data System (ADS)

    Faiz, S. A.; Wicaksono, A. D.; Dinanti, D.

    2017-06-01

    According to the Region Spatial Plan (RTRW) of Malang Regency for the period 2010-2030, Malang Regency is designated as a center of agricultural development, including the districts bordering Malang City. To protect the region's function as a provider of rice production, the sustainable food farmland (LP2B) policy was established to protect rice-land. In practice, the LP2B system has not been fully implemented, which has limited the extent of rice-land available to deliver rice production. One cause is the development of settlements and industries under the influence of Malang City, which has converted land from agricultural use. The research focused on 30 villages that directly border Malang City. A model was developed relating farm production output to ecological footprint variables: rice-land area (X1), built-land percentage (X2), and number of farmers (X3). The analysis technique was regression. The regression yielded the rice production model Y = -207.983 + 10.246X1; rice-land area (X1) was the most influential independent variable. It was concluded that, of the villages directly bordering Malang City, 11 villages have higher production potential because their rice production exceeds 1,000 tons/year, while 12 villages are threatened with low production because their rice production only reaches 500 tons/year. Based on the model and the spatial direction of the RTRW, the farming development policy should be redesigned to maintain rice-land area in regions where agricultural activity is still dominant, because rice-land area is the most influential factor in farm production: the wider the rice-land, the higher the rice production output in each village.
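
    A regression of this form can be reproduced with ordinary least squares; a minimal sketch with made-up data (the paper's village data are not reproduced here):

    ```python
    # Sketch: OLS fit of the form Y = a + b * X1 (rice output versus
    # rice-land area). The data below are invented for illustration.
    import numpy as np

    x1 = np.array([60.0, 85.0, 110.0, 140.0, 95.0])      # rice-land area
    y = np.array([410.0, 660.0, 920.0, 1230.0, 770.0])   # output (tons/yr)

    A = np.column_stack([np.ones_like(x1), x1])
    (a, b), *_ = np.linalg.lstsq(A, y, rcond=None)
    print(f"Y = {a:.3f} + {b:.3f} * X1")
    ```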

  3. Dynamic modeling and parameter estimation of a radial and loop type distribution system network

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jun Qui; Heng Chen; Girgis, A.A.

    1993-05-01

    This paper presents a new identification approach to three-phase power system modeling and model reduction, treating the power system network as a multi-input, multi-output (MIMO) process. The model estimate can be obtained in discrete-time input-output form, discrete- or continuous-time state-space variable form, or frequency-domain impedance transfer function matrix form. An algorithm for determining the model structure of this MIMO process is described. The effect of measurement noise on the approach is also discussed. The approach has been applied to a sample system, and simulation results are presented in this paper.

  4. AQUATOX Frequently Asked Questions

    EPA Pesticide Factsheets

    Capabilities, Installation, Source Code, Example Study Files, Biotic State Variables, Initial Conditions, Loadings, Volume, Sediments, Parameters, Libraries, Ecotoxicology, Waterbodies, Link to Watershed Models, Output, Metals, Troubleshooting

  5. Estimating the Uncertain Mathematical Structure of Hydrological Model via Bayesian Data Assimilation

    NASA Astrophysics Data System (ADS)

    Bulygina, N.; Gupta, H.; O'Donell, G.; Wheater, H.

    2008-12-01

    The structure of a hydrological model at the macro scale (e.g., watershed) is inherently uncertain due to many factors, including the lack of a robust hydrological theory at that scale. In this work, we assume that a suitable conceptual model for the hydrologic system has already been determined - i.e., the system boundaries have been specified, the important state variables and input and output fluxes to be included have been selected, and the major hydrological processes and the geometries of their interconnections have been identified. The structural identification problem then is to specify the mathematical form of the relationships between the inputs, state variables and outputs, so that a computational model can be constructed for making simulations and/or predictions of system input-state-output behaviour. We show how Bayesian data assimilation can be used to merge prior beliefs, in the form of pre-assumed model equations, with information derived from the data to construct a posterior model. The approach, entitled Bayesian Estimation of Structure (BESt), is used to estimate a hydrological model for a small basin in England, at hourly time scales, conditioned on the assumption of a conceptual model structure with a 3-dimensional state (soil moisture storage, fast and slow flow stores). Inputs to the system are precipitation and potential evapotranspiration, and outputs are actual evapotranspiration and streamflow discharge. Results show the difference between prior and posterior mathematical structures, as well as provide prediction confidence intervals that reflect three types of uncertainty: due to initial conditions, due to input, and due to mathematical structure.

  6. An improved standardization procedure to remove systematic low frequency variability biases in GCM simulations

    NASA Astrophysics Data System (ADS)

    Mehrotra, Rajeshwar; Sharma, Ashish

    2012-12-01

    The quality of the absolute estimates of general circulation models (GCMs) calls into question the direct use of GCM outputs for climate change impact assessment studies, particularly at regional scales. Statistical correction of GCM output is often necessary when significant systematic biases occur between the modeled output and observations. A common procedure is to correct the GCM output by removing the systematic biases in low-order moments relative to observations or to reanalysis data at daily, monthly, or seasonal timescales. In this paper, we present an extension of a recently published nested bias correction (NBC) technique to correct for the low- as well as higher-order moment biases in the GCM-derived variables across selected multiple timescales. The proposed recursive nested bias correction (RNBC) approach offers an improved basis for applying bias correction at multiple timescales over the original NBC procedure. The method ensures that the bias-corrected series exhibits improvements that are consistently spread over all of the timescales considered. Different variations of the approach, from the standard NBC to the more complex recursive alternatives, are tested to assess their impacts on a range of GCM-simulated atmospheric variables of interest in downscaling applications related to hydrology and water resources. Results of the study suggest that three- to five-iteration RNBCs are the most effective in removing distributional and persistence-related biases across the timescales considered.
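
    The nesting idea, correcting low-order moments at one timescale and then re-correcting the aggregated series at a coarser one, can be sketched as below; this is a simplified stand-in for the NBC/RNBC procedures, not the authors' exact algorithm.

    ```python
    # Sketch: nested mean/std bias correction at daily, then monthly,
    # timescales. Simplified stand-in for NBC/RNBC.
    import numpy as np

    def moment_correct(gcm, obs):
        """Rescale gcm so its mean and std match obs."""
        return (gcm - gcm.mean()) / gcm.std() * obs.std() + obs.mean()

    def nested_correct(gcm_daily, obs_daily, days_per_month=30):
        daily = moment_correct(gcm_daily, obs_daily)   # daily-scale step
        gm = daily.reshape(-1, days_per_month).mean(1) # aggregate to months
        om = obs_daily.reshape(-1, days_per_month).mean(1)
        gm_corr = moment_correct(gm, om)               # monthly-scale step
        # Push the monthly adjustment back down to the daily series.
        return daily * np.repeat(gm_corr / gm, days_per_month)

    rng = np.random.default_rng(2)
    obs = rng.gamma(2.0, 2.0, 360)
    gcm = rng.gamma(1.5, 3.5, 360)
    print(nested_correct(gcm, obs).mean(), obs.mean())
    ```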

  7. A Sensitivity Analysis Method to Study the Behavior of Complex Process-based Models

    NASA Astrophysics Data System (ADS)

    Brugnach, M.; Neilson, R.; Bolte, J.

    2001-12-01

    The use of process-based models as a tool for scientific inquiry is becoming increasingly relevant in ecosystem studies. Process-based models are artificial constructs that simulate the system by mechanistically mimicking the functioning of its component processes. Structurally, a process-based model can be characterized in terms of its processes and the relationships established among them. Each process comprises a set of functional relationships among several model components (e.g., state variables, parameters and input data). While not encoded explicitly, the dynamics of the model emerge from this set of components and interactions organized in terms of processes. It is the task of the modeler to guarantee that the dynamics generated are appropriate and semantically equivalent to the phenomena being modeled. Despite the availability of techniques to characterize and understand model behavior, they do not suffice to completely and easily understand how a complex process-based model operates. For example, sensitivity analysis studies model behavior by determining the rate of change in model output as parameters or input data are varied. One of the problems with this approach is that it considers the model as a "black box" and focuses on explaining model behavior by analyzing the input-output relationship. Since these models have a high degree of nonlinearity, understanding how an input affects an output can be an extremely difficult task. Operationally, the application of this technique may constitute a challenging task because complex process-based models are generally characterized by a large parameter space. In order to overcome some of these difficulties, we propose a method of sensitivity analysis applicable to complex process-based models. This method focuses sensitivity analysis at the process level, and it aims to determine how sensitive the model output is to variations in the processes. Once the processes that exert the major influence on the output are identified, the causes of their variability can be found. Some of the advantages of this approach are that it reduces the dimensionality of the search space, it facilitates the interpretation of the results, and it provides information that allows exploration of uncertainty at the process level and how it might affect model output. We present an example using the vegetation model BIOME-BGC.

  8. Model-free adaptive control of supercritical circulating fluidized-bed boilers

    DOEpatents

    Cheng, George Shu-Xing; Mulkey, Steven L

    2014-12-16

    A novel 3-Input-3-Output (3×3) Fuel-Air Ratio Model-Free Adaptive (MFA) controller is introduced, which can effectively control key process variables including Bed Temperature, Excess O2, and Furnace Negative Pressure of combustion processes of advanced boilers. A novel 7-Input-7-Output (7×7) MFA control system is also described for controlling a combined 3-Input-3-Output (3×3) process of Boiler-Turbine-Generator (BTG) units and a 5×5 CFB combustion process of advanced boilers. Those boilers include Circulating Fluidized-Bed (CFB) Boilers and Once-Through Supercritical Circulating Fluidized-Bed (OTSC CFB) Boilers.

  9. QFT Multi-Input, Multi-Output Design with Non-Diagonal, Non-Square Compensation Matrices

    NASA Technical Reports Server (NTRS)

    Hess, R. A.; Henderson, D. K.

    1996-01-01

    A technique for obtaining a non-diagonal compensator for the control of a multi-input, multi-output plant is presented. The technique, which uses Quantitative Feedback Theory, provides guaranteed stability and performance robustness in the presence of parametric uncertainty. An example is given involving the lateral-directional control of an uncertain model of a high-performance fighter aircraft in which redundant control effectors are in evidence, i.e. more control effectors than output variables are used.

  10. Power Quality Control and Design of Power Converter for Variable-Speed Wind Energy Conversion System with Permanent-Magnet Synchronous Generator

    PubMed Central

    Oğuz, Yüksel; Güney, İrfan; Çalık, Hüseyin

    2013-01-01

    The control strategy and design of an AC/DC/AC IGBT-PWM power converter for PMSG-based variable-speed wind energy conversion systems (VSWECS) operating in grid/load-connected mode are presented. VSWECS consists of a PMSG connected to an AC/DC IGBT-based PWM rectifier and a DC/AC IGBT-based PWM inverter with an LCL filter. In VSWECS, the AC/DC/AC power converter is employed to convert the variable-frequency, variable-speed generator output to the fixed-frequency, fixed-voltage grid. The DC/AC power conversion has been carried out using an adaptive neuro-fuzzy controlled inverter located at the output of the controlled AC/DC IGBT-based PWM rectifier. This study focuses on the dynamic performance and power quality of the proposed power converter connected to the grid/load by the output LCL filter. Dynamic modeling and control of the VSWECS with the proposed power converter is performed using MATLAB/Simulink. Simulation results show that the output voltage, power, and frequency of the VSWECS reach desirable operating values in a very short time. In addition, when the PMSG-based VSWECS operates continuously at a 4.5 kHz switching frequency, the voltage THD at the load terminal is 0.00672%. PMID:24453905

  12. A comparison of two multi-variable integrator windup protection schemes

    NASA Technical Reports Server (NTRS)

    Mattern, Duane

    1993-01-01

    Two methods are examined for limit and integrator wind-up protection (IWP) for multi-input, multi-output linear controllers subject to actuator constraints. The methods begin with an existing linear controller that satisfies the specifications for the nominal, small-perturbation, linear model of the plant. The controllers are formulated to include an additional contribution to the state derivative calculations. The first method examined is the multi-variable version of the single-input, single-output, high-gain Conventional Anti-Windup (CAW) scheme. Except for the actuator limits, the CAW scheme is linear. The second scheme, denoted the Modified Anti-Windup (MAW) scheme, uses a scalar to modify the magnitude of the controller output vector while maintaining the vector direction. The calculation of the scalar modifier is a nonlinear function of the controller outputs and the actuator limits. In both cases the constrained actuator is tracked. These two integrator windup protection methods are demonstrated on a turbofan engine control system with five measurements, four control variables, and four actuators. The closed-loop responses of the two schemes are compared and contrasted during limit operation. The issue of maintaining the direction of the controller output vector using the Modified Anti-Windup scheme is discussed, and the advantages and disadvantages of both IWP methods are presented.
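
    The direction-preserving idea of the MAW scheme can be sketched in a few lines: scale the whole command vector by one scalar so that the most-saturated actuator just reaches its limit. This is a generic illustration, not the paper's exact formulation.

    ```python
    # Sketch: direction-preserving command limiting (MAW-style).
    # One scalar scales the whole output vector so no actuator limit
    # is exceeded and the vector direction is preserved.
    import numpy as np

    def scale_to_limits(u, u_min, u_max):
        k = 1.0
        for ui, lo, hi in zip(u, u_min, u_max):
            if ui > hi:
                k = min(k, hi / ui)
            elif ui < lo:
                k = min(k, lo / ui)
        return k * u

    u = np.array([3.0, -8.0, 1.0])        # raw controller commands
    u_min = np.array([-5.0, -5.0, -5.0])  # actuator lower limits
    u_max = np.array([5.0, 5.0, 5.0])     # actuator upper limits
    print(scale_to_limits(u, u_min, u_max))  # [1.875 -5.    0.625]
    ```

    By contrast, clipping each channel independently (a CAW-style limit) would give [3.0, -5.0, 1.0] and change the direction of the command vector.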

  13. Uncertainty, Sensitivity Analysis, and Causal Identification in the Arctic using a Perturbed Parameter Ensemble of the HiLAT Climate Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hunke, Elizabeth Clare; Urrego Blanco, Jorge Rolando; Urban, Nathan Mark

    Coupled climate models have a large number of input parameters that can affect output uncertainty. We conducted a sensitivity analysis of sea ice properties and Arctic-related climate variables to 5 parameters in the HiLAT climate model: air-ocean turbulent exchange parameter (C), conversion of water vapor to clouds (cldfrc_rhminl) and of ice crystals to snow (micro_mg_dcs), snow thermal conductivity (ksno), and maximum snow grain size (rsnw_mlt). We used an elementary effect (EE) approach to rank their importance for output uncertainty. EE is an extension of one-at-a-time sensitivity analyses, but it is more efficient in sampling multi-dimensional parameter spaces. We looked for emerging relationships among climate variables across the model ensemble, and used causal discovery algorithms to establish potential pathways for those relationships.

  14. Mars Global Reference Atmospheric Model (Mars-GRAM 3.34): Programmer's Guide

    NASA Technical Reports Server (NTRS)

    Justus, C. G.; James, Bonnie F.; Johnson, Dale L.

    1996-01-01

    This is a programmer's guide for the Mars Global Reference Atmospheric Model (Mars-GRAM 3.34). Included are a brief history and review of the model since its origin in 1988 and a technical discussion of recent additions and modifications. Examples of how to run both the interactive and batch (subroutine) forms are presented. Instructions are provided on how to customize output of the model for various parameters of the Mars atmosphere. Detailed descriptions are given of the main driver programs, subroutines, and associated computational methods. Lists and descriptions include input, output, and local variables in the programs. These descriptions give a summary of program steps and 'map' of calling relationships among the subroutines. Definitions are provided for the variables passed between subroutines through common lists. Explanations are provided for all diagnostic and progress messages generated during execution of the program. A brief outline of future plans for Mars-GRAM is also presented.

  15. Assessing the importance of rainfall uncertainty on hydrological models with different spatial and temporal scale

    NASA Astrophysics Data System (ADS)

    Nossent, Jiri; Pereira, Fernando; Bauwens, Willy

    2015-04-01

    Precipitation is one of the key inputs for hydrological models. As long as the values of the hydrological model parameters are fixed, a variation of the rainfall input is expected to induce a change in the model output. Given the increased awareness of uncertainty in rainfall records, it becomes more important to understand the impact of this input-output dynamic. Yet modellers often still intend to mimic the observed flow, whatever the deviation of the employed records from the actual rainfall might be, by recklessly adapting the model parameter values. But is it actually possible to vary the model parameter values in such a way that a certain (observed) model output can be generated from inaccurate rainfall inputs? In other words, how important is the rainfall uncertainty for the model output relative to the model parameters? To address this question, we apply the Sobol' sensitivity analysis method to assess and compare the importance of the rainfall uncertainty and the model parameters for the output of the hydrological model. In order to treat the regular model parameters and the input uncertainty in the same way, and to allow a comparison of their influence, a possible approach is to represent the rainfall uncertainty by a parameter. To this end, we apply so-called rainfall multipliers to hydrologically independent storm events, as a probabilistic parameter representation of the possible rainfall variation. As available rainfall records are very often point measurements at a discrete time step (hourly, daily, monthly, ...), they contain uncertainty due to a latent lack of spatial and temporal variability. The influence of the latter variability can also differ between hydrological models of different spatial and temporal scale. Therefore, we perform the sensitivity analyses on a semi-distributed model (SWAT) and a lumped model (NAM). The assessment and comparison of the importance of the rainfall uncertainty and the model parameters is achieved by considering different scenarios for the included parameters and the state of the models.

  16. Development of Probabilistic Flood Inundation Mapping For Flooding Induced by Dam Failure

    NASA Astrophysics Data System (ADS)

    Tsai, C.; Yeh, J. J. J.

    2017-12-01

    A primary function of flood inundation mapping is to forecast flood hazards and assess potential losses. However, uncertainties limit the reliability of inundation hazard assessments. Major sources of uncertainty should be taken into consideration by an optimal flood management strategy. This study focuses on the 20 km reach downstream of the Shihmen Reservoir in Taiwan. A dam-failure-induced flood provides the upstream boundary conditions for flood routing. The two major sources of uncertainty considered in the hydraulic model and the flood inundation mapping are uncertainty in the dam break model and uncertainty in the roughness coefficient. The perturbance moment method is applied to a dam break model and the hydro system model to develop probabilistic flood inundation mapping. Various numbers of uncertain variables can be considered in these models, and the variability of outputs can be quantified. Probabilistic flood inundation mapping for dam-break-induced floods can be developed, with consideration of output variability, using the commonly used HEC-RAS model. Different probabilistic flood inundation mappings are discussed and compared. Probabilistic flood inundation mappings are expected to provide new physical insights in support of the evaluation of reservoir flooded areas of concern.

  17. Spatial Statistical and Modeling Strategy for Inventorying and Monitoring Ecosystem Resources at Multiple Scales and Resolution Levels

    Treesearch

    Robin M. Reich; C. Aguirre-Bravo; M.S. Williams

    2006-01-01

    A statistical strategy for spatial estimation and modeling of natural and environmental resource variables and indicators is presented. This strategy is part of an inventory and monitoring pilot study that is being carried out in the Mexican states of Jalisco and Colima. Fine spatial resolution estimates of key variables and indicators are outputs that will allow the...

  18. Data Reduction Functions for the Langley 14- by 22-Foot Subsonic Tunnel

    NASA Technical Reports Server (NTRS)

    Boney, Andy D.

    2014-01-01

    The Langley 14- by 22-Foot Subsonic Tunnel's data reduction software utilizes six major functions to compute the acquired data. These functions calculate engineering units, tunnel parameters, flowmeters, jet exhaust measurements, balance loads/model attitudes, and model/wall pressures. The input (required) variables, the output (computed) variables, and the equations and/or subfunction(s) associated with each major function are discussed.

  19. Study of a control strategy for grid side converter in doubly-fed wind power system

    NASA Astrophysics Data System (ADS)

    Zhu, D. J.; Tan, Z. L.; Yuan, F.; Wang, Q. Y.; Ding, M.

    2016-08-01

    The grid side converter is an important part of the excitation system of the doubly-fed asynchronous generator used in wind power systems. As a three-phase voltage source PWM converter, it can not only transfer slip power in the form of active power, but also adjust the reactive power of the grid. This paper proposes a control approach for improving its performance. In this approach, the dc voltage is regulated by a sliding mode variable structure control scheme, and the current by a variable structure controller based on input-output linearization. The theoretical basis of sliding mode variable structure control is introduced, and the stability proof is presented. The switching function of the system is deduced and a sliding mode voltage controller model is established, with the output of the outer voltage loop serving as the command for the inner current loop. An affine nonlinear two-input, two-output model of the d-q axis currents is established, and it is proved to meet the conditions for exact linearization. In order to improve the anti-jamming capability of the system, variable structure control is added to the current controller and the corresponding control law is deduced. A dual-loop control, with sliding mode control in the outer voltage loop and linearization-based variable structure control in the inner current loop, is proposed. Simulation results demonstrate the effectiveness of the proposed control strategy even during dc reference voltage and system load variation.
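
    The outer voltage loop idea can be reduced to a toy sliding-mode sketch: a switching control law drives the error onto the sliding surface s = e = v_ref - v and holds it there. This is a generic scalar illustration under assumed capacitance and load values, not the paper's converter model or control law.

    ```python
    # Sketch: toy sliding-mode regulation of a scalar dc-link voltage.
    # Generic illustration; not the paper's converter or control law.
    import numpy as np

    v_ref = 600.0          # dc-link reference voltage [V]
    v = 540.0              # initial dc-link voltage [V]
    C, dt = 2.2e-3, 1e-4   # capacitance [F], time step [s]
    k_sw = 20.0            # switching gain; must exceed max load current

    for _ in range(3000):
        s = v_ref - v                   # sliding surface s = e
        i_cmd = k_sw * np.sign(s)       # switching control law
        i_load = v / 40.0               # resistive load current [A]
        v += (i_cmd - i_load) * dt / C  # capacitor dynamics
    print(round(v, 1))                  # chatters close to v_ref
    ```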

  20. Logic models to predict continuous outputs based on binary inputs with an application to personalized cancer therapy.

    PubMed

    Knijnenburg, Theo A; Klau, Gunnar W; Iorio, Francesco; Garnett, Mathew J; McDermott, Ultan; Shmulevich, Ilya; Wessels, Lodewyk F A

    2016-11-23

    Mining large datasets using machine learning approaches often leads to models that are hard to interpret and not amenable to the generation of hypotheses that can be experimentally tested. We present 'Logic Optimization for Binary Input to Continuous Output' (LOBICO), a computational approach that infers small and easily interpretable logic models of binary input features that explain a continuous output variable. Applying LOBICO to a large cancer cell line panel, we find that logic combinations of multiple mutations are more predictive of drug response than single gene predictors. Importantly, we show that the use of the continuous information leads to robust and more accurate logic models. LOBICO implements the ability to uncover logic models around predefined operating points in terms of sensitivity and specificity. As such, it represents an important step towards practical application of interpretable logic models.

  1. Application of Monte Carlo Methods to Perform Uncertainty and Sensitivity Analysis on Inverse Water-Rock Reactions with NETPATH

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McGraw, David; Hershey, Ronald L.

    Methods were developed to quantify uncertainty and sensitivity for NETPATH inverse water-rock reaction models and to calculate dissolved inorganic carbon, carbon-14 groundwater travel times. The NETPATH models calculate upgradient groundwater mixing fractions that produce the downgradient target water chemistry, along with amounts of mineral phases that are either precipitated or dissolved. Carbon-14 groundwater travel times are calculated based on the upgradient source-water fractions, carbonate mineral phase changes, and isotopic fractionation. Custom scripts and statistical code were developed for this study to facilitate modifying input parameters, running the NETPATH simulations, extracting relevant output, postprocessing the results, and producing graphs and summaries. The scripts read user-specified values for each constituent's coefficient of variation, distribution, sensitivity parameter, maximum dissolution or precipitation amounts, and number of Monte Carlo simulations. Monte Carlo methods for analysis of parametric uncertainty assign a distribution to each uncertain variable, sample from those distributions, and evaluate the ensemble output. The uncertainty in input affected the variability of outputs, namely source-water mixing, phase dissolution and precipitation amounts, and carbon-14 travel time. Although NETPATH may provide models that satisfy the constraints, it is up to the geochemist to determine whether the results are geochemically reasonable. Two example water-rock reaction models from previous geochemical reports were considered in this study. Sensitivity analysis was also conducted to evaluate the change in output caused by a small change in input, one constituent at a time. Results were standardized to allow for sensitivity comparisons across all inputs, which results in a representative value for each scenario. The approach yielded insight into the uncertainty in water-rock reactions and travel times. For example, there was little variation in source-water fraction between the deterministic and Monte Carlo approaches, and therefore little variation in travel times between approaches. Sensitivity analysis proved very useful for identifying the most important input constraints (dissolved-ion concentrations), which can reveal the variables that have the most influence on source-water fractions and carbon-14 travel times. Once these variables are determined, more focused effort can be applied to determining the proper distribution for each constraint. In addition, Monte Carlo results for water-rock reaction modeling showed discrete and nonunique results: the NETPATH models provide the solutions that satisfy the constraints of upgradient and downgradient water chemistry, multiple discrete solutions can exist for any scenario, and these discrete solutions cause grouping of results. As a result, the variability in output may not easily be represented by a single distribution or a mean and variance, and care should be taken in the interpretation and reporting of results.
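
    The Monte Carlo pattern described, assigning a distribution to each uncertain input, sampling, running the model, and summarizing the ensemble, can be sketched generically as below; run_model is a hypothetical stand-in for one NETPATH simulation, not its actual chemistry.

    ```python
    # Sketch: Monte Carlo propagation of parametric uncertainty.
    # run_model is a hypothetical stand-in for a NETPATH simulation.
    import numpy as np

    rng = np.random.default_rng(42)

    def run_model(ca, alk, c14):
        # Toy travel-time surrogate; NETPATH's chemistry is far richer.
        return (5730.0 / np.log(2.0) * np.log(100.0 / c14)
                + 0.1 * ca - 0.05 * alk)

    n = 10_000
    # Each uncertain constituent gets a distribution (mean, CV).
    ca = rng.normal(40.0, 40.0 * 0.10, n)     # dissolved Ca, 10% CV
    alk = rng.normal(150.0, 150.0 * 0.05, n)  # alkalinity, 5% CV
    c14 = rng.normal(60.0, 60.0 * 0.08, n)    # carbon-14 (pmc), 8% CV

    tt = run_model(ca, alk, c14)
    print(np.percentile(tt, [5, 50, 95]))     # travel-time band [yr]
    ```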

  2. Unified Deep Learning Architecture for Modeling Biology Sequence.

    PubMed

    Wu, Hongjie; Cao, Chengyuan; Xia, Xiaoyan; Lu, Qiang

    2017-10-09

    Prediction of the spatial structure or function of biological macromolecules based on their sequence remains an important challenge in bioinformatics. When modeling biological sequences using traditional sequence models, characteristics such as long-range interactions between basic units, the complicated and variable output of labeled structures, and the variable length of biological sequences usually lead to different solutions on a case-by-case basis. This study proposed the use of bidirectional recurrent neural networks based on long short-term memory or a gated recurrent unit to capture long-range interactions, designing an optional reshape operator to adapt to the diversity of the output labels and implementing a training algorithm to support the training of sequence models capable of processing variable-length sequences. Additionally, the merge and pooling operators enhanced the ability to capture short-range interactions between basic units of biological sequences. The proposed deep-learning model and its training algorithm might be capable of solving currently known biological sequence-modeling problems through the use of a unified framework. We validated our model on one of the most difficult biological sequence-modeling problems currently known, with our results indicating the ability of the model to obtain predictions of protein residue interactions that exceeded the accuracy of current popular approaches by 10% based on multiple benchmarks.

  3. COMPUTATIONAL METHODS FOR SENSITIVITY AND UNCERTAINTY ANALYSIS FOR ENVIRONMENTAL AND BIOLOGICAL MODELS

    EPA Science Inventory

    This work introduces a computationally efficient alternative method for uncertainty propagation, the Stochastic Response Surface Method (SRSM). The SRSM approximates uncertainties in model outputs through a series expansion in normal random variables (polynomial chaos expansion)...

  4. Filtering and Gridding Satellite Observations of Cloud Variables to Compare with Climate Model Output

    NASA Astrophysics Data System (ADS)

    Pitts, K.; Nasiri, S. L.; Smith, N.

    2013-12-01

    Global climate models have improved considerably over the years, yet clouds still represent a large factor of uncertainty for these models. Comparisons of model-simulated cloud variables with equivalent satellite cloud products are the best way to start diagnosing the differences between model output and observations. Gridded (level 3) cloud products from many different satellites and instruments are required for a full analysis, but these products are created by different science teams using different algorithms and filtering criteria to create similar, but not directly comparable, cloud products. This study makes use of a recently developed uniform space-time gridding algorithm to create a new set of gridded cloud products from each satellite instrument's level 2 data of interest which are each filtered using the same criteria, allowing for a more direct comparison between satellite products. The filtering is done via several variables such as cloud top pressure/height, thermodynamic phase, optical properties, satellite viewing angle, and sun zenith angle. The filtering criteria are determined based on the variable being analyzed and the science question at hand. Each comparison of different variables may require different filtering strategies as no single approach is appropriate for all problems. Beyond inter-satellite data comparison, these new sets of uniformly gridded satellite products can also be used for comparison with model-simulated cloud variables. Of particular interest to this study are the differences in the vertical distributions of ice and liquid water content between the satellite retrievals and model simulations, especially in the mid-troposphere where there are mixed-phase clouds to consider. This presentation will demonstrate the proof of concept through comparisons of cloud water path from Aqua MODIS retrievals and NASA GISS-E2-[R/H] model simulations archived in the CMIP5 data portal.

  5. Computing the structural influence matrix for biological systems.

    PubMed

    Giordano, Giulia; Cuba Samaniego, Christian; Franco, Elisa; Blanchini, Franco

    2016-06-01

    We consider the problem of identifying structural influences of external inputs on steady-state outputs in a biological network model. We speak of a structural influence if, upon a perturbation due to a constant input, the ensuing variation of the steady-state output value has the same sign as the input (positive influence), the opposite sign (negative influence), or is zero (perfect adaptation), for any feasible choice of the model parameters. All these signs and zeros can constitute a structural influence matrix, whose (i, j) entry indicates the sign of steady-state influence of the jth system variable on the ith variable (the output caused by an external persistent input applied to the jth variable). Each entry is structurally determinate if the sign does not depend on the choice of the parameters, but is indeterminate otherwise. In principle, determining the influence matrix requires exhaustive testing of the system steady-state behaviour in the widest range of parameter values. Here we show that, in a broad class of biological networks, the influence matrix can be evaluated with an algorithm that tests the system steady-state behaviour only at a finite number of points. This algorithm also allows us to assess the structural effect of any perturbation, such as variations of relevant parameters. Our method is applied to nontrivial models of biochemical reaction networks and population dynamics drawn from the literature, providing a parameter-free insight into the system dynamics.
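
    The naive, sampling-based version of this test (the exhaustive approach the paper's algorithm improves upon) is easy to sketch for a small linearized network: the steady-state response to a persistent input at node j is x* = -A^{-1} e_j u, so the sign pattern of -A^{-1} gives the influence matrix, and re-sampling the uncertain parameters probes whether each sign is structural. The 2-node system below is invented for illustration.

    ```python
    # Sketch: probing structural influence signs by parameter sampling
    # for a toy 2-node network dx/dt = A x + e_j u. The steady state is
    # x* = -inv(A) e_j u, so sign(-inv(A))[i, j] is the steady-state
    # influence of a persistent input at node j on node i.
    import numpy as np

    rng = np.random.default_rng(4)

    def influence_signs(a12, a21):
        A = np.array([[-1.0, a12],
                      [a21, -2.0]])        # stable for the sampled ranges
        return np.sign(-np.linalg.inv(A))

    stack = np.array([influence_signs(rng.uniform(0.1, 0.9),
                                      rng.uniform(0.1, 0.9))
                      for _ in range(1000)])
    # An entry is structurally determinate if its sign never changes.
    determinate = (stack == stack[0]).all(axis=0)
    print(stack[0])
    print(determinate)
    ```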

  6. Multi input single output model predictive control of non-linear bio-polymerization process

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Arumugasamy, Senthil Kumar; Ahmad, Z.

    This paper focuses on Multi Input Single Output (MISO) Model Predictive Control of bio-polymerization process in which mechanistic model is developed and linked with the feedforward neural network model to obtain a hybrid model (Mechanistic-FANN) of lipase-catalyzed ring-opening polymerization of ε-caprolactone (ε-CL) for Poly (ε-caprolactone) production. In this research, state space model was used, in which the input to the model were the reactor temperatures and reactor impeller speeds and the output were the molecular weight of polymer (M{sub n}) and polymer polydispersity index. State space model for MISO created using System identification tool box of Matlab™. This state spacemore » model is used in MISO MPC. Model predictive control (MPC) has been applied to predict the molecular weight of the biopolymer and consequently control the molecular weight of biopolymer. The result shows that MPC is able to track reference trajectory and give optimum movement of manipulated variable.« less

  7. Smart Frameworks and Self-Describing Models: Model Metadata for Automated Coupling of Hydrologic Process Components (Invited)

    NASA Astrophysics Data System (ADS)

    Peckham, S. D.

    2013-12-01

    Model coupling frameworks like CSDMS (Community Surface Dynamics Modeling System) and ESMF (Earth System Modeling Framework) have developed mechanisms that allow heterogeneous sets of process models to be assembled in a plug-and-play manner to create composite "system models". These mechanisms facilitate code reuse, but must simultaneously satisfy many different design criteria. They must be able to mediate or compensate for differences between the process models, such as their different programming languages, computational grids, time-stepping schemes, variable names and variable units. However, they must achieve this interoperability in a way that: (1) is noninvasive, requiring only relatively small and isolated changes to the original source code, (2) does not significantly reduce performance, (3) is not time-consuming or confusing for a model developer to implement, (4) can very easily be updated to accommodate new versions of a given process model and (5) does not shift the burden of providing model interoperability to the model developers, e.g. by requiring them to provide their output in specific forms that meet the input requirements of other models. In tackling these design challenges, model framework developers have learned that the best solution is to provide each model with a simple, standardized interface, i.e. a set of standardized functions that make the model: (1) fully-controllable by a caller (e.g. a model framework) and (2) self-describing. Model control functions are separate functions that allow a caller to initialize the model, advance the model's state variables in time and finalize the model. Model description functions allow a caller to retrieve detailed information on the model's input and output variables, its computational grid and its timestepping scheme. If the caller is a modeling framework, it can compare the answers to these queries with similar answers from other process models in a collection and then automatically call framework service components as necessary to mediate the differences between the coupled models. This talk will first review two key products of the CSDMS project, namely a standardized model interface called the Basic Model Interface (BMI) and the CSDMS Standard Names. The standard names are used in conjunction with BMI to provide a semantic matching mechanism that allows output variables from one process model to be reliably used as input variables to other process models in a collection. They include not just a standardized naming scheme for model variables, but also a standardized set of terms for describing the attributes and assumptions of a given model. To illustrate the power of standardized model interfaces and metadata, a smart, light-weight modeling framework written in Python will be introduced that can automatically (without user intervention) couple a set of BMI-enabled hydrologic process components together to create a spatial hydrologic model. The same mechanisms could also be used to provide seamless integration (import/export) of data and models.
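
    A minimal Python sketch of a BMI-style interface is given below. The method names follow the published BMI pattern (initialize/update/finalize plus introspection calls), but the model behind them is a trivial placeholder.

    ```python
    # Sketch: a toy model behind a BMI-style interface. Method names
    # follow the BMI pattern; the "physics" is a placeholder.
    import numpy as np

    class LinearReservoirBMI:
        """Toy single-store hydrologic model with a BMI-like facade."""

        def initialize(self, config=None):
            self._k = 0.1          # recession coefficient [1/day]
            self._storage = 100.0  # water storage [mm]
            self._dt = 1.0         # time step [days]

        def update(self):
            self._storage -= self._k * self._storage * self._dt

        def finalize(self):
            pass

        # --- self-description, used by a framework for coupling ---
        def get_component_name(self):
            return "linear_reservoir"

        def get_output_var_names(self):
            # A CSDMS-Standard-Names-style identifier (illustrative).
            return ["water_storage__depth"]

        def get_var_units(self, name):
            return {"water_storage__depth": "mm"}[name]

        def get_value(self, name):
            return np.array([self._storage])

    # A framework-style caller needs no knowledge of model internals:
    m = LinearReservoirBMI()
    m.initialize()
    for _ in range(10):
        m.update()
    print(m.get_value("water_storage__depth"),
          m.get_var_units("water_storage__depth"))
    ```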

  8. Applications of information theory, genetic algorithms, and neural models to predict oil flow

    NASA Astrophysics Data System (ADS)

    Ludwig, Oswaldo; Nunes, Urbano; Araújo, Rui; Schnitman, Leizer; Lepikson, Herman Augusto

    2009-07-01

    This work introduces a new information-theoretic methodology for choosing variables and their time lags in a prediction setting, particularly when neural networks are used in non-linear modeling. The first contribution of this work is the Cross Entropy Function (XEF), proposed to select input variables and their lags in order to compose the input vector of black-box prediction models. The proposed XEF method is more appropriate than the usually applied Cross Correlation Function (XCF) when the relationship among the input and output signals comes from a non-linear dynamic system. The second contribution is a method that minimizes the Joint Conditional Entropy (JCE) between the input and output variables by means of a Genetic Algorithm (GA). The aim is to take into account the dependence among the input variables when selecting the most appropriate set of inputs for a prediction problem. In short, these methods can be used to assist the selection of input training data that have the necessary information to predict the target data. The proposed methods are applied to a petroleum engineering problem: predicting oil production. Experimental results obtained with a real-world dataset are presented, demonstrating the feasibility and effectiveness of the method.
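
    Information-theoretic lag selection of this flavor can be sketched with scikit-learn's mutual-information estimator. This illustrates the general idea of ranking candidate lagged inputs by their shared information with the target, not the authors' exact XEF/JCE formulation.

    ```python
    # Sketch: rank candidate lagged inputs by mutual information with
    # the target. General idea only; not the exact XEF/JCE method.
    import numpy as np
    from sklearn.feature_selection import mutual_info_regression

    rng = np.random.default_rng(3)
    n, max_lag = 2000, 5
    u = rng.normal(size=n + max_lag)  # candidate input signal
    y = 0.8 * u[max_lag - 2 : -2] + 0.1 * rng.normal(size=n)  # lag-2 target

    # Columns of X are u delayed by 1..max_lag steps.
    X = np.column_stack([u[max_lag - k : -k] for k in range(1, max_lag + 1)])

    mi = mutual_info_regression(X, y, random_state=0)
    for k, score in enumerate(mi, start=1):
        print(f"lag {k}: MI = {score:.3f}")   # lag 2 should dominate
    ```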

  9. Optimal cycling time trial position models: aerodynamics versus power output and metabolic energy.

    PubMed

    Fintelman, D M; Sterling, M; Hemida, H; Li, F-X

    2014-06-03

    The aerodynamic drag of a cyclist in time trial (TT) position is strongly influenced by the torso angle. While decreasing the torso angle reduces the drag, it limits the physiological functioning of the cyclist. Therefore, the aims of this study were to predict the optimal TT cycling position as a function of the cycling speed and to determine at which speed the aerodynamic power losses start to dominate. Two models were developed to determine the optimal torso angle: a 'Metabolic Energy Model' and a 'Power Output Model'. The Metabolic Energy Model minimised the required cycling energy expenditure, while the Power Output Model maximised the cyclists' power output. The input parameters were experimentally collected from 19 TT cyclists at different torso angle positions (0-24°). The results showed that for both models, the optimal torso angle depends strongly on the cycling speed, with decreasing torso angles at increasing speeds. The aerodynamic losses outweigh the power losses at cycling speeds above 46 km/h. However, a fully horizontal torso is not optimal. For speeds below 30 km/h, it is beneficial to ride in a more upright TT position. The two model outputs were not completely similar, due to the different model approaches. The Metabolic Energy Model could be applied for endurance events, while the Power Output Model is more suitable in sprinting or in variable conditions (wind, undulating course, etc.). It is suggested that despite some limitations, the models give valuable information about improving the cycling performance by optimising the TT cycling position. Copyright © 2014 Elsevier Ltd. All rights reserved.
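
    A back-of-envelope sketch of the trade-off these models formalize: aerodynamic power grows with the cube of speed, while a lower torso shrinks drag area but also limits the rider's available power. All coefficients below (drag area and available power as functions of torso angle, rolling resistance, mass) are invented for illustration, not taken from the study.

    ```python
    # Illustrative power-balance sketch of the torso-angle trade-off.
    # All coefficients are assumed for illustration, not the paper's values.
    import numpy as np

    rho, m, g, crr = 1.2, 80.0, 9.81, 0.004   # air density, rider+bike mass, rolling coeff.
    angles = np.arange(0, 25)                 # torso angle (deg), 0 = horizontal

    def cda(theta):
        return 0.22 + 0.0035 * theta          # drag area grows as the torso rises (assumed)

    def p_available(theta):
        return 250.0 + 70.0 * np.sqrt(theta / 24.0)  # crouching limits power (assumed)

    for v_kmh in (30, 40, 50):
        v = v_kmh / 3.6
        required = 0.5 * rho * cda(angles) * v**3 + crr * m * g * v
        surplus = p_available(angles) - required     # power margin at each angle
        print(v_kmh, "km/h -> optimal torso angle:", angles[np.argmax(surplus)], "deg")
    ```

    With these assumed curves, the printed optimum moves from a fully upright position at low speed toward a near-horizontal torso as speed rises, mirroring the qualitative finding of the study.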

  10. Using Weather Data and Climate Model Output in Economic Analyses of Climate Change

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Auffhammer, M.; Hsiang, S. M.; Schlenker, W.

    2013-06-28

    Economists are increasingly using weather data and climate model output in analyses of the economic impacts of climate change. This article introduces a set of weather data sets and climate models that are frequently used, discusses the most common mistakes economists make in using these products, and identifies ways to avoid these pitfalls. We first provide an introduction to weather data, including a summary of the types of datasets available, and then discuss five common pitfalls that empirical researchers should be aware of when using historical weather data as explanatory variables in econometric applications. We then provide a brief overview of climate models and discuss two common and significant errors often made by economists when climate model output is used to simulate the future impacts of climate change on an economic outcome of interest.

  11. Self-organizing linear output map (SOLO): An artificial neural network suitable for hydrologic modeling and analysis

    NASA Astrophysics Data System (ADS)

    Hsu, Kuo-Lin; Gupta, Hoshin V.; Gao, Xiaogang; Sorooshian, Soroosh; Imam, Bisher

    2002-12-01

    Artificial neural networks (ANNs) can be useful in the prediction of hydrologic variables, such as streamflow, particularly when the underlying processes have complex nonlinear interrelationships. However, conventional ANN structures suffer from network training issues that significantly limit their widespread application. This paper presents a multivariate ANN procedure entitled self-organizing linear output map (SOLO), whose structure has been designed for rapid, precise, and inexpensive estimation of network structure/parameters and system outputs. More important, SOLO provides features that facilitate insight into the underlying processes, thereby extending its usefulness beyond forecast applications as a tool for scientific investigations. These characteristics are demonstrated using a classic rainfall-runoff forecasting problem. Various aspects of model performance are evaluated in comparison with other commonly used modeling approaches, including multilayer feedforward ANNs, linear time series modeling, and conceptual rainfall-runoff modeling.

  12. Obs4MIPS: Satellite Observations for Model Evaluation

    NASA Astrophysics Data System (ADS)

    Ferraro, R.; Waliser, D. E.; Gleckler, P. J.

    2017-12-01

    This poster will review the current status of the obs4MIPs project, whose purpose is to provide a limited collection of well-established and documented datasets for comparison with Earth system models (https://www.earthsystemcog.org/projects/obs4mips/). These datasets have been reformatted to correspond with the CMIP5 model output requirements, and include technical documentation specifically targeted for their use in model output evaluation. The project holdings now exceed 120 datasets with observations that directly correspond to CMIP5 model output variables, with new additions in response to the CMIP6 experiments. With the growth in climate model output data volume, it is increasingly difficult to bring the model output and the observations together to do evaluations. The positioning of the obs4MIPs datasets within the Earth System Grid Federation (ESGF) allows for the use of currently available and planned online tools within the ESGF to perform analysis using model output and observational datasets without necessarily downloading everything to a local workstation. This past year, obs4MIPs has updated its submission guidelines to align closely with changes in the CMIP6 experiments, and is implementing additional indicators and ancillary data to allow users to more easily determine the efficacy of an obs4MIPs dataset for specific evaluation purposes. This poster will present the new guidelines and indicators, and update the list of current obs4MIPs holdings and their connection to the ESGF evaluation and analysis tools currently available and being developed for the CMIP6 experiments.

  13. Hybrid robust model based on an improved functional link neural network integrating with partial least square (IFLNN-PLS) and its application to predicting key process variables.

    PubMed

    He, Yan-Lin; Xu, Yuan; Geng, Zhi-Qiang; Zhu, Qun-Xiong

    2016-03-01

    In this paper, a hybrid robust model based on an improved functional link neural network integrating with partial least squares (IFLNN-PLS) is proposed. Firstly, an improved functional link neural network with small norm of expanded weights and high input-output correlation (SNEWHIOC-FLNN) was proposed for enhancing the generalization performance of FLNN. Unlike the traditional FLNN, the expanded variables of the original inputs are not used directly as inputs in the proposed SNEWHIOC-FLNN model; instead, the original inputs are attached to expanded weights of small norm. As a result, the correlation coefficient between some of the expanded variables and the outputs is enhanced. The larger the correlation coefficient is, the more relevant the expanded variables tend to be. In the end, the expanded variables with larger correlation coefficients are selected as the inputs to improve the performance of the traditional FLNN. In order to test the proposed SNEWHIOC-FLNN model, three UCI (University of California, Irvine) regression datasets named Housing, Concrete Compressive Strength (CCS), and Yacht Hydro Dynamics (YHD) were selected. Then a hybrid model based on the improved FLNN integrating with partial least squares (IFLNN-PLS) was built. In the IFLNN-PLS model, the connection weights are calculated using the partial least squares method rather than the error back-propagation algorithm. Lastly, IFLNN-PLS was developed as an intelligent measurement model for accurately predicting the key variables in the Purified Terephthalic Acid (PTA) process and the High Density Polyethylene (HDPE) process. Simulation results illustrated that the IFLNN-PLS could significantly improve the prediction performance. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.
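
    The functional-link idea itself is easy to sketch: expand the inputs with fixed nonlinear functions, keep the expansions most correlated with the output, and fit a linear model on what remains. The expansion set, selection threshold, and data below are illustrative, not the paper's exact SNEWHIOC scheme.

    ```python
    # Sketch of the functional-link idea with correlation-based selection.
    # Expansions, threshold, and data are illustrative.
    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.uniform(-1, 1, (300, 2))
    y = np.sin(np.pi * x[:, 0]) + 0.5 * x[:, 1] ** 2 + 0.05 * rng.normal(size=300)

    # functional expansion: original inputs plus sin/cos/square of each
    expansions = np.column_stack([x, np.sin(np.pi * x), np.cos(np.pi * x), x ** 2])

    # keep only the expanded variables most correlated with the output
    corr = np.array([abs(np.corrcoef(f, y)[0, 1]) for f in expansions.T])
    keep = corr > 0.15
    Z = expansions[:, keep]

    # linear output layer fitted by least squares (no back-propagation)
    w, *_ = np.linalg.lstsq(np.column_stack([np.ones(len(y)), Z]), y, rcond=None)
    print("kept", int(keep.sum()), "of", expansions.shape[1], "expanded inputs")
    ```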

  14. Method and system for detecting a failure or performance degradation in a dynamic system such as a flight vehicle

    NASA Technical Reports Server (NTRS)

    Miller, Robert H. (Inventor); Ribbens, William B. (Inventor)

    2003-01-01

    A method and system for detecting a failure or performance degradation in a dynamic system having sensors for measuring state variables and providing corresponding output signals in response to one or more system input signals are provided. The method includes calculating estimated gains of a filter and selecting an appropriate linear model for processing the output signals based on the input signals. The step of calculating utilizes one or more models of the dynamic system to obtain estimated signals. The method further includes calculating output error residuals based on the output signals and the estimated signals. The method also includes detecting one or more hypothesized failures or performance degradations of a component or subsystem of the dynamic system based on the error residuals. The step of calculating the estimated values is performed optimally with respect to one or more of: noise, uncertainty of parameters of the models, and un-modeled dynamics of the dynamic system, which may be a flight vehicle, a financial market, or a modeled financial system.

  15. User's instructions for the 41-node thermoregulatory model (steady state version)

    NASA Technical Reports Server (NTRS)

    Leonard, J. I.

    1974-01-01

    A user's guide for the steady-state thermoregulatory model is presented. The model was modified to provide conversational interaction on a remote terminal, greater flexibility for parameter estimation, increased efficiency of convergence, a greater choice of output variables, and more realistic equations for respiratory and skin diffusion water losses.

  16. Supervision of dynamic systems: Monitoring, decision-making and control

    NASA Technical Reports Server (NTRS)

    White, T. N.

    1982-01-01

    Effects of task variables on the performance of the human supervisor by means of modelling techniques are discussed. The task variables considered are: the dynamics of the system, the task to be performed, the environmental disturbances, and the observation noise. A relationship between task variables and parameters of a supervisory model is assumed. The model consists of three parts: (1) the observer part is thought to be a full-order optimal observer, (2) the decision-making part is stated as a set of decision rules, and (3) the controller part is given by a control law. The observer part generates, on the basis of the system output and the control actions, an estimate of the state of the system and its associated variance. The outputs of the observer part are then used by the decision-making part to determine the instants in time of the observation actions on the one hand and the control actions on the other. The controller part makes use of the estimated state to derive the amplitude(s) of the control action(s).

  17. Low-warming Scenarios and their Approximation: Testing Emulation Performance for Average and Extreme Variables

    NASA Astrophysics Data System (ADS)

    Tebaldi, C.; Knutti, R.; Armbruster, A.

    2017-12-01

    Taking advantage of the availability of ensemble simulations under low-warming scenarios performed with NCAR-DOE CESM, we test the performance of established methods for climate model output emulation. The goal is to provide a green, yellow, or red light to the large impact-research community that may be interested in performing impact analysis using climate model output other than, or in conjunction with, CESM's, especially as the IPCC Special Report on the 1.5 °C target urgently calls for scientific contributions exploring the costs and benefits of attaining these ambitious goals. We test the performance of emulators of average temperature and precipitation - and their interannual variability - and we also explore the possibility of emulating indices of extremes (ETCCDI indices), devised to offer impact-relevant information from daily output of temperature and precipitation. Different degrees of departure from the linearity assumed in these traditional emulation approaches are found across the various quantities considered and across regions. This highlights different degrees of quality in the approximations, and therefore some challenges in the provision of climate change information for impact analysis under these new scenarios, which few models have thus far targeted through their simulations.

  18. Wind tunnel measurements of the power output variability and unsteady loading in a micro wind farm model

    NASA Astrophysics Data System (ADS)

    Bossuyt, Juliaan; Howland, Michael; Meneveau, Charles; Meyers, Johan

    2015-11-01

    To optimize wind farm layouts for a maximum power output and wind turbine lifetime, mean power output measurements in wind tunnel studies are not sufficient. Instead, detailed temporal information about the power output and unsteady loading from every single wind turbine in the wind farm is needed. A very small porous disc model with a realistic thrust coefficient of 0.75-0.85 was designed. The model is instrumented with a strain gage, allowing measurements of the thrust force, incoming velocity, and power output with a frequency response up to the natural frequency of the model. This is shown by reproducing the -5/3 spectrum from the incoming flow. Thanks to its small size and compact instrumentation, the model allows wind tunnel studies of large wind turbine arrays with detailed temporal information from every wind turbine. Translating to field conditions with a length-scale ratio of 1:3,000, the frequencies studied from the data range from 10⁻⁴ Hz up to about 6 × 10⁻² Hz. The model's capabilities are demonstrated with a large wind farm measurement consisting of close to 100 instrumented models. A high correlation is found between the power outputs of streamwise-aligned wind turbines, which is in good agreement with results from prior LES simulations. Work supported by ERC (ActiveWindFarms, grant no. 306471) and by NSF (grants CBET-113380 and IIA-1243482, the WINDINSPIRE project).

  19. Global sensitivity and uncertainty analysis of an atmospheric chemistry transport model: the FRAME model (version 9.15.0) as a case study

    NASA Astrophysics Data System (ADS)

    Aleksankina, Ksenia; Heal, Mathew R.; Dore, Anthony J.; Van Oijen, Marcel; Reis, Stefan

    2018-04-01

    Atmospheric chemistry transport models (ACTMs) are widely used to underpin policy decisions associated with the impact of potential changes in emissions on future pollutant concentrations and deposition. It is therefore essential to have a quantitative understanding of the uncertainty in model output arising from uncertainties in the input pollutant emissions. ACTMs incorporate complex and non-linear descriptions of chemical and physical processes, which means that interactions and non-linearities in input-output relationships may not be revealed through the local one-at-a-time sensitivity analysis typically used. The aim of this work is to demonstrate a global sensitivity and uncertainty analysis approach for an ACTM, using as an example the FRAME model, which is extensively employed in the UK to generate source-receptor matrices for the UK Integrated Assessment Model and to estimate critical load exceedances. An optimised Latin hypercube sampling design was used to construct model runs within a ±40 % variation range for the UK emissions of SO2, NOx, and NH3, from which regression coefficients for each input-output combination and each model grid cell (> 10,000 across the UK) were calculated. Surface concentrations of SO2, NOx, and NH3 (and deposition of S and N) were found to be predominantly sensitive to the emissions of the respective pollutant, while the sensitivities of secondary species such as HNO3 and particulate SO4^2-, NO3^-, and NH4^+ to pollutant emissions were more complex and geographically variable. The uncertainties in model output variables were propagated from the uncertainty ranges reported by the UK National Atmospheric Emissions Inventory for the emissions of SO2, NOx, and NH3 (±4 %, ±10 %, and ±20 %, respectively). The uncertainties in the surface concentrations of NH3 and NOx and the depositions of NHx and NOy were dominated by the uncertainties in emissions of NH3 and NOx, respectively, whilst concentrations of SO2 and deposition of SOy were affected by the uncertainties in both SO2 and NH3 emissions. Likewise, the relative uncertainties in the modelled surface concentrations of each of the secondary pollutant variables (NH4^+, NO3^-, SO4^2-, and HNO3) were due to uncertainties in at least two input variables. In all cases the spatial distribution of relative uncertainty was found to be geographically heterogeneous. The global methods used here can be applied to conduct sensitivity and uncertainty analyses of other ACTMs.

  20. Matrix completion by deep matrix factorization.

    PubMed

    Fan, Jicong; Cheng, Jieyu

    2018-02-01

    Conventional methods of matrix completion are linear methods that are not effective in handling data of nonlinear structures. Recently a few researchers attempted to incorporate nonlinear techniques into matrix completion, but there still exist considerable limitations. In this paper, a novel method called deep matrix factorization (DMF) is proposed for nonlinear matrix completion. Different from conventional matrix completion methods that are based on linear latent variable models, DMF is based on a nonlinear latent variable model. DMF is formulated as a deep-structure neural network, in which the inputs are the low-dimensional unknown latent variables and the outputs are the partially observed variables. In DMF, the inputs and the parameters of the multilayer neural network are simultaneously optimized to minimize the reconstruction errors for the observed entries. Then the missing entries can be readily recovered by propagating the latent variables to the output layer. DMF is compared with state-of-the-art methods of linear and nonlinear matrix completion in the tasks of toy matrix completion, image inpainting, and collaborative filtering. The experimental results verify that DMF is able to provide higher matrix completion accuracy than existing methods and that DMF is applicable to large matrices. Copyright © 2017 Elsevier Ltd. All rights reserved.
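
    The core mechanism - jointly optimizing unknown latent inputs and network weights against only the observed entries - can be sketched compactly. The architecture, sizes, and synthetic data below are illustrative, not the paper's exact setup.

    ```python
    # Sketch of deep matrix factorization: optimize latent inputs Z and network
    # weights together to reconstruct only the observed entries of X.
    import torch

    torch.manual_seed(0)
    n, m, d = 200, 50, 4
    # Synthetic nonlinear low-dimensional data with ~40% of entries missing
    Z_true = torch.randn(n, d)
    X = torch.tanh(Z_true @ torch.randn(d, m)) + 0.01 * torch.randn(n, m)
    mask = torch.rand(n, m) < 0.6                    # True where observed

    Z = torch.randn(n, d, requires_grad=True)        # latent variables (the inputs)
    net = torch.nn.Sequential(
        torch.nn.Linear(d, 64), torch.nn.Tanh(), torch.nn.Linear(64, m))
    opt = torch.optim.Adam([Z, *net.parameters()], lr=1e-2)

    for step in range(2000):
        opt.zero_grad()
        loss = ((net(Z) - X)[mask] ** 2).mean()      # fit observed entries only
        loss.backward()
        opt.step()

    # Missing entries are recovered by propagating Z through the network
    X_hat = net(Z).detach()
    rmse_missing = float(((X_hat - X)[~mask] ** 2).mean().sqrt())
    print(f"RMSE on held-out entries: {rmse_missing:.3f}")
    ```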

  1. Beyond R0: Demographic Models for Variability of Lifetime Reproductive Output

    PubMed Central

    Caswell, Hal

    2011-01-01

    The net reproductive rate R0 measures the expected lifetime reproductive output of an individual, and plays an important role in demography, ecology, evolution, and epidemiology. Well-established methods exist to calculate it from age- or stage-classified demographic data. As an expectation, R0 provides no information on variability; empirical measurements of lifetime reproduction universally show high levels of variability, and often positive skewness, among individuals. This is often interpreted as evidence of heterogeneity, and thus of an opportunity for natural selection. However, variability provides evidence of heterogeneity only if it exceeds the level of variability to be expected in a cohort of identical individuals all experiencing the same vital rates. Such comparisons require a way to calculate the statistics of lifetime reproduction from demographic data. Here, a new approach is presented, using the theory of Markov chains with rewards, obtaining all the moments of the distribution of lifetime reproduction. The approach applies to age- or stage-classified models, to constant, periodic, or stochastic environments, and to any kind of reproductive schedule. As examples, I analyze data from six empirical studies, of a variety of animal and plant taxa (nematodes, polychaetes, humans, and several species of perennial plants). PMID:21738586
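
    The Markov-chains-with-rewards machinery for higher moments is beyond a short sketch, but the expectation it generalizes is compact: with a stage-transition/survival matrix U and fertility matrix F, the fundamental matrix N = (I - U)^-1 gives the expected number of visits to each stage, and R0 is the dominant eigenvalue of F N. The 3-stage matrices below are invented for illustration.

    ```python
    # Computing the expectation R0 from a stage-classified model:
    # N = (I - U)^-1 accumulates expected time spent in each stage;
    # R0 is the dominant eigenvalue of F N. Matrices are an invented example.
    import numpy as np

    U = np.array([[0.2, 0.0, 0.0],      # survival/transition probabilities
                  [0.5, 0.6, 0.0],      # (column j = fates of stage-j individuals)
                  [0.0, 0.3, 0.8]])
    F = np.array([[0.0, 1.0, 4.0],      # stage-specific fertilities into stage 1
                  [0.0, 0.0, 0.0],
                  [0.0, 0.0, 0.0]])

    N = np.linalg.inv(np.eye(3) - U)    # fundamental matrix: expected stage visits
    R0 = np.max(np.abs(np.linalg.eigvals(F @ N)))
    print(f"R0 = {R0:.2f} expected offspring per lifetime")
    ```

    The paper's contribution is precisely that this expectation is not enough: the rewards formulation extends the same U and F data to variance, skewness, and all higher moments of lifetime reproduction.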

  2. Does player unavailability affect football teams' match physical outputs? A two-season study of the UEFA champions league.

    PubMed

    Windt, Johann; Ekstrand, Jan; Khan, Karim M; McCall, Alan; Zumbo, Bruno D

    2018-05-01

    Player unavailability negatively affects team performance in elite football. However, whether player unavailability and its concomitant performance decrement is mediated by any changes in teams' match physical outputs is unknown. We examined whether the number of players injured (i.e. unavailable for match selection) was associated with any changes in teams' physical outputs. Prospective cohort study. Between-team variation was calculated by correlating average team availability with average physical outputs. Within-team variation was quantified using linear mixed modelling, using physical outputs - total distance, sprint count (efforts over 20 km/h), and percent of distance covered at high speeds (>14 km/h) - as outcome variables, and player unavailability as the independent variable of interest. To control for other factors that may influence match physical outputs, stage (group stage/knockout), venue (home/away), score differential, ball possession (%), team ranking (UEFA Club Coefficient), and average team age were all included as covariates. Teams' average player unavailability was positively associated with the average number of sprints they performed in matches across two seasons. Multilevel models similarly demonstrated that having 4 unavailable players was associated with 20.8 more sprints during matches in 2015/2016, and with an estimated 0.60-0.77% increase in the proportion of total distance run above 14 km/h in both seasons. Player unavailability had a possibly positive and likely positive association with total match distances in the two respective seasons. Having more players injured and unavailable for match selection was associated with an increase in teams' match physical outputs. Copyright © 2017 Sports Medicine Australia. Published by Elsevier Ltd. All rights reserved.

  3. Can the super model (SUMO) method improve hydrological simulations? Exploratory tests with the GR hydrological models

    NASA Astrophysics Data System (ADS)

    Santos, Léonard; Thirel, Guillaume; Perrin, Charles

    2017-04-01

    Errors made by hydrological models may come from problems in parameter estimation, uncertainty in observed measurements, numerical problems, and from the model conceptualization that simplifies reality. Here we focus on this last issue of hydrological modeling. One of the solutions to reduce structural uncertainty is to use a multimodel method, taking advantage of the great number and the variability of existing hydrological models. In particular, because different models are not similarly good in all situations, using multimodel approaches can improve the robustness of modeled outputs. Traditionally, in hydrology, multimodel methods are based on the output of the model (the simulated flow series). The aim of this poster is to introduce a different approach based on the internal variables of the models. The method is inspired by the SUper MOdel (SUMO, van den Berge et al., 2011) developed for climatology. The idea of the SUMO method is to correct the internal variables of a model taking into account the values of the internal variables of (an)other model(s). This correction is made bilaterally between the different models. The ensemble of the different models constitutes a super model in which all the models exchange information on their internal variables with each other at each time step. Due to this continuity in the exchanges, this multimodel algorithm is more dynamic than traditional multimodel methods. The method will first be tested using two GR4J models (in a state-space representation) with different parameterizations. The results will be presented and compared to traditional multimodel methods that will serve as benchmarks. In the future, other rainfall-runoff models will be used in the super model. References: van den Berge, L. A., Selten, F. M., Wiegerinck, W., and Duane, G. S. (2011). A multi-model ensemble method that combines imperfect models through learning. Earth System Dynamics, 2(1):161-177.
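
    The exchange-on-internal-variables idea can be sketched with two toy stores standing in for the GR4J pair: each model's state is nudged toward the other's at every time step, and the super-model output is taken from the connected ensemble. The models, connection coefficients, and forcing below are invented; in SUMO proper the connection coefficients are learned against observations.

    ```python
    # Toy illustration of the SUMO idea: two imperfect models exchange internal
    # state each time step via nudging terms, rather than averaging outputs.
    import numpy as np

    rng = np.random.default_rng(1)
    precip = rng.gamma(2.0, 2.0, 100)   # synthetic forcing

    k1, k2 = 0.05, 0.3       # two parameterizations of a linear store
    c12, c21 = 0.1, 0.1      # connection (nudging) coefficients; trained in SUMO

    s1 = s2 = 10.0           # internal state: storage
    q_super = []
    for p in precip:
        q1, q2 = k1 * s1, k2 * s2
        s1 += p - q1 + c12 * (s2 - s1)   # each state is pulled toward the other
        s2 += p - q2 + c21 * (s1 - s2)
        q_super.append(0.5 * (q1 + q2))  # super-model output

    print("mean simulated flow:", round(float(np.mean(q_super)), 2))
    ```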

  4. Pricing behavior of non-profit agencies. The case of blood products.

    PubMed

    Jacobs, P; Wilder, R P

    1984-04-01

    In this study we examine the pricing behavior of a non-profit agency, the American National Red Cross blood service units. Two alternative hypotheses are presented: one in which the agency maximizes profits, and one in which output is maximized subject to a breakeven constraint. Following a general approach developed by Eckstein and Fromm, pricing equations for separate blood products are applied to cross-sectional data from Red Cross blood centers to determine the impact of demand, cost, competition, and subsidy variables. The impact of these variables, in particular the impact of the fixed subsidy on price, is shown to be consistent with the output-maximizing model.

  5. From GCM grid cell to agricultural plot: scale issues affecting modelling of climate impact

    PubMed Central

    Baron, Christian; Sultan, Benjamin; Balme, Maud; Sarr, Benoit; Traore, Seydou; Lebel, Thierry; Janicot, Serge; Dingkuhn, Michael

    2005-01-01

    General circulation models (GCMs) are increasingly capable of making relevant predictions of seasonal and long-term climate variability, thus improving prospects of predicting impact on crop yields. This is particularly important for semi-arid West Africa, where climate variability and drought threaten food security. Translating GCM outputs into attainable crop yields is difficult because GCM grid boxes are of larger scale than the processes governing yield, involving partitioning of rain among runoff, evaporation, transpiration, drainage and storage at plot scale. This study analyses the bias introduced to crop simulation when climatic data is aggregated spatially or in time, resulting in loss of relevant variation. A detailed case study was conducted using historical weather data for Senegal, applied to the crop model SARRA-H (version for millet). The study was then extended to a 10°N–17°N climatic gradient and a 31-year climate sequence to evaluate yield sensitivity to the variability of solar radiation and rainfall. Finally, a down-scaling model called LGO (Lebel–Guillot–Onibon), generating local rain patterns from grid cell means, was used to restore the variability lost by aggregation. Results indicate that forcing the crop model with spatially aggregated rainfall causes yield overestimations of 10–50% in dry latitudes, but nearly none in humid zones, due to a biased fraction of rainfall available for crop transpiration. Aggregation of solar radiation data caused significant bias in wetter zones where radiation was limiting yield. Where climatic gradients are steep, these two situations can occur within the same GCM grid cell. Disaggregation of grid cell means into a pattern of virtual synoptic stations having high-resolution rainfall distribution removed much of the bias caused by aggregation and gave realistic simulations of yield. It is concluded that coupling of GCM outputs with plot-level crop models can cause large systematic errors due to scale incompatibility. These errors can be avoided by transforming GCM outputs, especially rainfall, to simulate the variability found at plot level. PMID:16433096

  6. Prediction model of sinoatrial node field potential using high order partial least squares.

    PubMed

    Feng, Yu; Cao, Hui; Zhang, Yanbin

    2015-01-01

    High-order partial least squares (HOPLS) is a novel data processing method that is highly suitable for building prediction models with tensor inputs and outputs. The objective of this study is to build a prediction model of the relationship between the sinoatrial node field potential and high glucose using HOPLS. The three sub-signals of the sinoatrial node field potential made up the model's input. The concentration and the actuation duration of high glucose made up the model's output. The results showed that, on the premise of predicting two-dimensional variables, HOPLS had the same predictive ability and a lower dispersion degree compared with partial least squares (PLS).

  7. Interval Predictor Models for Data with Measurement Uncertainty

    NASA Technical Reports Server (NTRS)

    Lacerda, Marcio J.; Crespo, Luis G.

    2017-01-01

    An interval predictor model (IPM) is a computational model that predicts the range of an output variable given input-output data. This paper proposes strategies for constructing IPMs based on semidefinite programming and sum of squares (SOS). The models are optimal in the sense that they yield an interval valued function of minimal spread containing all the observations. Two different scenarios are considered. The first one is applicable to situations where the data is measured precisely whereas the second one is applicable to data subject to known biases and measurement error. In the latter case, the IPMs are designed to fully contain regions in the input-output space where the data is expected to fall. Moreover, we propose a strategy for reducing the computational cost associated with generating IPMs as well as means to simulate them. Numerical examples illustrate the usage and performance of the proposed formulations.
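
    The paper's semidefinite/SOS machinery is not reproduced here; the linear-programming sketch below captures the basic idea for the precise-data scenario: fit an interval of constant half-width around a polynomial center so that the spread is minimal while every observation is contained. The feature choice and data are illustrative.

    ```python
    # Fixed-width interval predictor sketch: minimize the half-width s of
    # [theta.phi(x) - s, theta.phi(x) + s] subject to containing all data.
    # A simplification of the paper's SOS formulations, solved as an LP.
    import numpy as np
    from scipy.optimize import linprog

    rng = np.random.default_rng(0)
    x = np.sort(rng.uniform(-1, 1, 60))
    y = np.sin(3 * x) + rng.uniform(-0.2, 0.2, 60)

    phi = np.column_stack([x**k for k in range(5)])   # polynomial features
    n, p = phi.shape

    # decision variables: [theta_0..theta_{p-1}, s]; objective: minimize s
    c = np.zeros(p + 1); c[-1] = 1.0
    A = np.block([[ phi, -np.ones((n, 1))],           #  phi.theta - y <= s
                  [-phi, -np.ones((n, 1))]])          #  y - phi.theta <= s
    b = np.concatenate([y, -y])
    res = linprog(c, A_ub=A, b_ub=b,
                  bounds=[(None, None)] * p + [(0, None)])
    theta, s = res.x[:p], res.x[-1]
    print("half-width of tightest enclosing interval:", round(s, 3))
    ```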

  8. Signal to noise quantification of regional climate projections

    NASA Astrophysics Data System (ADS)

    Li, S.; Rupp, D. E.; Mote, P.

    2016-12-01

    One of the biggest challenges in interpreting climate model outputs for impacts studies and adaptation planning is understanding the sources of disagreement among models (which is often used imperfectly as a stand-in for system uncertainty). Internal variability is a primary source of uncertainty in climate projections, especially for precipitation, for which models disagree about even the sign of changes in large areas like the continental US. Taking advantage of a large initial-condition ensemble of regional climate simulations, this study quantifies the magnitude of changes forced by increasing greenhouse gas concentrations relative to internal variability. Results come from a large initial-condition ensemble of regional climate model simulations generated by weather@home, a citizen science computing platform, where the western United States climate was simulated for the recent past (1985-2014) and future (2030-2059) using a 25-km horizontal resolution regional climate model (HadRM3P) nested in global atmospheric model (HadAM3P). We quantify grid point level signal-to-noise not just in temperature and precipitation responses, but also the energy and moisture flux terms that are related to temperature and precipitation responses, to provide important insights regarding uncertainty in climate change projections at local and regional scales. These results will aid modelers in determining appropriate ensemble sizes for different climate variables and help users of climate model output with interpreting climate model projections.
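
    The central quantity here is simple to compute once an initial-condition ensemble exists: the signal is the ensemble-mean change between periods, and the noise is the spread of that change across members. The shapes and synthetic fields below are illustrative stand-ins for the weather@home output.

    ```python
    # Grid-point signal-to-noise sketch for an initial-condition ensemble.
    # Member count, grid shape, and fields are invented for illustration.
    import numpy as np

    rng = np.random.default_rng(0)
    members, ny, nx = 30, 40, 50
    past = rng.normal(0.0, 1.0, (members, ny, nx))    # e.g. 1985-2014 seasonal means
    future = rng.normal(0.3, 1.0, (members, ny, nx))  # e.g. 2030-2059, forced shift 0.3

    signal = future.mean(axis=0) - past.mean(axis=0)          # forced change
    noise = (future - past).std(axis=0, ddof=1)               # internal variability
    snr = signal / noise
    print("fraction of grid points with |SNR| > 1:", float(np.mean(np.abs(snr) > 1)))
    ```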

  9. Model for the techno-economic analysis of common work of wind power and CCGT power plant to offer constant level of power in the electricity market

    NASA Astrophysics Data System (ADS)

    Tomsic, Z.; Rajsl, I.; Filipovic, M.

    2017-11-01

    Wind power varies over time, mainly under the influence of meteorological fluctuations. The variations occur on all time scales. Understanding these variations and their predictability is of key importance for the integration and optimal utilization of wind in the power system. There are two major attributes of variable generation that notably impact the participation on power exchanges: Variability (the output of variable generation changes and resulting in fluctuations in the plant output on all time scales) and Uncertainty (the magnitude and timing of variable generation output is less predictable, wind power output has low levels of predictability). Because of these variability and uncertainty wind plants cannot participate to electricity market, especially to power exchanges. For this purpose, the paper presents techno-economic analysis of work of wind plants together with combined cycle gas turbine (CCGT) plant as support for offering continues power to electricity market. A model of wind farms and CCGT plant was developed in program PLEXOS based on real hourly input data and all characteristics of CCGT with especial analysis of techno-economic characteristics of different types of starts and stops of the plant. The Model analyzes the followings: costs of different start-stop characteristics (hot, warm, cold start-ups and shutdowns) and part load performance of CCGT. Besides the costs, the technical restrictions were considered such as start-up time depending on outage duration, minimum operation time, and minimum load or peaking capability. For calculation purposes, the following parameters are necessary to know in order to be able to economically evaluate changes in the start-up process: ramp up and down rate, time of start time reduction, fuel mass flow during start, electricity production during start, variable cost of start-up process, cost and charges for life time consumption for each start and start type, remuneration during start up time regarding expected or unexpected starts, the cost and revenues for balancing energy (important when participating in electricity market), and the cost or revenues for CO2-certificates. Main motivation for this analysis is to investigate possibilities to participate on power exchanges by offering continues guarantied power from wind plants by backing-up them with CCGT power plant.

  10. A method for diagnosing surface parameters using geostationary satellite imagery and a boundary-layer model. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Polansky, A. C.

    1982-01-01

    A method for diagnosing surface parameters on a regional scale via geosynchronous satellite imagery is presented. Moisture availability, thermal inertia, atmospheric heat flux, and total evaporation are determined from three infrared images obtained from the Geostationary Operational Environmental Satellite (GOES). Three GOES images (early morning, midafternoon, and night) are obtained from computer tape. Two temperature-difference images are then created. The boundary-layer model is run, and its output is inverted via cubic regression equations. The satellite imagery is efficiently converted into output-variable fields. All computations are executed on a PDP 11/34 minicomputer. Output fields can be produced within one hour of the availability of aligned satellite subimages of a target area.

  11. Intercomparison of Downscaling Methods on Hydrological Impact for Earth System Model of NE United States

    NASA Astrophysics Data System (ADS)

    Yang, P.; Fekete, B. M.; Rosenzweig, B.; Lengyel, F.; Vorosmarty, C. J.

    2012-12-01

    Atmospheric dynamics are essential inputs to Regional-scale Earth System Models (RESMs). Variables including surface air temperature, total precipitation, solar radiation, wind speed and humidity must be downscaled from coarse-resolution, global General Circulation Models (GCMs) to the high temporal and spatial resolution required for regional modeling. However, this downscaling procedure can be challenging due to the need to correct for bias from the GCM and to capture the spatiotemporal heterogeneity of the regional dynamics. In this study, the results obtained using several downscaling techniques and observational datasets were compared for a RESM of the Northeast Corridor of the United States. Previous efforts have enhanced GCM outputs through bias correction using novel techniques. For example, the Climate Impact Research group at the Potsdam Institute developed a series of bias-corrected GCMs towards the next generation of climate change scenarios (Schiermeier, 2012; Moss et al., 2010). Techniques to better represent the heterogeneity of climate variables have also been improved using statistical approaches (Maurer, 2008; Abatzoglou, 2011). For this study, four downscaling approaches to transform bias-corrected HADGEM2-ES model output (daily at 0.5 x 0.5 degree) to the 3' x 3' (longitude x latitude) daily and monthly resolution required for the Northeast RESM were compared: 1) bilinear interpolation, 2) daily bias-corrected spatial downscaling (D-BCSD) with gridded meteorological datasets (developed by Abatzoglou, 2011), 3) monthly bias-corrected spatial disaggregation (M-BCSD) with CRU (Climate Research Unit) data, and 4) dynamic downscaling based on the Weather Research and Forecasting (WRF) model. Spatio-temporal analysis of the variability in precipitation was conducted over the study domain. Validation of the variables from the different downscaling methods against observational datasets was carried out to assess the downscaled climate model outputs. The effects of using the different approaches to downscale atmospheric variables (specifically air temperature and precipitation) for use as inputs to the Water Balance Model (WBMPlus, Vorosmarty et al., 1998; Wisser et al., 2008) for simulation of daily discharge and monthly stream flow in the Northeast US for a 100-year period in the 21st century were also assessed. Statistical techniques, especially monthly bias-corrected spatial disaggregation (M-BCSD), showed a potential advantage over the other methods for daily discharge and monthly stream flow simulation. However, dynamic downscaling will provide important complements to the statistical approaches tested.
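
    As a concrete example of the statistical family compared here (BCSD-type methods rest on it), the sketch below applies empirical quantile mapping: each model value is replaced by the observed value at the same quantile of the historical model distribution. The synthetic gamma-distributed "precipitation" series are illustrative.

    ```python
    # Empirical quantile-mapping bias correction on synthetic data.
    import numpy as np

    rng = np.random.default_rng(0)
    obs = rng.gamma(2.0, 3.0, 5000)        # observed daily precipitation
    gcm_hist = rng.gamma(2.0, 4.5, 5000)   # biased GCM historical simulation
    gcm_fut = rng.gamma(2.2, 4.5, 5000)    # GCM future projection

    def quantile_map(x, model_hist, observed):
        """Replace each model value with the observed value at the same quantile."""
        q = np.searchsorted(np.sort(model_hist), x) / len(model_hist)
        q = np.clip(q, 0.0, 1.0 - 1e-9)
        return np.quantile(observed, q)

    corrected = quantile_map(gcm_fut, gcm_hist, obs)
    print("raw future mean:", round(float(gcm_fut.mean()), 2),
          "| bias-corrected mean:", round(float(corrected.mean()), 2))
    ```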

  12. Documentation of the dynamic parameter, water-use, stream and lake flow routing, and two summary output modules and updates to surface-depression storage simulation and initial conditions specification options with the Precipitation-Runoff Modeling System (PRMS)

    USGS Publications Warehouse

    Regan, R. Steve; LaFontaine, Jacob H.

    2017-10-05

    This report documents seven enhancements to the U.S. Geological Survey (USGS) Precipitation-Runoff Modeling System (PRMS) hydrologic simulation code: two time-series input options, two new output options, and three updates of existing capabilities. The enhancements are (1) new dynamic parameter module, (2) new water-use module, (3) new Hydrologic Response Unit (HRU) summary output module, (4) new basin variables summary output module, (5) new stream and lake flow routing module, (6) update to surface-depression storage and flow simulation, and (7) update to the initial-conditions specification. This report relies heavily upon U.S. Geological Survey Techniques and Methods, book 6, chapter B7, which documents PRMS version 4 (PRMS-IV). A brief description of PRMS is included in this report.

  13. Multimodel simulations of forest harvesting effects on long‐term productivity and CN cycling in aspen forests.

    PubMed

    Wang, Fugui; Mladenoff, David J; Forrester, Jodi A; Blanco, Juan A; Schelle, Robert M; Peckham, Scott D; Keough, Cindy; Lucash, Melissa S; Gower, Stith T

    The effects of forest management on soil carbon (C) and nitrogen (N) dynamics vary by harvest type and species. We simulated long-term effects of bole-only harvesting of aspen (Populus tremuloides) on stand productivity and interaction of CN cycles with a multiple model approach. Five models, Biome-BGC, CENTURY, FORECAST, LANDIS-II with Century-based soil dynamics, and PnET-CN, were run for 350 yr with seven harvesting events on nutrient-poor, sandy soils representing northwestern Wisconsin, United States. Twenty CN state and flux variables were summarized from the models' outputs and statistically analyzed using ordination and variance analysis methods. The multiple models' averages suggest that bole-only harvest would not significantly affect long-term site productivity of aspen, though declines in soil organic matter and soil N were significant. Along with direct N removal by harvesting, extensive leaching after harvesting before canopy closure was another major cause of N depletion. These five models were notably different in output values of the 20 variables examined, although there were some similarities for certain variables. PnET-CN produced unique results for every variable, and CENTURY showed fewer outliers and similar temporal patterns to the mean of all models. In general, we demonstrated that when there are no site-specific data for fine-scale calibration and evaluation of a single model, the multiple model approach may be a more robust approach for long-term simulations. In addition, multimodeling may also improve the calibration and evaluation of an individual model.

  14. Countermovement depth - a variable which clarifies the relationship between the maximum power output and height of a vertical jump.

    PubMed

    Gajewski, Jan; Michalski, Radosław; Buśko, Krzysztof; Mazur-Różycka, Joanna; Staniak, Zbigniew

    2018-01-01

    The aim of this study was to identify the determinants of peak power achieved during vertical jumps in order to clarify the relationship between the height of a jump and the ability to exert maximum power. One hundred young (16.8 ± 1.8 years) sportsmen participated in the study (body height 1.861 ± 0.109 m, body weight 80.3 ± 9.2 kg). Each participant performed three jump tests: countermovement jump (CMJ), akimbo countermovement jump (ACMJ), and spike jump (SPJ). A force plate was used to measure ground reaction force and to determine peak power output. The following explanatory variables were included in the model: jump height, body mass, and the lowering of the centre of mass before launch (countermovement depth). A model was created using multiple regression analysis and allometric scaling. The model was used to calculate the expected power value for each participant, which correlated strongly with real values. The value of the coefficient of determination R2 equalled 0.89, 0.90 and 0.98, respectively, for the CMJ, ACMJ, and SPJ jumps. The countermovement depth proved to be a variable strongly affecting the maximum power of the jump. If the countermovement depth remains constant, the relative peak power is a simple function of jump height. The results suggest that the jump height of an individual is an exact indicator of their ability to produce maximum power. The presented model has the potential to be utilized under field conditions for estimating the maximum power output of vertical jumps.
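
    The allometric form of such a model can be sketched directly: posit P = a·m^b1·h^b2·d^b3 and fit the exponents by ordinary least squares on logarithms. The exponents and data below are synthetic, not the study's estimates.

    ```python
    # Allometric regression sketch: recover exponents of P = a*m^b1*h^b2*d^b3
    # by linear least squares on logs. All numbers are synthetic.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 100
    mass = rng.normal(80, 9, n)                 # body mass (kg)
    height = rng.uniform(0.25, 0.55, n)         # jump height (m)
    depth = rng.uniform(0.20, 0.45, n)          # countermovement depth (m)
    power = (60 * mass**1.0 * height**0.9 * depth**-0.4
             * np.exp(rng.normal(0, 0.05, n)))  # multiplicative noise

    X = np.column_stack([np.ones(n), np.log(mass), np.log(height), np.log(depth)])
    coef, *_ = np.linalg.lstsq(X, np.log(power), rcond=None)
    print("fitted exponents (mass, height, depth):", np.round(coef[1:], 2))
    ```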

  15. Study of electrode slice forming of bicycle dynamo hub power connector

    NASA Astrophysics Data System (ADS)

    Chen, Dyi-Cheng; Jao, Chih-Hsuan

    2013-12-01

    Taiwan's bicycle industry has earned an international reputation as a bicycle kingdom, and with global warming driving the rise of green energy, the development of hub dynamo electrode slices and power output connectors brings new opportunities to the industry. In this study, patents related to power output connectors were surveyed, and the collected documents served as the basis for a design that delivers power output through a connector with the fewest structural components. The design objectives for the connector were lowest cost, strongest structure, and highest output efficiency. The computer-aided drawing software SolidWorks was used to build 3D models of the power output connector parts; the overall assembly was evaluated for part types, assembly concepts, weather resistance, water resistance, corrosion resistance, vibration resistance, and power flow stability. The 3D model was then imported into computer-aided finite element analysis software to simulate the manufacturing process of the connector parts. A series of simulation analyses, with variables based on first-stage and second-stage forming, was run to examine the effective stress, effective strain, press speed, and die radial load distribution when forming the electrode slice of a bicycle dynamo hub.

  16. Pelagic Habitat Analysis Module (PHAM) for GIS Based Fisheries Decision Support

    NASA Technical Reports Server (NTRS)

    Kiefer, D. A.; Armstrong, Edward M.; Harrison, D. P.; Hinton, M. G.; Kohin, S.; Snyder, S.; O'Brien, F. J.

    2011-01-01

    We have assembled a system that integrates satellite and model output with fisheries data. We have developed tools that allow analysis of the interaction between species and key environmental variables, and demonstrated the capacity to accurately map the habitat of the thresher sharks Alopias vulpinus and A. pelagicus. Their seasonal migration along the California Current is at least partly driven by the seasonal migration of sardine, a key prey of the sharks.

  17. Optimal output fast feedback in two-time scale control of flexible arms

    NASA Technical Reports Server (NTRS)

    Siciliano, B.; Calise, A. J.; Jonnalagadda, V. R. P.

    1986-01-01

    Control of lightweight flexible arms moving along predefined paths can be successfully synthesized on the basis of a two-time scale approach. A model following control can be designed for the reduced order slow subsystem. The fast subsystem is a linear system in which the slow variables act as parameters. The flexible fast variables which model the deflections of the arm along the trajectory can be sensed through strain gage measurements. For full state feedback design the derivatives of the deflections need to be estimated. The main contribution of this work is the design of an output feedback controller which includes a fixed order dynamic compensator, based on a recent convergent numerical algorithm for calculating LQ optimal gains. The design procedure is tested by means of simulation results for the one link flexible arm prototype in the laboratory.

  18. Binary recursive partitioning: background, methods, and application to psychology.

    PubMed

    Merkle, Edgar C; Shaffer, Victoria A

    2011-02-01

    Binary recursive partitioning (BRP) is a computationally intensive statistical method that can be used in situations where linear models are often used. Instead of imposing many assumptions to arrive at a tractable statistical model, BRP simply seeks to accurately predict a response variable based on values of predictor variables. The method outputs a decision tree depicting the predictor variables that were related to the response variable, along with the nature of the variables' relationships. No significance tests are involved, and the tree's 'goodness' is judged based on its predictive accuracy. In this paper, we describe BRP methods in a detailed manner and illustrate their use in psychological research. We also provide R code for carrying out the methods.
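
    The paper provides R code; a minimal Python equivalent with scikit-learn shows the same workflow - no significance tests, just a fitted tree whose printed splits display which predictors mattered and how. The data and variable names are invented.

    ```python
    # Minimal binary recursive partitioning example (scikit-learn stands in
    # for the paper's R code). Data are synthetic.
    import numpy as np
    from sklearn.tree import DecisionTreeClassifier, export_text

    rng = np.random.default_rng(0)
    n = 500
    age = rng.uniform(18, 80, n)
    score = rng.normal(0, 1, n)
    y = ((age > 45) & (score > 0)).astype(int)   # response depends on two predictors

    X = np.column_stack([age, score])
    tree = DecisionTreeClassifier(max_depth=3, min_samples_leaf=20).fit(X, y)

    # The fitted tree is itself the output of interest: its splits show which
    # predictors were related to the response, and in what way.
    print(export_text(tree, feature_names=["age", "score"]))
    ```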

  19. Metamodeling and mapping of nitrate flux in the unsaturated zone and groundwater, Wisconsin, USA

    NASA Astrophysics Data System (ADS)

    Nolan, Bernard T.; Green, Christopher T.; Juckem, Paul F.; Liao, Lixia; Reddy, James E.

    2018-04-01

    Nitrate contamination of groundwater in agricultural areas poses a major challenge to the sustainability of water resources. Aquifer vulnerability models are useful tools that can help resource managers identify areas of concern, but quantifying nitrogen (N) inputs in such models is challenging, especially at large spatial scales. We sought to improve regional nitrate (NO3^-) input functions by characterizing unsaturated zone NO3^- transport to groundwater through use of surrogate, machine-learning metamodels of a process-based N flux model. The metamodels used boosted regression trees (BRTs) to relate mappable landscape variables to parameters and outputs of a previous "vertical flux method" (VFM) applied at sampled wells in the Fox, Wolf, and Peshtigo (FWP) river basins in northeastern Wisconsin. In this context, the metamodels upscaled the VFM results throughout the region, and the VFM parameters and outputs are the metamodel response variables. The study area encompassed the domain of a detailed numerical model that provided additional predictor variables, including groundwater recharge, to the metamodels. We used a statistical learning framework to test a range of model complexities to identify suitable hyperparameters of the six BRT metamodels corresponding to each response variable of interest: NO3^- source concentration factor (which determines the local NO3^- input concentration); unsaturated zone travel time; NO3^- concentration at the water table in 1980, 2000, and 2020 (three separate metamodels); and NO3^- "extinction depth", the eventual steady-state depth of the NO3^- front. The final metamodels were trained to 129 wells within the active numerical flow model area, and considered 58 mappable predictor variables compiled in a geographic information system (GIS). These metamodels had training and cross-validation testing R2 values of 0.52-0.86 and 0.22-0.38, respectively, and predictions were compiled as maps of the above response variables. Testing performance was reasonable, considering that we limited the metamodel predictor variables to mappable factors as opposed to using all available VFM input variables. Relationships between metamodel predictor variables and mapped outputs were generally consistent with expectations, e.g. with greater source concentrations and NO3^- at the groundwater table in areas of intensive crop use and well-drained soils. Shorter unsaturated zone travel times in poorly drained areas likely indicated preferential flow through clay soils, and a tendency for fine-grained deposits to collocate with areas of shallower water table. Numerical estimates of groundwater recharge were important in the metamodels and may have been a proxy for N input and redox conditions in the northern FWP, which had shallow predicted NO3^- extinction depth. The metamodel results provide proof-of-concept for regional characterization of unsaturated zone NO3^- transport processes in a statistical framework based on readily mappable GIS input variables.
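
    A stripped-down version of the metamodeling step can be sketched with scikit-learn's gradient-boosted trees standing in for the BRTs: train on mappable covariates at a small number of "wells" to predict a process-model output, then report cross-validated skill. The sizes, covariates, and stand-in response below are invented.

    ```python
    # Metamodel sketch: boosted regression trees trained to upscale a
    # process-model output from mappable covariates (synthetic stand-ins).
    import numpy as np
    from sklearn.ensemble import GradientBoostingRegressor
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    n = 129                                     # number of training wells
    covars = rng.uniform(0, 1, (n, 5))          # e.g. crop fraction, drainage class...
    travel_time = (5 + 20 * covars[:, 0] - 10 * covars[:, 1] * covars[:, 2]
                   + rng.normal(0, 2, n))       # stand-in VFM output

    brt = GradientBoostingRegressor(n_estimators=300, max_depth=3,
                                    learning_rate=0.05, subsample=0.7)
    print("cross-validated R^2:",
          round(float(cross_val_score(brt, covars, travel_time, cv=5).mean()), 2))
    ```

    The fitted model can then be applied to the same covariates mapped over the whole region, which is the upscaling move the abstract describes.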

  20. Artificial neural networks modelling the prednisolone nanoprecipitation in microfluidic reactors.

    PubMed

    Ali, Hany S M; Blagden, Nicholas; York, Peter; Amani, Amir; Brook, Toni

    2009-06-28

    This study employs artificial neural networks (ANNs) to create a model that identifies relationships between variables affecting drug nanoprecipitation in microfluidic reactors. The input variables examined were saturation levels of prednisolone, solvent and antisolvent flow rates, and microreactor inlet angles and internal diameters, while particle size was the single output. ANN software was used to analyse a set of data obtained by random selection of the variables. The developed model was then assessed using a separate set of validation data and provided good agreement with the observed results. The antisolvent flow rate was found to have the dominant role in determining final particle size.

  1. USING METEOROLOGICAL MODEL OUTPUT AS A SURROGATE FOR ON-SITE OBSERVATIONS TO PREDICT DEPOSITION VELOCITY

    EPA Science Inventory

    The National Oceanic and Atmospheric Administration's Multi-Layer Model (NOAA-MLM) is used by several operational dry deposition networks for estimating the deposition velocity of O3, SO2, HNO3, and particles. The NOAA-MLM requires hourly values of meteorological variables and...

  2. Examining impulse-variability in overarm throwing.

    PubMed

    Urbin, M A; Stodden, David; Boros, Rhonda; Shannon, David

    2012-01-01

    The purpose of this study was to examine variability in overarm throwing velocity and spatial output error at various percentages of maximum to test the prediction of an inverted-U function as predicted by impulse-variability theory and a speed-accuracy trade-off as predicted by Fitts' law. Thirty subjects (16 skilled, 14 unskilled) were instructed to throw a tennis ball at seven percentages of their maximum velocity (40-100%) in random order (9 trials per condition) at a target 30 feet away. Throwing velocity was measured with a radar gun and interpreted as an index of overall systemic power output. Within-subject throwing velocity variability was examined using within-subjects repeated-measures ANOVAs (7 repeated conditions) with built-in polynomial contrasts. Spatial error was analyzed using mixed-model regression. Results indicated a quadratic fit, with variability in throwing velocity increasing from 40% up to 60%, where it peaked, and then decreasing at each subsequent interval to maximum (p < .001, η² = .555). There was no linear relationship between speed and accuracy. Overall, these data support the notion of an inverted-U function in overarm throwing velocity variability as both skilled and unskilled subjects approach maximum effort. However, these data do not support the notion of a speed-accuracy trade-off. The consistent demonstration of an inverted-U function associated with systemic power output variability indicates an enhanced capability to regulate aspects of force production and relative timing between segments as individuals approach maximum effort, even in a complex ballistic skill.

  3. Sources of signal-dependent noise during isometric force production.

    PubMed

    Jones, Kelvin E; Hamilton, Antonia F; Wolpert, Daniel M

    2002-09-01

    It has been proposed that the invariant kinematics observed during goal-directed movements result from reducing the consequences of signal-dependent noise (SDN) on motor output. The purpose of this study was to investigate the presence of SDN during isometric force production and determine how central and peripheral components contribute to this feature of motor control. Peripheral and central components were distinguished experimentally by comparing voluntary contractions to those elicited by electrical stimulation of the extensor pollicis longus muscle. To determine other factors of motor-unit physiology that may contribute to SDN, a model was constructed and its output compared with the empirical data. SDN was evident in voluntary isometric contractions as a linear scaling of force variability (SD) with respect to the mean force level. However, during electrically stimulated contractions to the same force levels, the variability remained constant over the same range of mean forces. When the subjects were asked to combine voluntary with stimulation-induced contractions, the linear scaling relationship between the SD and mean force returned. The modeling results highlight that much of the basic physiological organization of the motor-unit pool, such as range of twitch amplitudes and range of recruitment thresholds, biases force output to exhibit linearly scaled SDN. This is in contrast to the square root scaling of variability with mean force present in any individual motor-unit of the pool. Orderly recruitment by twitch amplitude was a necessary condition for producing linearly scaled SDN. Surprisingly, the scaling of SDN was independent of the variability of motoneuron firing and therefore by inference, independent of presynaptic noise in the motor command. We conclude that the linear scaling of SDN during voluntary isometric contractions is a natural by-product of the organization of the motor-unit pool that does not depend on signal-dependent noise in the motor command. Synaptic noise in the motor command and common drive, which give rise to the variability and synchronization of motoneuron spiking, determine the magnitude of the force variability at a given level of mean force output.

  4. Sensitivity analysis of radionuclides atmospheric dispersion following the Fukushima accident

    NASA Astrophysics Data System (ADS)

    Girard, Sylvain; Korsakissok, Irène; Mallet, Vivien

    2014-05-01

    Atmospheric dispersion models are used in response to accidental releases with two purposes: (1) minimising the population exposure during the accident and (2) complementing field measurements for the assessment of short- and long-term environmental and sanitary impacts. The predictions of these models are subject to considerable uncertainties of various origins. Notably, input data, such as meteorological fields or estimations of emitted quantities as a function of time, are highly uncertain. The case studied here is the atmospheric release of radionuclides following the Fukushima Daiichi disaster. The model used in this study is Polyphemus/Polair3D, from which IRSN's operational long-distance atmospheric dispersion model ldX derives. A sensitivity analysis was conducted in order to estimate the relative importance of a set of identified uncertainty sources. The complexity of this task was increased by four characteristics shared by most environmental models: (1) high-dimensional inputs; (2) correlated inputs or inputs with complex structures; (3) high-dimensional output; and (4) a multiplicity of purposes that require sophisticated and non-systematic post-processing of the output. The sensitivities of a set of outputs were estimated with the Morris screening method. The input ranking was highly dependent on the considered output. Yet, a few variables, such as the horizontal diffusion coefficient or cloud thickness, were found to have a weak influence on most of them and could be discarded from further studies. The sensitivity analysis procedure was also applied to indicators of the model performance computed on a set of gamma dose rate observations. This original approach is of particular interest since observations could be used later to calibrate the probability distributions of the input variables. Indeed, only the variables that are influential on performance scores are likely to allow for calibration. An indicator based on emission-peak time matching was elaborated in order to complement classical statistical scores, which were dominated by deposit dose rates and almost insensitive to lower-atmosphere dose rates. The substantial sensitivity of these performance indicators is auspicious for future calibration attempts and indicates that the simple perturbations used here may be sufficient to represent an essential part of the overall uncertainty.
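
    For readers unfamiliar with the Morris method, the sketch below shows a typical elementary-effects screening using the SALib library, with the dispersion model replaced by a cheap stand-in function; the three parameter names and bounds are invented for illustration.

    ```python
    # Morris elementary-effects screening with SALib; the dispersion model
    # is replaced by a cheap stand-in. Parameter names/bounds are invented.
    import numpy as np
    from SALib.sample import morris as morris_sample
    from SALib.analyze import morris as morris_analyze

    problem = {
        "num_vars": 3,
        "names": ["horizontal_diffusion", "cloud_thickness", "emission_rate"],
        "bounds": [[0.5, 2.0], [100, 1000], [0.1, 10.0]],
    }

    X = morris_sample.sample(problem, 50, num_levels=4)   # 50 trajectories

    def model(x):                       # stand-in for the dispersion model
        kh, cloud, q = x
        return q * np.exp(-0.001 * cloud) + 0.05 * kh

    Y = np.apply_along_axis(model, 1, X)
    res = morris_analyze.analyze(problem, X, Y, num_levels=4)
    for name, mu, sigma in zip(problem["names"], res["mu_star"], res["sigma"]):
        print(f"{name}: mu* = {mu:.3f}, sigma = {sigma:.3f}")
    ```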

  5. Sensitivity Analysis of the Land Surface Model NOAH-MP for Different Model Fluxes

    NASA Astrophysics Data System (ADS)

    Mai, Juliane; Thober, Stephan; Samaniego, Luis; Branch, Oliver; Wulfmeyer, Volker; Clark, Martyn; Attinger, Sabine; Kumar, Rohini; Cuntz, Matthias

    2015-04-01

    Land Surface Models (LSMs) use a plenitude of process descriptions to represent the carbon, energy and water cycles. They are highly complex and computationally expensive. Practitioners, however, are often only interested in specific outputs of the model such as latent heat or surface runoff. In model applications like parameter estimation, the most important parameters are then chosen by experience or expert knowledge. Hydrologists interested in surface runoff therefore chose mostly soil parameters while biogeochemists interested in carbon fluxes focus on vegetation parameters. However, this might lead to the omission of parameters that are important, for example, through strong interactions with the parameters chosen. It also happens during model development that some process descriptions contain fixed values, which are supposedly unimportant parameters. However, these hidden parameters remain normally undetected although they might be highly relevant during model calibration. Sensitivity analyses are used to identify informative model parameters for a specific model output. Standard methods for sensitivity analysis such as Sobol indexes require large amounts of model evaluations, specifically in case of many model parameters. We hence propose to first use a recently developed inexpensive sequential screening method based on Elementary Effects that has proven to identify the relevant informative parameters. This reduces the number parameters and therefore model evaluations for subsequent analyses such as sensitivity analysis or model calibration. In this study, we quantify parametric sensitivities of the land surface model NOAH-MP that is a state-of-the-art LSM and used at regional scale as the land surface scheme of the atmospheric Weather Research and Forecasting Model (WRF). NOAH-MP contains multiple process parameterizations yielding a considerable amount of parameters (˜ 100). Sensitivities for the three model outputs (a) surface runoff, (b) soil drainage and (c) latent heat are calculated on twelve Model Parameter Estimation Experiment (MOPEX) catchments ranging in size from 1020 to 4421 km2. This allows investigation of parametric sensitivities for distinct hydro-climatic characteristics, emphasizing different land-surface processes. The sequential screening identifies the most informative parameters of NOAH-MP for different model output variables. The number of parameters is reduced substantially for all of the three model outputs to approximately 25. The subsequent Sobol method quantifies the sensitivities of these informative parameters. The study demonstrates the existence of sensitive, important parameters in almost all parts of the model irrespective of the considered output. Soil parameters, e.g., are informative for all three output variables whereas plant parameters are not only informative for latent heat but also for soil drainage because soil drainage is strongly coupled to transpiration through the soil water balance. These results contrast to the choice of only soil parameters in hydrological studies and only plant parameters in biogeochemical ones. The sequential screening identified several important hidden parameters that carry large sensitivities and have hence to be included during model calibration.

  6. Sediment fingerprinting experiments to test the sensitivity of multivariate mixing models

    NASA Astrophysics Data System (ADS)

    Gaspar, Leticia; Blake, Will; Smith, Hugh; Navas, Ana

    2014-05-01

    Sediment fingerprinting techniques provide insight into the dynamics of sediment transfer processes and support for catchment management decisions. As questions being asked of fingerprinting datasets become increasingly complex, validation of model output and sensitivity tests are increasingly important. This study adopts an experimental approach to explore the validity and sensitivity of mixing model outputs for materials with contrasting geochemical and particle size composition. The experiments reported here focused on (i) the sensitivity of model output to different fingerprint selection procedures and (ii) the influence of source material particle size distributions on model output. Five soils with significantly different geochemistry, soil organic matter and particle size distributions were selected as experimental source materials. A total of twelve sediment mixtures were prepared in the laboratory by combining different quantified proportions of the < 63 µm fraction of the five source soils i.e. assuming no fluvial sorting of the mixture. The geochemistry of all source and mixture samples (5 source soils and 12 mixed soils) were analysed using X-ray fluorescence (XRF). Tracer properties were selected from 18 elements for which mass concentrations were found to be significantly different between sources. Sets of fingerprint properties that discriminate target sources were selected using a range of different independent statistical approaches (e.g. Kruskal-Wallis test, Discriminant Function Analysis (DFA), Principal Component Analysis (PCA), or correlation matrix). Summary results for the use of the mixing model with the different sets of fingerprint properties for the twelve mixed soils were reasonably consistent with the initial mixing percentages initially known. Given the experimental nature of the work and dry mixing of materials, geochemical conservative behavior was assumed for all elements, even for those that might be disregarded in aquatic systems (e.g. P). In general, the best fits between actual and modeled proportions were found using a set of nine tracer properties (Sr, Rb, Fe, Ti, Ca, Al, P, Si, K, Si) that were derived using DFA coupled with a multivariate stepwise algorithm, with errors between real and estimated value that did not exceed 6.7 % and values of GOF above 94.5 %. The second set of experiments aimed to explore the sensitivity of model output to variability in the particle size of source materials assuming that a degree of fluvial sorting of the resulting mixture took place. Most particle size correction procedures assume grain size affects are consistent across sources and tracer properties which is not always the case. Consequently, the < 40 µm fraction of selected soil mixtures was analysed to simulate the effect of selective fluvial transport of finer particles and the results were compared to those for source materials. Preliminary findings from this experiment demonstrate the sensitivity of the numerical mixing model outputs to different particle size distributions of source material and the variable impact of fluvial sorting on end member signatures used in mixing models. The results suggest that particle size correction procedures require careful scrutiny in the context of variable source characteristics.

  7. Human-arm-and-hand-dynamic model with variability analyses for a stylus-based haptic interface.

    PubMed

    Fu, Michael J; Cavuşoğlu, M Cenk

    2012-12-01

    Haptic interface research benefits from accurate human arm models for control and system design. The literature contains many human arm dynamic models but lacks detailed variability analyses. Without accurate measurements, variability is modeled in a very conservative manner, leading to less than optimal controller and system designs. This paper not only presents models for human arm dynamics but also develops inter- and intrasubject variability models for a stylus-based haptic device. Data from 15 human subjects (nine male, six female, ages 20-32) were collected using a Phantom Premium 1.5a haptic device for system identification. In this paper, grip-force-dependent models were identified for 1-3-N grip forces in the three spatial axes. Also, variability due to human subjects and grip-force variation were modeled as both structured and unstructured uncertainties. For both forms of variability, the maximum variation, 95 %, and 67 % confidence interval limits were examined. All models were in the frequency domain with force as input and position as output. The identified models enable precise controllers targeted to a subset of possible human operator dynamics.

  8. Fuzzy parametric uncertainty analysis of linear dynamical systems: A surrogate modeling approach

    NASA Astrophysics Data System (ADS)

    Chowdhury, R.; Adhikari, S.

    2012-10-01

    Uncertainty propagation engineering systems possess significant computational challenges. This paper explores the possibility of using correlated function expansion based metamodelling approach when uncertain system parameters are modeled using Fuzzy variables. In particular, the application of High-Dimensional Model Representation (HDMR) is proposed for fuzzy finite element analysis of dynamical systems. The HDMR expansion is a set of quantitative model assessment and analysis tools for capturing high-dimensional input-output system behavior based on a hierarchy of functions of increasing dimensions. The input variables may be either finite-dimensional (i.e., a vector of parameters chosen from the Euclidean space RM) or may be infinite-dimensional as in the function space CM[0,1]. The computational effort to determine the expansion functions using the alpha cut method scales polynomially with the number of variables rather than exponentially. This logic is based on the fundamental assumption underlying the HDMR representation that only low-order correlations among the input variables are likely to have significant impacts upon the outputs for most high-dimensional complex systems. The proposed method is integrated with a commercial Finite Element software. Modal analysis of a simplified aircraft wing with Fuzzy parameters has been used to illustrate the generality of the proposed approach. In the numerical examples, triangular membership functions have been used and the results have been validated against direct Monte Carlo simulations.

  9. Load estimator (LOADEST): a FORTRAN program for estimating constituent loads in streams and rivers

    USGS Publications Warehouse

    Runkel, Robert L.; Crawford, Charles G.; Cohn, Timothy A.

    2004-01-01

    LOAD ESTimator (LOADEST) is a FORTRAN program for estimating constituent loads in streams and rivers. Given a time series of streamflow, additional data variables, and constituent concentration, LOADEST assists the user in developing a regression model for the estimation of constituent load (calibration). Explanatory variables within the regression model include various functions of streamflow, decimal time, and additional user-specified data variables. The formulated regression model then is used to estimate loads over a user-specified time interval (estimation). Mean load estimates, standard errors, and 95 percent confidence intervals are developed on a monthly and(or) seasonal basis. The calibration and estimation procedures within LOADEST are based on three statistical estimation methods. The first two methods, Adjusted Maximum Likelihood Estimation (AMLE) and Maximum Likelihood Estimation (MLE), are appropriate when the calibration model errors (residuals) are normally distributed. Of the two, AMLE is the method of choice when the calibration data set (time series of streamflow, additional data variables, and concentration) contains censored data. The third method, Least Absolute Deviation (LAD), is an alternative to maximum likelihood estimation when the residuals are not normally distributed. LOADEST output includes diagnostic tests and warnings to assist the user in determining the appropriate estimation method and in interpreting the estimated loads. This report describes the development and application of LOADEST. Sections of the report describe estimation theory, input/output specifications, sample applications, and installation instructions.

  10. Reducing the Uncertainty in Atlantic Meridional Overturning Circulation Projections Using Bayesian Model Averaging

    NASA Astrophysics Data System (ADS)

    Olson, R.; An, S. I.

    2016-12-01

    Atlantic Meridional Overturning Circulation (AMOC) in the ocean might slow down in the future, which can lead to a host of climatic effects in North Atlantic and throughout the world. Despite improvements in climate models and availability of new observations, AMOC projections remain uncertain. Here we constrain CMIP5 multi-model ensemble output with observations of a recently developed AMOC index to provide improved Bayesian predictions of future AMOC. Specifically, we first calculate yearly AMOC index loosely based on Rahmstorf et al. (2015) for years 1880—2004 for both observations, and the CMIP5 models for which relevant output is available. We then assign a weight to each model based on a Bayesian Model Averaging method that accounts for differential model skill in terms of both mean state and variability. We include the temporal autocorrelation in climate model errors, and account for the uncertainty in the parameters of our statistical model. We use the weights to provide future weighted projections of AMOC, and compare them to un-weighted ones. Our projections use bootstrapping to account for uncertainty in internal AMOC variability. We also perform spectral and other statistical analyses to show that AMOC index variability, both in models and in observations, is consistent with red noise. Our results improve on and complement previous work by using a new ensemble of climate models, a different observational metric, and an improved Bayesian weighting method that accounts for differential model skill at reproducing internal variability. Reference: Rahmstorf, S., Box, J. E., Feulner, G., Mann, M. E., Robinson, A., Rutherford, S., & Schaffernicht, E. J. (2015). Exceptional twentieth-century slowdown in atlantic ocean overturning circulation. Nature Climate Change, 5(5), 475-480. doi:10.1038/nclimate2554

  11. Team performance in the Italian NHS: the role of reflexivity.

    PubMed

    Urbini, Flavio; Callea, Antonino; Chirumbolo, Antonio; Talamo, Alessandra; Ingusci, Emanuela; Ciavolino, Enrico

    2018-04-09

    Purpose The purpose of this paper is twofold: first, to investigate the goodness of the input-process-output (IPO) model in order to evaluate work team performance within the Italian National Health Care System (NHS); and second, to test the mediating role of reflexivity as an overarching process factor between input and output. Design/methodology/approach The Italian version of the Aston Team Performance Inventory was administered to 351 employees working in teams in the Italian NHS. Mediation analyses with latent variables were performed via structural equation modeling (SEM); the significance of total, direct, and indirect effect was tested via bootstrapping. Findings Underpinned by the IPO framework, the results of SEM supported mediational hypotheses. First, the application of the IPO model in the Italian NHS showed adequate fit indices, showing that the process mediates the relationship between input and output factors. Second, reflexivity mediated the relationship between input and output, influencing some aspects of team performance. Practical implications The results provide useful information for HRM policies improving process dimensions of the IPO model via the mediating role of reflexivity as a key role in team performance. Originality/value This study is one of a limited number of studies that applied the IPO model in the Italian NHS. Moreover, no study has yet examined the role of reflexivity as a mediator between input and output factors in the IPO model.

  12. Energy storage connection system

    DOEpatents

    Benedict, Eric L.; Borland, Nicholas P.; Dale, Magdelena; Freeman, Belvin; Kite, Kim A.; Petter, Jeffrey K.; Taylor, Brendan F.

    2012-07-03

    A power system for connecting a variable voltage power source, such as a power controller, with a plurality of energy storage devices, at least two of which have a different initial voltage than the output voltage of the variable voltage power source. The power system includes a controller that increases the output voltage of the variable voltage power source. When such output voltage is substantially equal to the initial voltage of a first one of the energy storage devices, the controller sends a signal that causes a switch to connect the variable voltage power source with the first one of the energy storage devices. The controller then causes the output voltage of the variable voltage power source to continue increasing. When the output voltage is substantially equal to the initial voltage of a second one of the energy storage devices, the controller sends a signal that causes a switch to connect the variable voltage power source with the second one of the energy storage devices.

  13. Selection of climate change scenario data for impact modelling.

    PubMed

    Sloth Madsen, M; Maule, C Fox; MacKellar, N; Olesen, J E; Christensen, J Hesselbjerg

    2012-01-01

    Impact models investigating climate change effects on food safety often need detailed climate data. The aim of this study was to select climate change projection data for selected crop phenology and mycotoxin impact models. Using the ENSEMBLES database of climate model output, this study illustrates how the projected climate change signal of important variables as temperature, precipitation and relative humidity depends on the choice of the climate model. Using climate change projections from at least two different climate models is recommended to account for model uncertainty. To make the climate projections suitable for impact analysis at the local scale a weather generator approach was adopted. As the weather generator did not treat all the necessary variables, an ad-hoc statistical method was developed to synthesise realistic values of missing variables. The method is presented in this paper, applied to relative humidity, but it could be adopted to other variables if needed.

  14. Optimized System Identification

    NASA Technical Reports Server (NTRS)

    Juang, Jer-Nan; Longman, Richard W.

    1999-01-01

    In system identification, one usually cares most about finding a model whose outputs are as close as possible to the true system outputs when the same input is applied to both. However, most system identification algorithms do not minimize this output error. Often they minimize model equation error instead, as in typical least-squares fits using a finite-difference model, and it is seen here that this distinction is significant. Here, we develop a set of system identification algorithms that minimize output error for multi-input/multi-output and multi-input/single-output systems. This is done with sequential quadratic programming iterations on the nonlinear least-squares problems, with an eigendecomposition to handle indefinite second partials. This optimization minimizes a nonlinear function of many variables, and hence can converge to local minima. To handle this problem, we start the iterations from the OKID (Observer/Kalman Identification) algorithm result. Not only has OKID proved very effective in practice, it minimizes an output error of an observer which has the property that as the data set gets large, it converges to minimizing the criterion of interest here. Hence, it is a particularly good starting point for the nonlinear iterations here. Examples show that the methods developed here eliminate the bias that is often observed using any system identification methods of either over-estimating or under-estimating the damping of vibration modes in lightly damped structures.

  15. Summary of the key features of seven biomathematical models of human fatigue and performance.

    PubMed

    Mallis, Melissa M; Mejdal, Sig; Nguyen, Tammy T; Dinges, David F

    2004-03-01

    Biomathematical models that quantify the effects of circadian and sleep/wake processes on the regulation of alertness and performance have been developed in an effort to predict the magnitude and timing of fatigue-related responses in a variety of contexts (e.g., transmeridian travel, sustained operations, shift work). This paper summarizes key features of seven biomathematical models reviewed as part of the Fatigue and Performance Modeling Workshop held in Seattle, WA, on June 13-14, 2002. The Workshop was jointly sponsored by the National Aeronautics and Space Administration, U.S. Department of Defense, U.S. Army Medical Research and Materiel Command, Office of Naval Research, Air Force Office of Scientific Research, and U.S. Department of Transportation. An invitation was sent to developers of seven biomathematical models that were commonly cited in scientific literature and/or supported by government funding. On acceptance of the invitation to attend the Workshop, developers were asked to complete a survey of the goals, capabilities, inputs, and outputs of their biomathematical models of alertness and performance. Data from the completed surveys were summarized and juxtaposed to provide a framework for comparing features of the seven models. Survey responses revealed that models varied greatly relative to their reported goals and capabilities. While all modelers reported that circadian factors were key components of their capabilities, they differed markedly with regard to the roles of sleep and work times as input factors for prediction: four of the seven models had work time as their sole input variable(s), while the other three models relied on various aspects of sleep timing for model input. Models also differed relative to outputs: five sought to predict results from laboratory experiments, field, and operational data, while two models were developed without regard to predicting laboratory experimental results. All modelers provided published papers describing their models, with three of the models being proprietary. Although all models appear to have been fundamentally influenced by the two-process model of sleep regulation by Borbély, there is considerable diversity among them in the number and type of input and output variables, and their stated goals and capabilities.

  16. Summary of the key features of seven biomathematical models of human fatigue and performance

    NASA Technical Reports Server (NTRS)

    Mallis, Melissa M.; Mejdal, Sig; Nguyen, Tammy T.; Dinges, David F.

    2004-01-01

    BACKGROUND: Biomathematical models that quantify the effects of circadian and sleep/wake processes on the regulation of alertness and performance have been developed in an effort to predict the magnitude and timing of fatigue-related responses in a variety of contexts (e.g., transmeridian travel, sustained operations, shift work). This paper summarizes key features of seven biomathematical models reviewed as part of the Fatigue and Performance Modeling Workshop held in Seattle, WA, on June 13-14, 2002. The Workshop was jointly sponsored by the National Aeronautics and Space Administration, U.S. Department of Defense, U.S. Army Medical Research and Materiel Command, Office of Naval Research, Air Force Office of Scientific Research, and U.S. Department of Transportation. METHODS: An invitation was sent to developers of seven biomathematical models that were commonly cited in scientific literature and/or supported by government funding. On acceptance of the invitation to attend the Workshop, developers were asked to complete a survey of the goals, capabilities, inputs, and outputs of their biomathematical models of alertness and performance. Data from the completed surveys were summarized and juxtaposed to provide a framework for comparing features of the seven models. RESULTS: Survey responses revealed that models varied greatly relative to their reported goals and capabilities. While all modelers reported that circadian factors were key components of their capabilities, they differed markedly with regard to the roles of sleep and work times as input factors for prediction: four of the seven models had work time as their sole input variable(s), while the other three models relied on various aspects of sleep timing for model input. Models also differed relative to outputs: five sought to predict results from laboratory experiments, field, and operational data, while two models were developed without regard to predicting laboratory experimental results. All modelers provided published papers describing their models, with three of the models being proprietary. CONCLUSIONS: Although all models appear to have been fundamentally influenced by the two-process model of sleep regulation by Borbely, there is considerable diversity among them in the number and type of input and output variables, and their stated goals and capabilities.

  17. Relevance of Regional Hydro-Climatic Projection Data for Hydrodynamics and Water Quality Modelling of the Baltic Sea

    NASA Astrophysics Data System (ADS)

    Goldenberg, R.; Vigouroux, G.; Chen, Y.; Bring, A.; Kalantari, Z.; Prieto, C.; Destouni, G.

    2017-12-01

    The Baltic Sea, located in Northern Europe, is one of the world's largest body of brackish water, enclosed and surrounded by nine different countries. The magnitude of climate change may be particularly large in northern regions, and identifying its impacts on vulnerable inland waters and their runoff and nutrient loading to the Baltic Sea is an important and complex task. Exploration of such hydro-climatic impacts is needed to understand potential future changes in physical, ecological and water quality conditions in the regional coastal and marine waters. In this study, we investigate hydro-climatic changes and impacts on the Baltic Sea by synthesizing multi-model climate projection data from the CORDEX regional downscaling initiative (EURO- and Arctic- CORDEX domains, http://www.cordex.org/). We identify key hydro-climatic variable outputs of these models and assess model performance with regard to their projected temporal and spatial change behavior and impacts on different scales and coastal-marine parts, up to the whole Baltic Sea. Model spreading, robustness and impact implications for the Baltic Sea system are investigated for and through further use in simulations of coastal-marine hydrodynamics and water quality based on these key output variables and their change projections. Climate model robustness in this context is assessed by inter-model spreading analysis and observation data comparisons, while projected change implications are assessed by forcing of linked hydrodynamic and water quality modeling of the Baltic Sea based on relevant hydro-climatic outputs for inland water runoff and waterborne nutrient loading to the Baltic sea, as well as for conditions in the sea itself. This focused synthesis and analysis of hydro-climatically relevant output data of regional climate models facilitates assessment of reliability and uncertainty in projections of driver-impact changes of key importance for Baltic Sea physical, water quality and ecological conditions and their future evolution.

  18. Data-driven process decomposition and robust online distributed modelling for large-scale processes

    NASA Astrophysics Data System (ADS)

    Shu, Zhang; Lijuan, Li; Lijuan, Yao; Shipin, Yang; Tao, Zou

    2018-02-01

    With the increasing attention of networked control, system decomposition and distributed models show significant importance in the implementation of model-based control strategy. In this paper, a data-driven system decomposition and online distributed subsystem modelling algorithm was proposed for large-scale chemical processes. The key controlled variables are first partitioned by affinity propagation clustering algorithm into several clusters. Each cluster can be regarded as a subsystem. Then the inputs of each subsystem are selected by offline canonical correlation analysis between all process variables and its controlled variables. Process decomposition is then realised after the screening of input and output variables. When the system decomposition is finished, the online subsystem modelling can be carried out by recursively block-wise renewing the samples. The proposed algorithm was applied in the Tennessee Eastman process and the validity was verified.

  19. Measurement of unsteady loading and power output variability in a micro wind farm model in a wind tunnel

    NASA Astrophysics Data System (ADS)

    Bossuyt, Juliaan; Howland, Michael F.; Meneveau, Charles; Meyers, Johan

    2017-01-01

    Unsteady loading and spatiotemporal characteristics of power output are measured in a wind tunnel experiment of a microscale wind farm model with 100 porous disk models. The model wind farm is placed in a scaled turbulent boundary layer, and six different layouts, varied from aligned to staggered, are considered. The measurements are done by making use of a specially designed small-scale porous disk model, instrumented with strain gages. The frequency response of the measurements goes up to the natural frequency of the model, which corresponds to a reduced frequency of 0.6 when normalized by the diameter and the mean hub height velocity. The equivalent range of timescales, scaled to field-scale values, is 15 s and longer. The accuracy and limitations of the acquisition technique are documented and verified with hot-wire measurements. The spatiotemporal measurement capabilities of the experimental setup are used to study the cross-correlation in the power output of various porous disk models of wind turbines. A significant correlation is confirmed between streamwise aligned models, while staggered models show an anti-correlation.

  20. Analysis of a utility-interactive wind-photovoltaic hybrid system with battery storage using neural network

    NASA Astrophysics Data System (ADS)

    Giraud, Francois

    1999-10-01

    This dissertation investigates the application of neural network theory to the analysis of a 4-kW Utility-interactive Wind-Photovoltaic System (WPS) with battery storage. The hybrid system comprises a 2.5-kW photovoltaic generator and a 1.5-kW wind turbine. The wind power generator produces power at variable speed and variable frequency (VSVF). The wind energy is converted into dc power by a controlled, tree-phase, full-wave, bridge rectifier. The PV power is maximized by a Maximum Power Point Tracker (MPPT), a dc-to-dc chopper, switching at a frequency of 45 kHz. The whole dc power of both subsystems is stored in the battery bank or conditioned by a single-phase self-commutated inverter to be sold to the utility at a predetermined amount. First, the PV is modeled using Artificial Neural Network (ANN). To reduce model uncertainty, the open-circuit voltage VOC and the short-circuit current ISC of the PV are chosen as model input variables of the ANN. These input variables have the advantage of incorporating the effects of the quantifiable and non-quantifiable environmental variants affecting the PV power. Then, a simplified way to predict accurately the dynamic responses of the grid-linked WPS to gusty winds using a Recurrent Neural Network (RNN) is investigated. The RNN is a single-output feedforward backpropagation network with external feedback, which allows past responses to be fed back to the network input. In the third step, a Radial Basis Functions (RBF) Network is used to analyze the effects of clouds on the Utility-Interactive WPS. Using the irradiance as input signal, the network models the effects of random cloud movement on the output current, the output voltage, the output power of the PV system, as well as the electrical output variables of the grid-linked inverter. Fourthly, using RNN, the combined effects of a random cloud and a wind gusts on the system are analyzed. For short period intervals, the wind speed and the solar radiation are considered as the sole sources of power, whose variations influence the system variables. Since both subsystems have different dynamics, their respective responses are expected to impact differently the whole system behavior. The dispatchability of the battery-supported system as well as its stability and reliability during gusts and/or cloud passage is also discussed. In the fifth step, the goal is to determine to what extent the overall power quality of the grid would be affected by a proliferation of Utility-interactive hybrid system and whether recourse to bulky or individual filtering and voltage controller is necessary. The final stage of the research includes a steady-state analysis of two-year operation (May 96--Apr 98) of the system, with a discussion on system reliability, on any loss of supply probability, and on the effects of the randomness in the wind and solar radiation upon the system design optimization.

  1. Nonlinear predictive control of a boiler-turbine unit: A state-space approach with successive on-line model linearisation and quadratic optimisation.

    PubMed

    Ławryńczuk, Maciej

    2017-03-01

    This paper details development of a Model Predictive Control (MPC) algorithm for a boiler-turbine unit, which is a nonlinear multiple-input multiple-output process. The control objective is to follow set-point changes imposed on two state (output) variables and to satisfy constraints imposed on three inputs and one output. In order to obtain a computationally efficient control scheme, the state-space model is successively linearised on-line for the current operating point and used for prediction. In consequence, the future control policy is easily calculated from a quadratic optimisation problem. For state estimation the extended Kalman filter is used. It is demonstrated that the MPC strategy based on constant linear models does not work satisfactorily for the boiler-turbine unit whereas the discussed algorithm with on-line successive model linearisation gives practically the same trajectories as the truly nonlinear MPC controller with nonlinear optimisation repeated at each sampling instant. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.

  2. Phenomenological model of maize starches expansion by extrusion

    NASA Astrophysics Data System (ADS)

    Kristiawan, M.; Della Valle, G.; Kansou, K.; Ndiaye, A.; Vergnes, B.

    2016-10-01

    During extrusion of starchy products, the molten material is forced through a die so that the sudden abrupt pressure drop causes part of the water to vaporize giving an expanded, cellular structure. The objective of this work was to elaborate a phenomenological model of expansion and couple it with Ludovic® mechanistic model of twin screw extrusion process. From experimental results that cover a wide range of thermomechanical conditions, a concept map of influence relationships between input and output variables was built. It took into account the phenomena of bubbles nucleation, growth, coalescence, shrinkage and setting, in a viscoelastic medium. The input variables were the moisture content MC, melt temperature T, specific mechanical energy SME, shear viscosity η at the die exit, computed by Ludovic®, and the melt storage moduli E'(at T > Tg). The outputs of the model were the macrostructure (volumetric expansion index VEI, anisotropy) and cellular structure (fineness F) of solid foams. Then a general model was established: VEI = α (η/η0)n in which α and n depend on T, MC, SME and E' and the link between anisotropy and fineness was established.

  3. Fuzzy logic modeling of the resistivity parameter and topography features for aquifer assessment in hydrogeological investigation of a crystalline basement complex

    NASA Astrophysics Data System (ADS)

    Adabanija, M. A.; Omidiora, E. O.; Olayinka, A. I.

    2008-05-01

    A linguistic fuzzy logic system (LFLS)-based expert system model has been developed for the assessment of aquifers for the location of productive water boreholes in a crystalline basement complex. The model design employed a multiple input/single output (MISO) approach with geoelectrical parameters and topographic features as input variables and control crisp value as the output. The application of the method to the data acquired in Khondalitic terrain, a basement complex in Vizianagaram District, south India, shows that potential groundwater resource zones that have control output values in the range 0.3295-0.3484 have a yield greater than 6,000 liters per hour (LPH). The range 0.3174-0.3226 gives a yield less than 4,000 LPH. The validation of the control crisp value using data acquired from Oban Massif, a basement complex in southeastern Nigeria, indicates a yield less than 3,000 LPH for control output values in the range 0.2938-0.3065. This validation corroborates the ability of control output values to predict a yield, thereby vindicating the applicability of linguistic fuzzy logic system in siting productive water boreholes in a basement complex.

  4. High Velocity Jet Noise Source Location and Reduction. Task 6. Supplement. Computer Programs: Engineering Correlation (M*S) Jet Noise Prediction Method and Unified Aeroacoustic Prediction Model (M*G*B) for Nozzles of Arbitary Shape.

    DTIC Science & Technology

    1979-03-01

    LSPFIT 112 4.3.5 SLICE 112 4.3.6 CRD 113 4.3.7 OUTPUT 113 4.3.8 SHOCK 115 4.3.9 ATMOS 115 4.3.10 PNLC 115 4.4 Program Usage and Logic 116 4.5 Description...number MAIN, SLICE, OUTPUT F Intermediate variable LSPFIT FAC Intermediate variable PNLC FC Center frequency SLICE FIRSTU Flight velocity Ua MAIN, SLICE...Index CRD J211 Index CRD K Index, also wave number MAIN, SLICE, PNLC KN Surrounding boundary index MAIN KNCAS Case counter MAIN KNK Surrounding

  5. The Geothermal Probabilistic Cost Model with an Application to a Geothermal Reservoir at Heber, California

    NASA Technical Reports Server (NTRS)

    Orren, L. H.; Ziman, G. M.; Jones, S. C.

    1981-01-01

    A financial accounting model that incorporates physical and institutional uncertainties was developed for geothermal projects. Among the uncertainties it can handle are well depth, flow rate, fluid temperature, and permit and construction times. The outputs of the model are cumulative probability distributions of financial measures such as capital cost, levelized cost, and profit. These outputs are well suited for use in an investment decision incorporating risk. The model has the powerful feature that conditional probability distribution can be used to account for correlations among any of the input variables. The model has been applied to a geothermal reservoir at Heber, California, for a 45-MW binary electric plant. Under the assumptions made, the reservoir appears to be economically viable.

  6. Detection of Bi-Directionality in Strain-Gage Balance Calibration Data

    NASA Technical Reports Server (NTRS)

    Ulbrich, Norbert

    2012-01-01

    An indicator variable was developed for both visualization and detection of bi-directionality in wind tunnel strain-gage balance calibration data. First, the calculation of the indicator variable is explained in detail. Then, a criterion is discussed that may be used to decide which gage outputs of a balance have bi- directional behavior. The result of this analysis could be used, for example, to justify the selection of certain absolute value or other even function terms in the regression model of gage outputs whenever the Iterative Method is chosen for the balance calibration data analysis. Calibration data of NASA s MK40 Task balance is analyzed to illustrate both the calculation of the indicator variable and the application of the proposed criterion. Finally, bi directionality characteristics of typical multi piece, hybrid, single piece, and semispan balances are determined and discussed.

  7. Theoretical foundations for environmental Kuznets curve analysis

    NASA Astrophysics Data System (ADS)

    Lantz, Van

    This thesis provides a dynamic theory for analyzing the paths of aggregate output and pollution in a country over time. An infinite horizon, competitive growth-pollution model is explored in order to determine the role that economic scale, production techniques, and pollution regulations play in explaining the inverted U-shaped relationship between output and some forms of pollution (otherwise known as the Environmental Kuznets Curve, or EKC). Results indicate that the output-pollution relationship may follow a strictly increasing, strictly decreasing (but bounded), inverted U-shaped, or some combination of curves. While the 'scale' effect may cause output and pollution to exhibit a monotonic relationship, 'technique' and 'regulation' effects may ultimately cause a de-linking of these two variables. Pollution-minimizing energy regulation policies are also investigated within this framework. It is found that the EKC may be 'flattened' or even eliminated moving from a poorly-regulated economy to one that minimizes pollution. The model is calibrated to the US economy for output (gross national product, GNP) and two pollutants (sulfur dioxide, SO2, and carbon dioxide, CO2) over the period 1900 to 1990. Results indicate that the model replicates the observations quite well. The predominance of 'scale' effects cause aggregate SO2 and CO2 levels to increase with GNP in the early stages of development. Then, in the case of SO 2, 'technique' and 'regulation' effects may be the cause of falling SO2 levels with continued economic growth (establishing the EKC). CO2 continues to monotonically increase as output levels increase over time. The positive relationship may be due to the lack of regulations on this pollutant. If stricter regulation policies were instituted in the two case studies, an improved allocation of resources may result. While GNP may be 2.596 to 20% lower than what has been realized in the US economy (depending on the pollution variable analyzed), individual welfare may increase from lower pollution levels.

  8. Using Propensity Score Matching to Model Retention of Developmental Math Students in Community Colleges in North Carolina

    ERIC Educational Resources Information Center

    Frye, Bobbie Jean

    2014-01-01

    Traditionally, modeling student retention has been done by deriving student success predictors and measuring the likelihood of success based on several background factors such as age, race, gender, and other pre-college variables, also known as the input-output model. Increasingly, however, researchers have used mediating factors of the student…

  9. Assessment of the Suitability of High Resolution Numerical Weather Model Outputs for Hydrological Modelling in Mountainous Cold Regions

    NASA Astrophysics Data System (ADS)

    Rasouli, K.; Pomeroy, J. W.; Hayashi, M.; Fang, X.; Gutmann, E. D.; Li, Y.

    2017-12-01

    The hydrology of mountainous cold regions has a large spatial variability that is driven both by climate variability and near-surface process variability associated with complex terrain and patterns of vegetation, soils, and hydrogeology. There is a need to downscale large-scale atmospheric circulations towards the fine scales that cold regions hydrological processes operate at to assess their spatial variability in complex terrain and quantify uncertainties by comparison to field observations. In this research, three high resolution numerical weather prediction models, namely, the Intermediate Complexity Atmosphere Research (ICAR), Weather Research and Forecasting (WRF), and Global Environmental Multiscale (GEM) models are used to represent spatial and temporal patterns of atmospheric conditions appropriate for hydrological modelling. An area covering high mountains and foothills of the Canadian Rockies was selected to assess and compare high resolution ICAR (1 km × 1 km), WRF (4 km × 4 km), and GEM (2.5 km × 2.5 km) model outputs with station-based meteorological measurements. ICAR with very low computational cost was run with different initial and boundary conditions and with finer spatial resolution, which allowed an assessment of modelling uncertainty and scaling that was difficult with WRF. Results show that ICAR, when compared with WRF and GEM, performs very well in precipitation and air temperature modelling in the Canadian Rockies, while all three models show a fair performance in simulating wind and humidity fields. Representation of local-scale atmospheric dynamics leading to realistic fields of temperature and precipitation by ICAR, WRF, and GEM makes these models suitable for high resolution cold regions hydrological predictions in complex terrain, which is a key factor in estimating water security in western Canada.

  10. Solar Irradiance Variability is Caused by the Magnetic Activity on the Solar Surface.

    PubMed

    Yeo, Kok Leng; Solanki, Sami K; Norris, Charlotte M; Beeck, Benjamin; Unruh, Yvonne C; Krivova, Natalie A

    2017-09-01

    The variation in the radiative output of the Sun, described in terms of solar irradiance, is important to climatology. A common assumption is that solar irradiance variability is driven by its surface magnetism. Verifying this assumption has, however, been hampered by the fact that models of solar irradiance variability based on solar surface magnetism have to be calibrated to observed variability. Making use of realistic three-dimensional magnetohydrodynamic simulations of the solar atmosphere and state-of-the-art solar magnetograms from the Solar Dynamics Observatory, we present a model of total solar irradiance (TSI) that does not require any such calibration. In doing so, the modeled irradiance variability is entirely independent of the observational record. (The absolute level is calibrated to the TSI record from the Total Irradiance Monitor.) The model replicates 95% of the observed variability between April 2010 and July 2016, leaving little scope for alternative drivers of solar irradiance variability at least over the time scales examined (days to years).

  11. Adaptive optimal input design and parametric estimation of nonlinear dynamical systems: application to neuronal modeling.

    PubMed

    Madi, Mahmoud K; Karameh, Fadi N

    2018-05-11

    Many physical models of biological processes including neural systems are characterized by parametric nonlinear dynamical relations between driving inputs, internal states, and measured outputs of the process. Fitting such models using experimental data (data assimilation) is a challenging task since the physical process often operates in a noisy, possibly non-stationary environment; moreover, conducting multiple experiments under controlled and repeatable conditions can be impractical, time consuming or costly. The accuracy of model identification, therefore, is dictated principally by the quality and dynamic richness of collected data over single or few experimental sessions. Accordingly, it is highly desirable to design efficient experiments that, by exciting the physical process with smart inputs, yields fast convergence and increased accuracy of the model. We herein introduce an adaptive framework in which optimal input design is integrated with Square root Cubature Kalman Filters (OID-SCKF) to develop an online estimation procedure that first, converges significantly quicker, thereby permitting model fitting over shorter time windows, and second, enhances model accuracy when only few process outputs are accessible. The methodology is demonstrated on common nonlinear models and on a four-area neural mass model with noisy and limited measurements. Estimation quality (speed and accuracy) is benchmarked against high-performance SCKF-based methods that commonly employ dynamically rich informed inputs for accurate model identification. For all the tested models, simulated single-trial and ensemble averages showed that OID-SCKF exhibited (i) faster convergence of parameter estimates and (ii) lower dependence on inter-trial noise variability with gains up to around 1000 msec in speed and 81% increase in variability for the neural mass models. In terms of accuracy, OID-SCKF estimation was superior, and exhibited considerably less variability across experiments, in identifying model parameters of (a) systems with challenging model inversion dynamics and (b) systems with fewer measurable outputs that directly relate to the underlying processes. Fast and accurate identification therefore carries particular promise for modeling of transient (short-lived) neuronal network dynamics using a spatially under-sampled set of noisy measurements, as is commonly encountered in neural engineering applications. © 2018 IOP Publishing Ltd.

  12. Evaluating soil carbon in global climate models: benchmarking, future projections, and model drivers

    NASA Astrophysics Data System (ADS)

    Todd-Brown, K. E.; Randerson, J. T.; Post, W. M.; Allison, S. D.

    2012-12-01

    The carbon cycle plays a critical role in how the climate responds to anthropogenic carbon dioxide. To evaluate how well Earth system models (ESMs) from the Climate Model Intercomparison Project (CMIP5) represent the carbon cycle, we examined predictions of current soil carbon stocks from the historical simulation. We compared the soil and litter carbon pools from 17 ESMs with data on soil carbon stocks from the Harmonized World Soil Database (HWSD). We also examined soil carbon predictions for 2100 from 16 ESMs from the rcp85 (highest radiative forcing) simulation to investigate the effects of climate change on soil carbon stocks. In both analyses, we used a reduced complexity model to separate the effects of variation in model drivers from the effects of model parameters on soil carbon predictions. Drivers included NPP, soil temperature, and soil moisture, and the reduced complexity model represented one pool of soil carbon as a function of these drivers. The ESMs predicted global soil carbon totals of 500 to 2980 Pg-C, compared to 1260 Pg-C in the HWSD. This 5-fold variation in predicted soil stocks was a consequence of a 3.4-fold variation in NPP inputs and 3.8-fold variability in mean global turnover times. None of the ESMs correlated well with the global distribution of soil carbon in the HWSD (Pearson's correlation <0.40, RMSE 9-22 kg m-2). On a biome level there was a broad range of agreement between the ESMs and the HWSD. Some models predicted HWSD biome totals well (R2=0.91) while others did not (R2=0.23). All of the ESM terrestrial decomposition models are structurally similar with outputs that were well described by a reduced complexity model that included NPP and soil temperature (R2 of 0.73-0.93). However, MPI-ESM-LR outputs showed only a moderate fit to this model (R2=0.51), and CanESM2 outputs were better described by a reduced model that included soil moisture (R2=0.74), We also found a broad range in soil carbon responses to climate change predicted by the ESMs, with changes of -480 to 230 Pg-C from 2005-2100. All models that reported NPP and heterotrophic respiration showed increases in both of these processes over the simulated period. In two of the models, soils switched from a global sink for carbon to a net source. Of the remaining models, half predicted that soils were a sink for carbon throughout the time period and the other half predicted that soils were a carbon source.. Heterotrophic respiration in most of the models from 2005-2100 was well explained by a reduced complexity model dependent on soil carbon, soil temperature, and soil moisture (R2 values >0.74). However, MPI-ESM (R2=0.45) showed only moderate fit to this model. Our analysis shows that soil carbon predictions from ESMs are highly variable, with much of this variability due to model parameterization and variations in driving variables. Furthermore, our reduced complexity models show that most variation in ESM outputs can be explained by a simple one-pool model with a small number of drivers and parameters. Therefore, agreement between soil carbon predictions across models could improve substantially by reconciling differences in driving variables and the parameters that link soil carbon with environmental drivers. However it is unclear if this model agreement would reflect what is truly happening in the Earth system.

  13. Searching for the right scale in catchment hydrology: the effect of soil spatial variability in simulated states and fluxes

    NASA Astrophysics Data System (ADS)

    Baroni, Gabriele; Zink, Matthias; Kumar, Rohini; Samaniego, Luis; Attinger, Sabine

    2017-04-01

    The advances in computer science and the availability of new detailed data-sets have led to a growing number of distributed hydrological models applied to finer and finer grid resolutions for larger and larger catchment areas. It was argued, however, that this trend does not necessarily guarantee better understanding of the hydrological processes or it is even not necessary for specific modelling applications. In the present study, this topic is further discussed in relation to the soil spatial heterogeneity and its effect on simulated hydrological state and fluxes. To this end, three methods are developed and used for the characterization of the soil heterogeneity at different spatial scales. The methods are applied at the soil map of the upper Neckar catchment (Germany), as example. The different soil realizations are assessed regarding their impact on simulated state and fluxes using the distributed hydrological model mHM. The results are analysed by aggregating the model outputs at different spatial scales based on the Representative Elementary Scale concept (RES) proposed by Refsgaard et al. (2016). The analysis is further extended in the present study by aggregating the model output also at different temporal scales. The results show that small scale soil variabilities are not relevant when the integrated hydrological responses are considered e.g., simulated streamflow or average soil moisture over sub-catchments. On the contrary, these small scale soil variabilities strongly affect locally simulated states and fluxes i.e., soil moisture and evapotranspiration simulated at the grid resolution. A clear trade-off is also detected by aggregating the model output by spatial and temporal scales. Despite the scale at which the soil variabilities are (or are not) relevant is not universal, the RES concept provides a simple and effective framework to quantify the predictive capability of distributed models and to identify the need for further model improvements e.g., finer resolution input. For this reason, the integration in this analysis of all the relevant input factors (e.g., precipitation, vegetation, geology) could provide a strong support for the definition of the right scale for each specific model application. In this context, however, the main challenge for a proper model assessment will be the correct characterization of the spatio- temporal variability of each input factor. Refsgaard, J.C., Højberg, A.L., He, X., Hansen, A.L., Rasmussen, S.H., Stisen, S., 2016. Where are the limits of model predictive capabilities?: Representative Elementary Scale - RES. Hydrol. Process. doi:10.1002/hyp.11029

  14. Current and future groundwater recharge in West Africa as estimated from a range of coupled climate model outputs

    NASA Astrophysics Data System (ADS)

    Verhoef, Anne; Cook, Peter; Black, Emily; Macdonald, David; Sorensen, James

    2017-04-01

    This research addresses the terrestrial water balance for West Africa. Emphasis is on the prediction of groundwater recharge and how this may change in the future, which has relevance to the management of surface and groundwater resources. The study was conducted as part of the BRAVE research project, "Building understanding of climate variability into planning of groundwater supplies from low storage aquifers in Africa - Second Phase", funded under the NERC/DFID/ESRC Programme, Unlocking the Potential of Groundwater for the Poor (UPGro). We used model output data of water balance components (precipitation, surface and subsurface run-off, evapotranspiration and soil moisture content) from ERA-Interim/ERA-LAND reanalysis, CMIP5, and high resolution model runs with HadGEM3 (UPSCALE; Mizielinski et al., 2014), for current and future time-periods. Water balance components varied widely between the different models; variation was particularly large for sub-surface runoff (defined as drainage from the bottom-most soil layer of each model). In-situ data for groundwater recharge obtained from the peer-reviewed literature were compared with the model outputs. Separate off-line model sensitivity studies with key land surface models were performed to gain understanding of the reasons behind the model differences. These analyses were centered on vegetation, and soil hydraulic parameters. The modelled current and future recharge time series that had the greatest degree of confidence were used to examine the spatiotemporal variability in groundwater storage. Finally, the implications for water supply planning were assessed. Mizielinski, M.S. et al., 2014. High-resolution global climate modelling: the UPSCALE project, a large-simulation campaign. Geoscientific Model Development, 7(4), pp.1629-1640.

  15. Substorm Electric And Magnetic Fields In The Earth's Magnetotail: Observations Compared To The WINDMI Model

    NASA Astrophysics Data System (ADS)

    Srinivas, P. G.; Spencer, E. A.; Vadepu, S. K.; Horton, W., Jr.

    2017-12-01

    We compare satellite observations of substorm electric fields and magnetic fields to the output of a low dimensional nonlinear physics model of the nightside magnetosphere called WINDMI. The electric and magnetic field satellite data are used to calculate the E X B drift, which is one of the intermediate variables of the WINDMI model. The model uses solar wind and IMF measurements from the ACE spacecraft as input into a system of 8 nonlinear ordinary differential equations. The state variables of the differential equations represent the energy stored in the geomagnetic tail, central plasma sheet, ring current and field aligned currents. The output from the model is the ground based geomagnetic westward auroral electrojet (AL) index, and the Dst index.Using ACE solar wind data, IMF data and SuperMAG identification of substorm onset times up to December 2015, we constrain the WINDMI model to trigger substorm events, and compare the model intermediate variables to THEMIS and GEOTAIL satellite data in the magnetotail. By forcing the model to be consistent with satellite electric and magnetic field observations, we are able to track the magnetotail energy dynamics, the field aligned current contributions, energy injections into the ring current, and ensure that they are within allowable limts. In addition we are able to constrain the physical parameters of the model, in particular the lobe inductance, the plasma sheet capacitance, and the resistive and conductive parameters in the plasma sheet and ionosphere.

  16. A Reliability Estimation in Modeling Watershed Runoff With Uncertainties

    NASA Astrophysics Data System (ADS)

    Melching, Charles S.; Yen, Ben Chie; Wenzel, Harry G., Jr.

    1990-10-01

    The reliability of simulation results produced by watershed runoff models is a function of uncertainties in nature, data, model parameters, and model structure. A framework is presented here for using a reliability analysis method (such as first-order second-moment techniques or Monte Carlo simulation) to evaluate the combined effect of the uncertainties on the reliability of output hydrographs from hydrologic models. For a given event the prediction reliability can be expressed in terms of the probability distribution of the estimated hydrologic variable. The peak discharge probability for a watershed in Illinois using the HEC-1 watershed model is given as an example. The study of the reliability of predictions from watershed models provides useful information on the stochastic nature of output from deterministic models subject to uncertainties and identifies the relative contribution of the various uncertainties to unreliability of model predictions.

  17. Integrated Approach to Inform the New York City Water Supply System Coupling SAR Remote Sensing Observations and the SWAT Watershed Model

    NASA Astrophysics Data System (ADS)

    Tesser, D.; Hoang, L.; McDonald, K. C.

    2017-12-01

    Efforts to improve municipal water supply systems increasingly rely on an ability to elucidate variables that drive hydrologic dynamics within large watersheds. However, fundamental model variables such as precipitation, soil moisture, evapotranspiration, and soil freeze/thaw state remain difficult to measure empirically across large, heterogeneous watersheds. Satellite remote sensing presents a method to validate these spatially and temporally dynamic variables as well as better inform the watershed models that monitor the water supply for many of the planet's most populous urban centers. PALSAR 2 L-band, Sentinel 1 C-band, and SMAP L-band scenes covering the Cannonsville branch of the New York City (NYC) water supply watershed were obtained for the period of March 2015 - October 2017. The SAR data provides information on soil moisture, free/thaw state, seasonal surface inundation, and variable source areas within the study site. Integrating the remote sensing products with watershed model outputs and ground survey data improves the representation of related processes in the Soil and Water Assessment Tool (SWAT) utilized to monitor the NYC water supply. PALSAR 2 supports accurate mapping of the extent of variable source areas while Sentinel 1 presents a method to model the timing and magnitude of snowmelt runoff events. SMAP Active Radar soil moisture product directly validates SWAT outputs at the subbasin level. This blended approach verifies the distribution of soil wetness classes within the watershed that delineate Hydrologic Response Units (HRUs) in the modified SWAT-Hillslope. The research expands the ability to model the NYC water supply source beyond a subset of the watershed while also providing high resolution information across a larger spatial scale. The global availability of these remote sensing products provides a method to capture fundamental hydrology variables in regions where current modeling efforts and in situ data remain limited.

  18. A data mining framework for time series estimation.

    PubMed

    Hu, Xiao; Xu, Peng; Wu, Shaozhi; Asgari, Shadnaz; Bergsneider, Marvin

    2010-04-01

    Time series estimation techniques are usually employed in biomedical research to derive variables less accessible from a set of related and more accessible variables. These techniques are traditionally built from systems modeling approaches including simulation, blind decovolution, and state estimation. In this work, we define target time series (TTS) and its related time series (RTS) as the output and input of a time series estimation process, respectively. We then propose a novel data mining framework for time series estimation when TTS and RTS represent different sets of observed variables from the same dynamic system. This is made possible by mining a database of instances of TTS, its simultaneously recorded RTS, and the input/output dynamic models between them. The key mining strategy is to formulate a mapping function for each TTS-RTS pair in the database that translates a feature vector extracted from RTS to the dissimilarity between true TTS and its estimate from the dynamic model associated with the same TTS-RTS pair. At run time, a feature vector is extracted from an inquiry RTS and supplied to the mapping function associated with each TTS-RTS pair to calculate a dissimilarity measure. An optimal TTS-RTS pair is then selected by analyzing these dissimilarity measures. The associated input/output model of the selected TTS-RTS pair is then used to simulate the TTS given the inquiry RTS as an input. An exemplary implementation was built to address a biomedical problem of noninvasive intracranial pressure assessment. The performance of the proposed method was superior to that of a simple training-free approach of finding the optimal TTS-RTS pair by a conventional similarity-based search on RTS features. 2009 Elsevier Inc. All rights reserved.

  19. BOREAS RSS-8 BIOME-BGC Model Simulations at Tower Flux Sites in 1994

    NASA Technical Reports Server (NTRS)

    Hall, Forrest G. (Editor); Nickeson, Jaime (Editor); Kimball, John

    2000-01-01

    BIOME-BGC is a general ecosystem process model designed to simulate biogeochemical and hydrologic processes across multiple scales (Running and Hunt, 1993). In this investigation, BIOME-BGC was used to estimate daily water and carbon budgets for the BOREAS tower flux sites for 1994. Carbon variables estimated by the model include gross primary production (i.e., net photosynthesis), maintenance and heterotrophic respiration, net primary production, and net ecosystem carbon exchange. Hydrologic variables estimated by the model include snowcover, evaporation, transpiration, evapotranspiration, soil moisture, and outflow. The information provided by the investigation includes input initialization and model output files for various sites in tabular ASCII format.

  20. [Air pollution in an urban area nearby the Rome-Ciampino city airport].

    PubMed

    Di Menno di Bucchianico, Alessandro; Cattani, Giorgio; Gaeta, Alessandra; Caricchia, Anna Maria; Troiano, Francesco; Sozzi, Roberto; Bolignano, Andrea; Sacco, Fabrizio; Damizia, Sesto; Barberini, Silvia; Caleprico, Roberta; Fabozzi, Tina; Ancona, Carla; Ancona, Laura; Cesaroni, Giulia; Forastiere, Francesco; Gobbi, Gian Paolo; Costabile, Francesca; Angelini, Federico; Barnaba, Francesca; Inglessis, Marco; Tancredi, Francesco; Palumbo, Lorenzo; Fontana, Luca; Bergamaschi, Antonio; Iavicoli, Ivo

    2014-01-01

    to assess air pollution spatial and temporal variability in the urban area nearby the Ciampino International Airport (Rome) and to investigate the airport-related emissions contribute. the study domain was a 64 km2 area around the airport. Two fifteen-day monitoring campaigns (late spring, winter) were carried out. Results were evaluated using several runs outputs of an airport-related sources Lagrangian particle model and a photochemical model (the Flexible Air quality Regional Model, FARM). both standard and high time resolution air pollutant concentrations measurements: CO, NO, NO2, C6H6, mass and number concentration of several PM fractions. 46 fixed points (spread over the study area) of NO2 and volatile organic compounds concentrations (fifteen days averages); deterministic models outputs. standard time resolution measurements, as well as model outputs, showed the airport contribution to air pollution levels being little compared to the main source in the area (i.e. vehicular traffic). However, using high time resolution measurements, peaks of particles associated with aircraft takeoff (total number concentration and soot mass concentration), and landing (coarse mass concentration) were observed, when the site measurement was downwind to the runway. the frequently observed transient spikes associated with aircraft movements could lead to a not negligible contribute to ultrafine, soot and coarse particles exposure of people living around the airport. Such contribute and its spatial and temporal variability should be investigated when assessing the airports air quality impact.

  1. Pollen-Based Inverse Modelling versus Data Assimilation, two Different Ways to Consider Priors in Paleoclimate Reconstruction: Application to the Mediterranean Holocene

    NASA Astrophysics Data System (ADS)

    Guiot, J.

    2017-12-01

    In the last decades, climate reconstruction has much evolved. A important step has been passed with inverse modelling approach proposed by Guiot et al (2000). It is based on appropriate algorithms in the frame of the Bayesian statistical theory to estimate the inputs of a vegetation model when the outputs are known. The inputs are the climate variables that we want to reconstruct and the outputs are vegetation characteristics, which can be compared to pollen data. The Bayesian framework consists in defining prior distribution of the wanted climate variables and in using data and a model to estimate posterior probability distribution. The main interest of the method is the possibility to set different values of exogenous variables as the atmospheric CO2 concentration. The fact that the CO2 concentration has an influence on the photosynthesis and that its level is different between the calibration period (the 20th century) and the past, there is an important risk of biases on the reconstructions. After that initial paper, numerous papers have been published showing the interested of the method. In that approach, the prior distribution is fixed by educated guess of by using complementary information on the expected climate (other proxies or other records). In the data assimilation approach, the prior distribution is provided by a climate model. The use of a vegetation model together with proxy data, enable to calculate posterior distributions. Data assimilation consists in constraining climate model to reproduce estimates relatively close to the data, taking into account the respective errors of the data and of the climate model (Dubinkina et al, 2011). We compare both approaches using pollen data for the Holocene from the Mediterranean. Pollen data have been extracted from the European Pollen Database. The earth system model, LOVECLIM, is run to simulate Holocene climate with appropriate boundary conditions and realistic forcing. Simulated climate variables (temperature, precipitation and sunshine) are used as the forcing parameters to a vegetation model, BIOME4, that calculates the equilibrium distribution of vegetation types and associated phenological, hydrological and biogeochemical properties. BIOME4 output, constrained with the pollen observations, are off-line coupled using a particle filter technique.

  2. Use of regional climate model output for hydrologic simulations

    USGS Publications Warehouse

    Hay, L.E.; Clark, M.P.; Wilby, R.L.; Gutowski, W.J.; Leavesley, G.H.; Pan, Z.; Arritt, R.W.; Takle, E.S.

    2002-01-01

    Daily precipitation and maximum and minimum temperature time series from a regional climate model (RegCM2) configured using the continental United States as a domain and run on a 52-km (approximately) spatial resolution were used as input to a distributed hydrologic model for one rainfall-dominated basin (Alapaha River at Statenville, Georgia) and three snowmelt-dominated basins (Animas River at Durango. Colorado; east fork of the Carson River near Gardnerville, Nevada: and Cle Elum River near Roslyn, Washington). For comparison purposes, spatially averaged daily datasets of precipitation and maximum and minimum temperature were developed from measured data for each basin. These datasets included precipitation and temperature data for all stations (hereafter, All-Sta) located within the area of the RegCM2 output used for each basin, but excluded station data used to calibrate the hydrologic model. Both the RegCM2 output and All-Sta data capture the gross aspects of the seasonal cycles of precipitation and temperature. However, in all four basins, the RegCM2- and All-Sta-based simulations of runoff show little skill on a daily basis [Nash-Sutcliffe (NS) values range from 0.05 to 0.37 for RegCM2 and -0.08 to 0.65 for All-Sta]. When the precipitation and temperature biases are corrected in the RegCM2 output and All-Sta data (Bias-RegCM2 and Bias-All, respectively) the accuracy of the daily runoff simulations improve dramatically for the snowmelt-dominated basins (NS values range from 0.41 to 0.66 for RegCM2 and 0.60 to 0.76 for All-Sta). In the rainfall-dominated basin, runoff simulations based on the Bias-RegCM2 output show no skill (NS value of 0.09) whereas Bias-All simulated runoff improves (NS value improved from - 0.08 to 0.72). These results indicate that measured data at the coarse resolution of the RegCM2 output can be made appropriate for basin-scale modeling through bias correction (essentially a magnitude correction). However, RegCM2 output, even when bias corrected, does not contain the day-to-day variability present in the All-Sta dataset that is necessary for basin-scale modeling. Future work is warranted to identify the causes for systematic biases in RegCM2 simulations, develop methods to remove the biases, and improve RegCM2 simulations of daily variability in local climate.

  3. Exploring the potential of machine learning to break deadlock in convection parameterization

    NASA Astrophysics Data System (ADS)

    Pritchard, M. S.; Gentine, P.

    2017-12-01

    We explore the potential of modern machine learning tools (via TensorFlow) to replace parameterization of deep convection in climate models. Our strategy begins by generating a large ( 1 Tb) training dataset from time-step level (30-min) output harvested from a one-year integration of a zonally symmetric, uniform-SST aquaplanet integration of the SuperParameterized Community Atmosphere Model (SPCAM). We harvest the inputs and outputs connecting each of SPCAM's 8,192 embedded cloud-resolving model (CRM) arrays to its host climate model's arterial thermodynamic state variables to afford 143M independent training instances. We demonstrate that this dataset is sufficiently large to induce preliminary convergence for neural network prediction of desired outputs of SP, i.e. CRM-mean convective heating and moistening profiles. Sensitivity of the machine learning convergence to the nuances of the TensorFlow implementation are discussed, as well as results from pilot tests from the neural network operating inline within the SPCAM as a replacement to the (super)parameterization of convection.

  4. Self-Learning Variable Structure Control for a Class of Sensor-Actuator Systems

    PubMed Central

    Chen, Sanfeng; Li, Shuai; Liu, Bo; Lou, Yuesheng; Liang, Yongsheng

    2012-01-01

    Variable structure strategy is widely used for the control of sensor-actuator systems modeled by Euler-Lagrange equations. However, accurate knowledge on the model structure and model parameters are often required for the control design. In this paper, we consider model-free variable structure control of a class of sensor-actuator systems, where only the online input and output of the system are available while the mathematic model of the system is unknown. The problem is formulated from an optimal control perspective and the implicit form of the control law are analytically obtained by using the principle of optimality. The control law and the optimal cost function are explicitly solved iteratively. Simulations demonstrate the effectiveness and the efficiency of the proposed method. PMID:22778633

  5. Measurement problem and local hidden variables with entangled photons

    NASA Astrophysics Data System (ADS)

    Muchowski, Eugen

    2017-12-01

    It is shown that there is no remote action with polarization measurements of photons in singlet state. A model is presented introducing a hidden parameter which determines the polarizer output. This model is able to explain the polarization measurement results with entangled photons. It is not ruled out by Bell's Theorem.

  6. An econometric model of the hardwood lumber market

    Treesearch

    William G. Luppold

    1982-01-01

    A recursive econometric model with causal flow originating from the demand relationship is used to analyze the effects of exogenous variables on quantity and price of hardwood lumber. Wage rates, interest rates, stumpage price, lumber exports, and price of lumber demanders' output were the major factors influencing quantities demanded and supplied and hardwood...

  7. High dimensional model representation method for fuzzy structural dynamics

    NASA Astrophysics Data System (ADS)

    Adhikari, S.; Chowdhury, R.; Friswell, M. I.

    2011-03-01

    Uncertainty propagation in multi-parameter complex structures possess significant computational challenges. This paper investigates the possibility of using the High Dimensional Model Representation (HDMR) approach when uncertain system parameters are modeled using fuzzy variables. In particular, the application of HDMR is proposed for fuzzy finite element analysis of linear dynamical systems. The HDMR expansion is an efficient formulation for high-dimensional mapping in complex systems if the higher order variable correlations are weak, thereby permitting the input-output relationship behavior to be captured by the terms of low-order. The computational effort to determine the expansion functions using the α-cut method scales polynomically with the number of variables rather than exponentially. This logic is based on the fundamental assumption underlying the HDMR representation that only low-order correlations among the input variables are likely to have significant impacts upon the outputs for most high-dimensional complex systems. The proposed method is first illustrated for multi-parameter nonlinear mathematical test functions with fuzzy variables. The method is then integrated with a commercial finite element software (ADINA). Modal analysis of a simplified aircraft wing with fuzzy parameters has been used to illustrate the generality of the proposed approach. In the numerical examples, triangular membership functions have been used and the results have been validated against direct Monte Carlo simulations. It is shown that using the proposed HDMR approach, the number of finite element function calls can be reduced without significantly compromising the accuracy.

  8. Hidden Connections between Regression Models of Strain-Gage Balance Calibration Data

    NASA Technical Reports Server (NTRS)

    Ulbrich, Norbert

    2013-01-01

    Hidden connections between regression models of wind tunnel strain-gage balance calibration data are investigated. These connections become visible whenever balance calibration data is supplied in its design format and both the Iterative and Non-Iterative Method are used to process the data. First, it is shown how the regression coefficients of the fitted balance loads of a force balance can be approximated by using the corresponding regression coefficients of the fitted strain-gage outputs. Then, data from the manual calibration of the Ames MK40 six-component force balance is chosen to illustrate how estimates of the regression coefficients of the fitted balance loads can be obtained from the regression coefficients of the fitted strain-gage outputs. The study illustrates that load predictions obtained by applying the Iterative or the Non-Iterative Method originate from two related regression solutions of the balance calibration data as long as balance loads are given in the design format of the balance, gage outputs behave highly linear, strict statistical quality metrics are used to assess regression models of the data, and regression model term combinations of the fitted loads and gage outputs can be obtained by a simple variable exchange.

  9. Geospatial modeling approach to monument construction using Michigan from A.D. 1000-1600 as a case study.

    PubMed

    Howey, Meghan C L; Palace, Michael W; McMichael, Crystal H

    2016-07-05

    Building monuments was one way that past societies reconfigured their landscapes in response to shifting social and ecological factors. Understanding the connections between those factors and monument construction is critical, especially when multiple types of monuments were constructed across the same landscape. Geospatial technologies enable past cultural activities and environmental variables to be examined together at large scales. Many geospatial modeling approaches, however, are not designed for presence-only (occurrence) data, which can be limiting given that many archaeological site records are presence only. We use maximum entropy modeling (MaxEnt), which works with presence-only data, to predict the distribution of monuments across large landscapes, and we analyze MaxEnt output to quantify the contributions of spatioenvironmental variables to predicted distributions. We apply our approach to co-occurring Late Precontact (ca. A.D. 1000-1600) monuments in Michigan: (i) mounds and (ii) earthwork enclosures. Many of these features have been destroyed by modern development, and therefore, we conducted archival research to develop our monument occurrence database. We modeled each monument type separately using the same input variables. Analyzing variable contribution to MaxEnt output, we show that mound and enclosure landscape suitability was driven by contrasting variables. Proximity to inland lakes was key to mound placement, and proximity to rivers was key to sacred enclosures. This juxtaposition suggests that mounds met local needs for resource procurement success, whereas enclosures filled broader regional needs for intergroup exchange and shared ritual. Our study shows how MaxEnt can be used to develop sophisticated models of past cultural processes, including monument building, with imperfect, limited, presence-only data.

  10. Modeling the milling tool wear by using an evolutionary SVM-based model from milling runs experimental data

    NASA Astrophysics Data System (ADS)

    Nieto, Paulino José García; García-Gonzalo, Esperanza; Vilán, José Antonio Vilán; Robleda, Abraham Segade

    2015-12-01

    The main aim of this research work is to build a new practical hybrid regression model to predict the milling tool wear in a regular cut as well as entry cut and exit cut of a milling tool. The model was based on Particle Swarm Optimization (PSO) in combination with support vector machines (SVMs). This optimization mechanism involved kernel parameter setting in the SVM training procedure, which significantly influences the regression accuracy. Bearing this in mind, a PSO-SVM-based model, which is based on the statistical learning theory, was successfully used here to predict the milling tool flank wear (output variable) as a function of the following input variables: the time duration of experiment, depth of cut, feed, type of material, etc. To accomplish the objective of this study, the experimental dataset represents experiments from runs on a milling machine under various operating conditions. In this way, data sampled by three different types of sensors (acoustic emission sensor, vibration sensor and current sensor) were acquired at several positions. A second aim is to determine the factors with the greatest bearing on the milling tool flank wear with a view to proposing milling machine's improvements. Firstly, this hybrid PSO-SVM-based regression model captures the main perception of statistical learning theory in order to obtain a good prediction of the dependence among the flank wear (output variable) and input variables (time, depth of cut, feed, etc.). Indeed, regression with optimal hyperparameters was performed and a determination coefficient of 0.95 was obtained. The agreement of this model with experimental data confirmed its good performance. Secondly, the main advantages of this PSO-SVM-based model are its capacity to produce a simple, easy-to-interpret model, its ability to estimate the contributions of the input variables, and its computational efficiency. Finally, the main conclusions of this study are exposed.

  11. School Mathematics Study Group, Unit Number Two. Chapter 3 - Informal Algorithms and Flow Charts. Chapter 4 - Applications and Mathematics Models.

    ERIC Educational Resources Information Center

    Stanford Univ., CA. School Mathematics Study Group.

    This is the second unit of a 15-unit School Mathematics Study Group (SMSG) mathematics text for high school students. Topics presented in the first chapter (Informal Algorithms and Flow Charts) include: changing a flat tire; algorithms, flow charts, and computers; assignment and variables; input and output; using a variable as a counter; decisions…

  12. Metamodeling and mapping of nitrate flux in the unsaturated zone and groundwater, Wisconsin, USA

    USGS Publications Warehouse

    Nolan, Bernard T.; Green, Christopher T.; Juckem, Paul F.; Liao, Lixia; Reddy, James E.

    2018-01-01

    Nitrate contamination of groundwater in agricultural areas poses a major challenge to the sustainability of water resources. Aquifer vulnerability models are useful tools that can help resource managers identify areas of concern, but quantifying nitrogen (N) inputs in such models is challenging, especially at large spatial scales. We sought to improve regional nitrate (NO3−) input functions by characterizing unsaturated zone NO3− transport to groundwater through use of surrogate, machine-learning metamodels of a process-based N flux model. The metamodels used boosted regression trees (BRTs) to relate mappable landscape variables to parameters and outputs of a previous “vertical flux method” (VFM) applied at sampled wells in the Fox, Wolf, and Peshtigo (FWP) river basins in northeastern Wisconsin. In this context, the metamodels upscaled the VFM results throughout the region, and the VFM parameters and outputs are the metamodel response variables. The study area encompassed the domain of a detailed numerical model that provided additional predictor variables, including groundwater recharge, to the metamodels. We used a statistical learning framework to test a range of model complexities to identify suitable hyperparameters of the six BRT metamodels corresponding to each response variable of interest: NO3− source concentration factor (which determines the local NO3− input concentration); unsaturated zone travel time; NO3− concentration at the water table in 1980, 2000, and 2020 (three separate metamodels); and NO3− “extinction depth”, the eventual steady state depth of the NO3−front. The final metamodels were trained to 129 wells within the active numerical flow model area, and considered 58 mappable predictor variables compiled in a geographic information system (GIS). These metamodels had training and cross-validation testing R2 values of 0.52 – 0.86 and 0.22 – 0.38, respectively, and predictions were compiled as maps of the above response variables. Testing performance was reasonable, considering that we limited the metamodel predictor variables to mappable factors as opposed to using all available VFM input variables. Relationships between metamodel predictor variables and mapped outputs were generally consistent with expectations, e.g. with greater source concentrations and NO3− at the groundwater table in areas of intensive crop use and well drained soils. Shorter unsaturated zone travel times in poorly drained areas likely indicated preferential flow through clay soils, and a tendency for fine grained deposits to collocate with areas of shallower water table. Numerical estimates of groundwater recharge were important in the metamodels and may have been a proxy for N input and redox conditions in the northern FWP, which had shallow predicted NO3− extinction depth. The metamodel results provide proof-of-concept for regional characterization of unsaturated zone NO3− transport processes in a statistical framework based on readily mappable GIS input variables.

  13. Analysis of fMRI data using noise-diffusion network models: a new covariance-coding perspective.

    PubMed

    Gilson, Matthieu

    2018-04-01

    Since the middle of the 1990s, studies of resting-state fMRI/BOLD data have explored the correlation patterns of activity across the whole brain, which is referred to as functional connectivity (FC). Among the many methods that have been developed to interpret FC, a recently proposed model-based approach describes the propagation of fluctuating BOLD activity within the recurrently connected brain network by inferring the effective connectivity (EC). In this model, EC quantifies the strengths of directional interactions between brain regions, viewed from the proxy of BOLD activity. In addition, the tuning procedure for the model provides estimates for the local variability (input variances) to explain how the observed FC is generated. Generalizing, the network dynamics can be studied in the context of an input-output mapping-determined by EC-for the second-order statistics of fluctuating nodal activities. The present paper focuses on the following detection paradigm: observing output covariances, how discriminative is the (estimated) network model with respect to various input covariance patterns? An application with the model fitted to experimental fMRI data-movie viewing versus resting state-illustrates that changes in local variability and changes in brain coordination go hand in hand.

  14. Modelling of Cosmic Molecular Masers: Introduction to a Computation Cookbook

    NASA Astrophysics Data System (ADS)

    Sobolev, Andrej M.; Gray, Malcolm D.

    2012-07-01

    Numerical modeling of molecular masers is necessary in order to understand their nature and diagnostic capabilities. Model construction requires elaboration of a basic description which allows computation, that is a definition of the parameter space and basic physical relations. Usually, this requires additional thorough studies that can consist of the following stages/parts: relevant molecular spectroscopy and collisional rate coefficients; conditions in and around the masing region (that part of space where population inversion is realized); geometry and size of the masing region (including the question of whether maser spots are discrete clumps or line-of-sight correlations in a much bigger region) and propagation of maser radiation. Output of the maser computer modeling can have the following forms: exploration of parameter space (where do inversions appear in particular maser transitions and their combinations, which parameter values describe a `typical' source, and so on); modeling of individual sources (line flux ratios, spectra, images and their variability); analysis of the pumping mechanism; predictions (new maser transitions, correlations in variability of different maser transitions, and the like). Described schemes (constituents and hierarchy) of the model input and output are based mainly on the experience of the authors and make no claim to be dogmatic.

  15. Identification and modeling of the electrohydraulic systems of the main gun of a main battle tank

    NASA Astrophysics Data System (ADS)

    Campos, Luiz C. A.; Menegaldo, Luciano L.

    2012-11-01

    The black-box mathematical models of the electrohydraulic systems responsible for driving the two degrees of freedom (elevation and azimuth) of the main gun of a main battle tank (MBT) were identified. Such systems respond to gunner's inputs while acquiring and tracking targets. Identification experiments were designed to collect simultaneous data from two inertial measurement units (IMU) installed at the gunner's handle (input) and at the center of rotation of the turret (output), for the identification of the azimuth system. For the elevation system, IMUs were installed at the gunner's handle (input) and at the breech of the gun (output). Linear accelerations and angular rates were collected for both input and output. Several black-box model architectures were investigated. As a result, nonlinear autoregressive with exogenous variables (NARX) second order model and nonlinear finite impulse response (NFIR) fourth order model, demonstrate to best fit the experimental data, with low computational costs. The derived models are being employed in a broader research, aiming to reproduce such systems in a laboratory virtual main gun simulator.

  16. Stochastic Simulation Tool for Aerospace Structural Analysis

    NASA Technical Reports Server (NTRS)

    Knight, Norman F.; Moore, David F.

    2006-01-01

    Stochastic simulation refers to incorporating the effects of design tolerances and uncertainties into the design analysis model and then determining their influence on the design. A high-level evaluation of one such stochastic simulation tool, the MSC.Robust Design tool by MSC.Software Corporation, has been conducted. This stochastic simulation tool provides structural analysts with a tool to interrogate their structural design based on their mathematical description of the design problem using finite element analysis methods. This tool leverages the analyst's prior investment in finite element model development of a particular design. The original finite element model is treated as the baseline structural analysis model for the stochastic simulations that are to be performed. A Monte Carlo approach is used by MSC.Robust Design to determine the effects of scatter in design input variables on response output parameters. The tool was not designed to provide a probabilistic assessment, but to assist engineers in understanding cause and effect. It is driven by a graphical-user interface and retains the engineer-in-the-loop strategy for design evaluation and improvement. The application problem for the evaluation is chosen to be a two-dimensional shell finite element model of a Space Shuttle wing leading-edge panel under re-entry aerodynamic loading. MSC.Robust Design adds value to the analysis effort by rapidly being able to identify design input variables whose variability causes the most influence in response output parameters.

  17. Towards simplification of hydrologic modeling: Identification of dominant processes

    USGS Publications Warehouse

    Markstrom, Steven; Hay, Lauren E.; Clark, Martyn P.

    2016-01-01

    The Precipitation–Runoff Modeling System (PRMS), a distributed-parameter hydrologic model, has been applied to the conterminous US (CONUS). Parameter sensitivity analysis was used to identify: (1) the sensitive input parameters and (2) particular model output variables that could be associated with the dominant hydrologic process(es). Sensitivity values of 35 PRMS calibration parameters were computed using the Fourier amplitude sensitivity test procedure on 110 000 independent hydrologically based spatial modeling units covering the CONUS and then summarized to process (snowmelt, surface runoff, infiltration, soil moisture, evapotranspiration, interflow, baseflow, and runoff) and model performance statistic (mean, coefficient of variation, and autoregressive lag 1). Identified parameters and processes provide insight into model performance at the location of each unit and allow the modeler to identify the most dominant process on the basis of which processes are associated with the most sensitive parameters. The results of this study indicate that: (1) the choice of performance statistic and output variables has a strong influence on parameter sensitivity, (2) the apparent model complexity to the modeler can be reduced by focusing on those processes that are associated with sensitive parameters and disregarding those that are not, (3) different processes require different numbers of parameters for simulation, and (4) some sensitive parameters influence only one hydrologic process, while others may influence many

  18. Variability of Short-term Precipitation and Runoff in Small Czech Drainage Basins

    NASA Astrophysics Data System (ADS)

    Kavka, Petr; Strouhal, Luděk; Landa, Martin; Neuman, Martin; Kožant, Petr; Muller, Miloslav

    2016-04-01

    The aim of this contribution is to introduce the recently started three year's project named "Variability of Short-term Precipitation and Runoff in Small Czech Drainage Basins and its Influence on Water Resources Management". Its main goal is to elaborate a methodology and online utility for deriving short-term design precipitation series, which could be utilized by a broad community of scientists, state administration as well as design planners. The outcomes of the project will especially be helpful in modelling hydrological or soil erosion problems when designing common measures for promoting water retention or landscape drainage systems in or out of the scope of Landscape consolidation projects. The precipitation scenarios will be derived from 10 years of observed data from point gauging stations and radar data. The analysis is focused on events' return period, rainfall total amount, internal intensity distribution and spatial distribution over the area of Czech Republic. The methodology will account for the choice of the simulation model. Several representatives of practically oriented models will be tested for the output sensitivity to selected precipitation scenario comparing to variability connected with other inputs uncertainty. The variability of the outputs will also be assessed in the context of economic impacts in design of landscape water structures or mitigation measures. The research was supported by the grant QJ1520265 of the Czech Ministry of Agriculture, using data provided by the Czech Hydrometeorological Institute.

  19. Uncertainty Quantification of Turbulence Model Closure Coefficients for Transonic Wall-Bounded Flows

    NASA Technical Reports Server (NTRS)

    Schaefer, John; West, Thomas; Hosder, Serhat; Rumsey, Christopher; Carlson, Jan-Renee; Kleb, William

    2015-01-01

    The goal of this work was to quantify the uncertainty and sensitivity of commonly used turbulence models in Reynolds-Averaged Navier-Stokes codes due to uncertainty in the values of closure coefficients for transonic, wall-bounded flows and to rank the contribution of each coefficient to uncertainty in various output flow quantities of interest. Specifically, uncertainty quantification of turbulence model closure coefficients was performed for transonic flow over an axisymmetric bump at zero degrees angle of attack and the RAE 2822 transonic airfoil at a lift coefficient of 0.744. Three turbulence models were considered: the Spalart-Allmaras Model, Wilcox (2006) k-w Model, and the Menter Shear-Stress Trans- port Model. The FUN3D code developed by NASA Langley Research Center was used as the flow solver. The uncertainty quantification analysis employed stochastic expansions based on non-intrusive polynomial chaos as an efficient means of uncertainty propagation. Several integrated and point-quantities are considered as uncertain outputs for both CFD problems. All closure coefficients were treated as epistemic uncertain variables represented with intervals. Sobol indices were used to rank the relative contributions of each closure coefficient to the total uncertainty in the output quantities of interest. This study identified a number of closure coefficients for each turbulence model for which more information will reduce the amount of uncertainty in the output significantly for transonic, wall-bounded flows.

  20. Boolean Modeling of Neural Systems with Point-Process Inputs and Outputs. Part I: Theory and Simulations

    PubMed Central

    Marmarelis, Vasilis Z.; Zanos, Theodoros P.; Berger, Theodore W.

    2010-01-01

    This paper presents a new modeling approach for neural systems with point-process (spike) inputs and outputs that utilizes Boolean operators (i.e. modulo 2 multiplication and addition that correspond to the logical AND and OR operations respectively, as well as the AND_NOT logical operation representing inhibitory effects). The form of the employed mathematical models is akin to a “Boolean-Volterra” model that contains the product terms of all relevant input lags in a hierarchical order, where terms of order higher than first represent nonlinear interactions among the various lagged values of each input point-process or among lagged values of various inputs (if multiple inputs exist) as they reflect on the output. The coefficients of this Boolean-Volterra model are also binary variables that indicate the presence or absence of the respective term in each specific model/system. Simulations are used to explore the properties of such models and the feasibility of their accurate estimation from short data-records in the presence of noise (i.e. spurious spikes). The results demonstrate the feasibility of obtaining reliable estimates of such models, with excitatory and inhibitory terms, in the presence of considerable noise (spurious spikes) in the outputs and/or the inputs in a computationally efficient manner. A pilot application of this approach to an actual neural system is presented in the companion paper (Part II). PMID:19517238

  1. Human θ burst stimulation enhances subsequent motor learning and increases performance variability.

    PubMed

    Teo, James T H; Swayne, Orlando B C; Cheeran, Binith; Greenwood, Richard J; Rothwell, John C

    2011-07-01

    Intermittent theta burst stimulation (iTBS) transiently increases motor cortex excitability in healthy humans by a process thought to involve synaptic long-term potentiation (LTP), and this is enhanced by nicotine. Acquisition of a ballistic motor task is likewise accompanied by increased excitability and presumed intracortical LTP. Here, we test how iTBS and nicotine influences subsequent motor learning. Ten healthy subjects participated in a double-blinded placebo-controlled trial testing the effects of iTBS and nicotine. iTBS alone increased the rate of learning but this increase was blocked by nicotine. We then investigated factors other than synaptic strengthening that may play a role. Behavioral analysis and modeling suggested that iTBS increased performance variability, which correlated with learning outcome. A control experiment confirmed the increase in motor output variability by showing that iTBS increased the dispersion of involuntary transcranial magnetic stimulation-evoked thumb movements. We suggest that in addition to the effect on synaptic plasticity, iTBS may have facilitated performance by increasing motor output variability; nicotine negated this effect on variability perhaps via increasing the signal-to-noise ratio in cerebral cortex.

  2. Data analytics using canonical correlation analysis and Monte Carlo simulation

    NASA Astrophysics Data System (ADS)

    Rickman, Jeffrey M.; Wang, Yan; Rollett, Anthony D.; Harmer, Martin P.; Compson, Charles

    2017-07-01

    A canonical correlation analysis is a generic parametric model used in the statistical analysis of data involving interrelated or interdependent input and output variables. It is especially useful in data analytics as a dimensional reduction strategy that simplifies a complex, multidimensional parameter space by identifying a relatively few combinations of variables that are maximally correlated. One shortcoming of the canonical correlation analysis, however, is that it provides only a linear combination of variables that maximizes these correlations. With this in mind, we describe here a versatile, Monte-Carlo based methodology that is useful in identifying non-linear functions of the variables that lead to strong input/output correlations. We demonstrate that our approach leads to a substantial enhancement of correlations, as illustrated by two experimental applications of substantial interest to the materials science community, namely: (1) determining the interdependence of processing and microstructural variables associated with doped polycrystalline aluminas, and (2) relating microstructural decriptors to the electrical and optoelectronic properties of thin-film solar cells based on CuInSe2 absorbers. Finally, we describe how this approach facilitates experimental planning and process control.

  3. Mechanisms of long-term mean sea level variability in the North Sea

    NASA Astrophysics Data System (ADS)

    Dangendorf, Sönke; Calafat, Francisco; Øie Nilsen, Jan Even; Richter, Kristin; Jensen, Jürgen

    2015-04-01

    We examine mean sea level (MSL) variations in the North Sea on timescales ranging from months to decades under the consideration of different forcing factors since the late 19th century. We use multiple linear regression models, which are validated for the second half of the 20th century against the output of a state-of-the-art tide+surge model (HAMSOM), to determine the barotropic response of the ocean to fluctuations in atmospheric forcing. We demonstrate that local atmospheric forcing mainly triggers MSL variability on timescales up to a few years, with the inverted barometric effect dominating the variability along the UK and Norwegian coastlines and wind (piling up the water along the coast) controlling the MSL variability in the south from Belgium up to Denmark. However, in addition to the large inter-annual sea level variability there is also a considerable fraction of decadal scale variability. We show that on decadal timescales MSL variability in the North Sea mainly reflects steric changes, which are mostly remotely forced. A spatial correlation analysis of altimetry observations and baroclinic ocean model outputs suggests evidence for a coherent signal extending from the Norwegian shelf down to the Canary Islands. This supports the theory of longshore wind forcing along the eastern boundary of the North Atlantic causing coastally trapped waves to propagate along the continental slope. With a combination of oceanographic and meteorological measurements we demonstrate that ~80% of the decadal sea level variability in the North Sea can be explained as response of the ocean to longshore wind forcing, including boundary wave propagation in the Northeast Atlantic. These findings have important implications for (i) detecting significant accelerations in North Sea MSL, (ii) the conceptual set up of regional ocean models in terms of resolution and boundary conditions, and (iii) the development of adequate and realistic regional climate change projections.

  4. Modelling the distribution of chickens, ducks, and geese in China

    USGS Publications Warehouse

    Prosser, Diann J.; Wu, Junxi; Ellis, Erie C.; Gale, Fred; Van Boeckel, Thomas P.; Wint, William; Robinson, Tim; Xiao, Xiangming; Gilbert, Marius

    2011-01-01

    Global concerns over the emergence of zoonotic pandemics emphasize the need for high-resolution population distribution mapping and spatial modelling. Ongoing efforts to model disease risk in China have been hindered by a lack of available species level distribution maps for poultry. The goal of this study was to develop 1 km resolution population density models for China's chickens, ducks, and geese. We used an information theoretic approach to predict poultry densities based on statistical relationships between poultry census data and high-resolution agro-ecological predictor variables. Model predictions were validated by comparing goodness of fit measures (root mean square error and correlation coefficient) for observed and predicted values for 1/4 of the sample data which were not used for model training. Final output included mean and coefficient of variation maps for each species. We tested the quality of models produced using three predictor datasets and 4 regional stratification methods. For predictor variables, a combination of traditional predictors for livestock mapping and land use predictors produced the best goodness of fit scores. Comparison of regional stratifications indicated that for chickens and ducks, a stratification based on livestock production systems produced the best results; for geese, an agro-ecological stratification produced best results. However, for all species, each method of regional stratification produced significantly better goodness of fit scores than the global model. Here we provide descriptive methods, analytical comparisons, and model output for China's first high resolution, species level poultry distribution maps. Output will be made available to the scientific and public community for use in a wide range of applications from epidemiological studies to livestock policy and management initiatives.

  5. Modelling the distribution of chickens, ducks, and geese in China

    PubMed Central

    Prosser, Diann J.; Wu, Junxi; Ellis, Erle C.; Gale, Fred; Van Boeckel, Thomas P.; Wint, William; Robinson, Tim; Xiao, Xiangming; Gilbert, Marius

    2011-01-01

    Global concerns over the emergence of zoonotic pandemics emphasize the need for high-resolution population distribution mapping and spatial modelling. Ongoing efforts to model disease risk in China have been hindered by a lack of available species level distribution maps for poultry. The goal of this study was to develop 1 km resolution population density models for China’s chickens, ducks, and geese. We used an information theoretic approach to predict poultry densities based on statistical relationships between poultry census data and high-resolution agro-ecological predictor variables. Model predictions were validated by comparing goodness of fit measures (root mean square error and correlation coefficient) for observed and predicted values for ¼ of the sample data which was not used for model training. Final output included mean and coefficient of variation maps for each species. We tested the quality of models produced using three predictor datasets and 4 regional stratification methods. For predictor variables, a combination of traditional predictors for livestock mapping and land use predictors produced the best goodness of fit scores. Comparison of regional stratifications indicated that for chickens and ducks, a stratification based on livestock production systems produced the best results; for geese, an agro-ecological stratification produced best results. However, for all species, each method of regional stratification produced significantly better goodness of fit scores than the global model. Here we provide descriptive methods, analytical comparisons, and model output for China’s first high resolution, species level poultry distribution maps. Output will be made available to the scientific and public community for use in a wide range of applications from epidemiological studies to livestock policy and management initiatives. PMID:21765567

  6. UWB delay and multiply receiver

    DOEpatents

    Dallum, Gregory E.; Pratt, Garth C.; Haugen, Peter C.; Romero, Carlos E.

    2013-09-10

    An ultra-wideband (UWB) delay and multiply receiver is formed of a receive antenna; a variable gain attenuator connected to the receive antenna; a signal splitter connected to the variable gain attenuator; a multiplier having one input connected to an undelayed signal from the signal splitter and another input connected to a delayed signal from the signal splitter, the delay between the splitter signals being equal to the spacing between pulses from a transmitter whose pulses are being received by the receive antenna; a peak detection circuit connected to the output of the multiplier and connected to the variable gain attenuator to control the variable gain attenuator to maintain a constant amplitude output from the multiplier; and a digital output circuit connected to the output of the multiplier.

  7. Input variable selection for data-driven models of Coriolis flowmeters for two-phase flow measurement

    NASA Astrophysics Data System (ADS)

    Wang, Lijuan; Yan, Yong; Wang, Xue; Wang, Tao

    2017-03-01

    Input variable selection is an essential step in the development of data-driven models for environmental, biological and industrial applications. Through input variable selection to eliminate the irrelevant or redundant variables, a suitable subset of variables is identified as the input of a model. Meanwhile, through input variable selection the complexity of the model structure is simplified and the computational efficiency is improved. This paper describes the procedures of the input variable selection for the data-driven models for the measurement of liquid mass flowrate and gas volume fraction under two-phase flow conditions using Coriolis flowmeters. Three advanced input variable selection methods, including partial mutual information (PMI), genetic algorithm-artificial neural network (GA-ANN) and tree-based iterative input selection (IIS) are applied in this study. Typical data-driven models incorporating support vector machine (SVM) are established individually based on the input candidates resulting from the selection methods. The validity of the selection outcomes is assessed through an output performance comparison of the SVM based data-driven models and sensitivity analysis. The validation and analysis results suggest that the input variables selected from the PMI algorithm provide more effective information for the models to measure liquid mass flowrate while the IIS algorithm provides a fewer but more effective variables for the models to predict gas volume fraction.

  8. A methodology for long range prediction of air transportation

    NASA Technical Reports Server (NTRS)

    Ayati, M. B.; English, J. M.

    1980-01-01

    The paper describes the methodology for long-time projection of aircraft fuel requirements. A new concept of social and economic factors for future aviation industry which provides an estimate of predicted fuel usage is presented; it includes air traffic forecasts and lead times for producing new engines and aircraft types. An air transportation model is then developed in terms of an abstracted set of variables which represent the entire aircraft industry on a macroscale. This model was evaluated by testing the required output variables from a model based on historical data over the past decades.

  9. Precision digital pulse phase generator

    DOEpatents

    McEwan, T.E.

    1996-10-08

    A timing generator comprises a crystal oscillator connected to provide an output reference pulse. A resistor-capacitor combination is connected to provide a variable-delay output pulse from an input connected to the crystal oscillator. A phase monitor is connected to provide duty-cycle representations of the reference and variable-delay output pulse phase. An operational amplifier drives a control voltage to the resistor-capacitor combination according to currents integrated from the phase monitor and injected into summing junctions. A digital-to-analog converter injects a control current into the summing junctions according to an input digital control code. A servo equilibrium results that provides a phase delay of the variable-delay output pulse to the output reference pulse that linearly depends on the input digital control code. 2 figs.

  10. Precision digital pulse phase generator

    DOEpatents

    McEwan, Thomas E.

    1996-01-01

    A timing generator comprises a crystal oscillator connected to provide an output reference pulse. A resistor-capacitor combination is connected to provide a variable-delay output pulse from an input connected to the crystal oscillator. A phase monitor is connected to provide duty-cycle representations of the reference and variable-delay output pulse phase. An operational amplifier drives a control voltage to the resistor-capacitor combination according to currents integrated from the phase monitor and injected into summing junctions. A digital-to-analog converter injects a control current into the summing junctions according to an input digital control code. A servo equilibrium results that provides a phase delay of the variable-delay output pulse to the output reference pulse that linearly depends on the input digital control code.

  11. "Development of an interactive crop growth web service architecture to review and forecast agricultural sustainability"

    NASA Astrophysics Data System (ADS)

    Seamon, E.; Gessler, P. E.; Flathers, E.; Walden, V. P.

    2014-12-01

    As climate change and weather variability raise issues regarding agricultural production, agricultural sustainability has become an increasingly important component for farmland management (Fisher, 2005, Akinci, 2013). Yet with changes in soil quality, agricultural practices, weather, topography, land use, and hydrology - accurately modeling such agricultural outcomes has proven difficult (Gassman et al, 2007, Williams et al, 1995). This study examined agricultural sustainability and soil health over a heterogeneous multi-watershed area within the Inland Pacific Northwest of the United States (IPNW) - as part of a five year, USDA funded effort to explore the sustainability of cereal production systems (Regional Approaches to Climate Change for Pacific Northwest Agriculture - award #2011-68002-30191). In particular, crop growth and soil erosion were simulated across a spectrum of variables and time periods - using the CropSyst crop growth model (Stockle et al, 2002) and the Water Erosion Protection Project Model (WEPP - Flanagan and Livingston, 1995), respectively. A preliminary range of historical scenarios were run, using a high-resolution, 4km gridded dataset of surface meteorological variables from 1979-2010 (Abatzoglou, 2012). In addition, Coupled Model Inter-comparison Project (CMIP5) global climate model (GCM) outputs were used as input to run crop growth model and erosion future scenarios (Abatzoglou and Brown, 2011). To facilitate our integrated data analysis efforts, an agricultural sustainability web service architecture (THREDDS/Java/Python based) is under development, to allow for the programmatic uploading, sharing and processing of variable input data, running model simulations, as well as downloading and visualizing output results. The results of this study will assist in better understanding agricultural sustainability and erosion relationships in the IPNW, as well as provide a tangible server-based tool for use by researchers and farmers - for both small scale field examination, or more regionalized scenarios.

  12. An improved state-parameter analysis of ecosystem models using data assimilation

    USGS Publications Warehouse

    Chen, M.; Liu, S.; Tieszen, L.L.; Hollinger, D.Y.

    2008-01-01

    Much of the effort spent in developing data assimilation methods for carbon dynamics analysis has focused on estimating optimal values for either model parameters or state variables. The main weakness of estimating parameter values alone (i.e., without considering state variables) is that all errors from input, output, and model structure are attributed to model parameter uncertainties. On the other hand, the accuracy of estimating state variables may be lowered if the temporal evolution of parameter values is not incorporated. This research develops a smoothed ensemble Kalman filter (SEnKF) by combining ensemble Kalman filter with kernel smoothing technique. SEnKF has following characteristics: (1) to estimate simultaneously the model states and parameters through concatenating unknown parameters and state variables into a joint state vector; (2) to mitigate dramatic, sudden changes of parameter values in parameter sampling and parameter evolution process, and control narrowing of parameter variance which results in filter divergence through adjusting smoothing factor in kernel smoothing algorithm; (3) to assimilate recursively data into the model and thus detect possible time variation of parameters; and (4) to address properly various sources of uncertainties stemming from input, output and parameter uncertainties. The SEnKF is tested by assimilating observed fluxes of carbon dioxide and environmental driving factor data from an AmeriFlux forest station located near Howland, Maine, USA, into a partition eddy flux model. Our analysis demonstrates that model parameters, such as light use efficiency, respiration coefficients, minimum and optimum temperatures for photosynthetic activity, and others, are highly constrained by eddy flux data at daily-to-seasonal time scales. The SEnKF stabilizes parameter values quickly regardless of the initial values of the parameters. Potential ecosystem light use efficiency demonstrates a strong seasonality. Results show that the simultaneous parameter estimation procedure significantly improves model predictions. Results also show that the SEnKF can dramatically reduce the variance in state variables stemming from the uncertainty of parameters and driving variables. The SEnKF is a robust and effective algorithm in evaluating and developing ecosystem models and in improving the understanding and quantification of carbon cycle parameters and processes. ?? 2008 Elsevier B.V.

  13. Five-wave-packet quantum error correction based on continuous-variable cluster entanglement

    PubMed Central

    Hao, Shuhong; Su, Xiaolong; Tian, Caixing; Xie, Changde; Peng, Kunchi

    2015-01-01

    Quantum error correction protects the quantum state against noise and decoherence in quantum communication and quantum computation, which enables one to perform fault-torrent quantum information processing. We experimentally demonstrate a quantum error correction scheme with a five-wave-packet code against a single stochastic error, the original theoretical model of which was firstly proposed by S. L. Braunstein and T. A. Walker. Five submodes of a continuous variable cluster entangled state of light are used for five encoding channels. Especially, in our encoding scheme the information of the input state is only distributed on three of the five channels and thus any error appearing in the remained two channels never affects the output state, i.e. the output quantum state is immune from the error in the two channels. The stochastic error on a single channel is corrected for both vacuum and squeezed input states and the achieved fidelities of the output states are beyond the corresponding classical limit. PMID:26498395

  14. Surface Water and Energy Budgets for Sub-Saharan Africa in GFDL Coupled Climate Model

    NASA Astrophysics Data System (ADS)

    Tian, D.; Wood, E. F.; Vecchi, G. A.; Jia, L.; Pan, M.

    2015-12-01

    This study compare surface water and energy budget variables from the Geophysical Fluid Dynamics Laboratory (GFDL) FLOR models with the National Centers for Environmental Prediction (NCEP) Climate Forecast System Reanalysis (CFSR), Princeton University Global Meteorological Forcing Dataset (PGF), and PGF-driven Variable Infiltration Capacity (VIC) model outputs, as well as available observations over the sub-Saharan Africa. The comparison was made for four configurations of the FLOR models that included FLOR phase 1 (FLOR-p1) and phase 2 (FLOR-p2) and two phases of flux adjusted versions (FLOR-FA-p1 and FLOR-FA-p2). Compared to p1, simulated atmospheric states in p2 were nudged to the Modern-Era Retrospective Analysis for Research and Applications (MERRA) reanalysis. The seasonal cycle and annual mean of major surface water (precipitation, evapotranspiration, runoff, and change of storage) and energy variables (sensible heat, ground heat, latent heat, net solar radiation, net longwave radiation, and skin temperature) over a 34-yr period during 1981-2014 were compared in different regions in sub-Saharan Africa (West Africa, East Africa, and Southern Africa). In addition to evaluating the means in three sub-regions, empirical orthogonal functions (EOFs) analyses were conducted to compare both spatial and temporal characteristics of water and energy budget variables from four versions of GFDL FLOR, NCEP CFSR, PGF, and VIC outputs. This presentation will show how well each coupled climate model represented land surface physics and reproduced spatiotemporal characteristics of surface water and energy budget variables. We discuss what caused differences in surface water and energy budgets in land surface components of coupled climate model, climate reanalysis, and reanalysis driven land surface model. The comparisons will reveal whether flux adjustment and nudging would improve depiction of the surface water and energy budgets in coupled climate models.

  15. Improved first-order uncertainty method for water-quality modeling

    USGS Publications Warehouse

    Melching, C.S.; Anmangandla, S.

    1992-01-01

    Uncertainties are unavoidable in water-quality modeling and subsequent management decisions. Monte Carlo simulation and first-order uncertainty analysis (involving linearization at central values of the uncertain variables) have been frequently used to estimate probability distributions for water-quality model output due to their simplicity. Each method has its drawbacks: Monte Carlo simulation's is mainly computational time; and first-order analysis are mainly questions of accuracy and representativeness, especially for nonlinear systems and extreme conditions. An improved (advanced) first-order method is presented, where the linearization point varies to match the output level whose exceedance probability is sought. The advanced first-order method is tested on the Streeter-Phelps equation to estimate the probability distribution of critical dissolved-oxygen deficit and critical dissolved oxygen using two hypothetical examples from the literature. The advanced first-order method provides a close approximation of the exceedance probability for the Streeter-Phelps model output estimated by Monte Carlo simulation using less computer time - by two orders of magnitude - regardless of the probability distributions assumed for the uncertain model parameters.

  16. GMLC Extreme Event Modeling -- Slow-Dynamics Models for Renewable Energy Resources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Korkali, M.; Min, L.

    The need for slow dynamics models of renewable resources in cascade modeling essentially arises from the challenges associated with the increased use of solar and wind electric power. Indeed, the main challenge is that the power produced by wind and sunlight is not consistent; thus, renewable energy resources tend to have variable output power on many different timescales, including the timescales that a cascade unfolds.

  17. Dry-bean production under climate change conditions in the north of Argentina: Risk assessment and economic implications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Feijoo, M.; Mestre, F.; Castagnaro, A.

    This study evaluates the potential effect of climate change on Dry-bean production in Argentina, combining climate models, a crop productivity model and a yield response model estimation of climate variables on crop yields. The study was carried out in the North agricultural regions of Jujuy, Salta, Santiago del Estero and Tucuman which include the largest areas of Argentina where dry beans are grown as a high input crop. The paper combines the output from a crop model with different techniques of analysis. The scenarios used in this study were generated from the output of two General Circulation Models (GCMs): themore » Goddard Institute for Space Studies model (GISS) and the Canadian Climate Change Model (CCCM). The study also includes a preliminary evaluation of the potential changes in monetary returns taking into account the possible variability of yields and prices, using mean-Gini stochastic dominance (MGSD). The results suggest that large climate change may have a negative impact on the Argentine agriculture sector, due to the high relevance of this product in the export sector. The difference negative effect depends on the varieties of dry bean and also the General Circulation Model scenarios considered for double levels of atmospheric carbon dioxide.« less

  18. A variable-gain output feedback control design approach

    NASA Technical Reports Server (NTRS)

    Haylo, Nesim

    1989-01-01

    A multi-model design technique to find a variable-gain control law defined over the whole operating range is proposed. The design is formulated as an optimal control problem which minimizes a cost function weighing the performance at many operating points. The solution is obtained by embedding into the Multi-Configuration Control (MCC) problem, a multi-model robust control design technique. In contrast to conventional gain scheduling which uses a curve fit of single model designs, the optimal variable-gain control law stabilizes the plant at every operating point included in the design. An iterative algorithm to compute the optimal control gains is presented. The methodology has been successfully applied to reconfigurable aircraft flight control and to nonlinear flight control systems.

  19. Bias correction of temperature produced by the Community Climate System Model using Artificial Neural Networks

    NASA Astrophysics Data System (ADS)

    Moghim, S.; Hsu, K.; Bras, R. L.

    2013-12-01

    General Circulation Models (GCMs) are used to predict circulation and energy transfers between the atmosphere and the land. It is known that these models produce biased results that will have impact on their uses. This work proposes a new method for bias correction: the equidistant cumulative distribution function-artificial neural network (EDCDFANN) procedure. The method uses artificial neural networks (ANNs) as a surrogate model to estimate bias-corrected temperature, given an identification of the system derived from GCM models output variables. A two-layer feed forward neural network is trained with observations during a historical period and then the adjusted network can be used to predict bias-corrected temperature for future periods. To capture the extreme values this method is combined with the equidistant CDF matching method (EDCDF, Li et al. 2010). The proposed method is tested with the Community Climate System Model (CCSM3) outputs using air and skin temperature, specific humidity, shortwave and longwave radiation as inputs to the ANN. This method decreases the mean square error and increases the spatial correlation between the modeled temperature and the observed one. The results indicate the EDCDFANN has potential to remove the biases of the model outputs.

  20. Correction of I/Q channel errors without calibration

    DOEpatents

    Doerry, Armin W.; Tise, Bertice L.

    2002-01-01

    A method of providing a balanced demodular output for a signal such as a Doppler radar having an analog pulsed input; includes adding a variable phase shift as a function of time to the input signal, applying the phase shifted input signal to a demodulator; and generating a baseband signal from the input signal. The baseband signal is low-pass filtered and converted to a digital output signal. By removing the variable phase shift from the digital output signal, a complex data output is formed that is representative of the output of a balanced demodulator.

  1. The contribution of natural variability to GCM bias: Can we effectively bias-correct climate projections?

    NASA Astrophysics Data System (ADS)

    McAfee, S. A.; DeLaFrance, A.

    2017-12-01

    Investigating the impacts of climate change often entails using projections from inherently imperfect general circulation models (GCMs) to drive models that simulate biophysical or societal systems in great detail. Error or bias in the GCM output is often assessed in relation to observations, and the projections are adjusted so that the output from impacts models can be compared to historical or observed conditions. Uncertainty in the projections is typically accommodated by running more than one future climate trajectory to account for differing emissions scenarios, model simulations, and natural variability. The current methods for dealing with error and uncertainty treat them as separate problems. In places where observed and/or simulated natural variability is large, however, it may not be possible to identify a consistent degree of bias in mean climate, blurring the lines between model error and projection uncertainty. Here we demonstrate substantial instability in mean monthly temperature bias across a suite of GCMs used in CMIP5. This instability is greatest in the highest latitudes during the cool season, where shifts from average temperatures below to above freezing could have profound impacts. In models with the greatest degree of bias instability, the timing of regional shifts from below to above average normal temperatures in a single climate projection can vary by about three decades, depending solely on the degree of bias assessed. This suggests that current bias correction methods based on comparison to 20- or 30-year normals may be inappropriate, particularly in the polar regions.

  2. A Primary Care Workload Production Model for Estimating Relative Value Unit Output

    DTIC Science & Technology

    2011-03-01

    for Medicare and Medicaid Services, Office of the Actuary , National Health Statistics Group; and U.S. Department of Commerce, Bureau of Economic...The systematic variation in a relationship can be represented by a mathematical expression, whereas stochastic variation cannot. Further, stochastic...expressed mathematically as an equation, whereby a response variable Y is fitted to a function of “regressor variables and parameters” (SAS©, 2010). A

  3. A Simple Model of the Pulmonary Circulation for Hemodynamic Study and Examination.

    ERIC Educational Resources Information Center

    Gaar, Kermit A., Jr.

    1983-01-01

    Describes a computer program allowing students to study such circulatory variables as venus return, cardiac output, mean circulatory filling pressure, resistance to venous return, and equilibrium point. Documentation for this Applesoft program (or diskette) is available from author. (JM)

  4. Inferential consequences of modeling rather than measuring snow accumulation in studies of animal ecology

    USGS Publications Warehouse

    Cross, Paul C.; Klaver, Robert W.; Brennan, Angela; Creel, Scott; Beckmann, Jon P.; Higgs, Megan D.; Scurlock, Brandon M.

    2013-01-01

    Abstract. It is increasingly common for studies of animal ecology to use model-based predictions of environmental variables as explanatory or predictor variables, even though model prediction uncertainty is typically unknown. To demonstrate the potential for misleading inferences when model predictions with error are used in place of direct measurements, we compared snow water equivalent (SWE) and snow depth as predicted by the Snow Data Assimilation System (SNODAS) to field measurements of SWE and snow depth. We examined locations on elk (Cervus canadensis) winter ranges in western Wyoming, because modeled data such as SNODAS output are often used for inferences on elk ecology. Overall, SNODAS predictions tended to overestimate field measurements, prediction uncertainty was high, and the difference between SNODAS predictions and field measurements was greater in snow shadows for both snow variables compared to non-snow shadow areas. We used a simple simulation of snow effects on the probability of an elk being killed by a predator to show that, if SNODAS prediction uncertainty was ignored, we might have mistakenly concluded that SWE was not an important factor in where elk were killed in predatory attacks during the winter. In this simulation, we were interested in the effects of snow at finer scales (2) than the resolution of SNODAS. If bias were to decrease when SNODAS predictions are averaged over coarser scales, SNODAS would be applicable to population-level ecology studies. In our study, however, averaging predictions over moderate to broad spatial scales (9–2200 km2) did not reduce the differences between SNODAS predictions and field measurements. This study highlights the need to carefully evaluate two issues when using model output as an explanatory variable in subsequent analysis: (1) the model’s resolution relative to the scale of the ecological question of interest and (2) the implications of prediction uncertainty on inferences when using model predictions as explanatory or predictor variables.

  5. Evaluation of simulated ocean carbon in the CMIP5 earth system models

    NASA Astrophysics Data System (ADS)

    Orr, James; Brockmann, Patrick; Seferian, Roland; Servonnat, Jérôme; Bopp, Laurent

    2013-04-01

    We maintain a centralized model output archive containing output from the previous generation of Earth System Models (ESMs), 7 models used in the IPCC AR4 assessment. Output is in a common format located on a centralized server and is publicly available through a web interface. Through the same interface, LSCE/IPSL has also made available output from the Coupled Model Intercomparison Project (CMIP5), the foundation for the ongoing IPCC AR5 assessment. The latter includes ocean biogeochemical fields from more than 13 ESMs. Modeling partners across 3 EU projects refer to the combined AR4-AR5 archive and comparison as OCMIP5, building on previous phases of OCMIP (Ocean Carbon Cycle Intercomparison Project) and making a clear link to IPCC AR5 (CMIP5). While now focusing on assessing the latest generation of results (AR5, CMIP5), this effort is also able to put them in context (AR4). For model comparison and evaluation, we have also stored computed derived variables (e.g., those needed to assess ocean acidification) and key fields regridded to a common 1°x1° grid, thus complementing the standard CMIP5 archive. The combined AR4-AR5 output (OCMIP5) has been used to compute standard quantitative metrics, both global and regional, and those have been synthesized with summary diagrams. In addition, for key biogeochemical fields we have deconvolved spatiotemporal components of the mean square error in order to constrain which models go wrong where. Here we will detail results from these evaluations which have exploited gridded climatological data. The archive, interface, and centralized evaluation provide a solid technical foundation, upon which collaboration and communication is being broadened in the ocean biogeochemical modeling community. Ultimately we aim to encourage wider use of the OCMIP5 archive.

  6. Observational Diagnoses of Extratropical Ozone STE During the Aura Era

    NASA Technical Reports Server (NTRS)

    Olsen, Mark A.; Douglass, Anne R.; Witte, Jacquie C.; Kaplan, Trevor B.

    2011-01-01

    The transport of ozone from the stratosphere to the extratropical troposphere is an important boundary condition to tropospheric chemistry. However, previous direct estimates from models and indirect estimates from observations have poorly constrained the magnitude of ozone stratosphere-troposphere exchange (STE). In this study we provide a direct diagnosis of the extratropical ozone STE using data from the Microwave Limb Sounder on Aura and output of the MERRA reanalysis over the time period from 2005 to the present. We find that the mean annual STE is about 275 Tg/yr and 205 Tg/yr in the NH and SH, respectively. The interannual variability of the magnitude is about twice as great in the NH than the SH. We find that this variability is dominated by the seasonal variability during the late winter and spring. A comparison of the ozone flux to the mass flux reveals that there is not a simple relationship between the two quantities. This presentation will also examine the magnitude and distribution of ozone in the lower stratosphere relative to the years of maximum and minimum ozone STE. Finally, we will examine any possible signature of increased ozone STE in the troposphere using sonde and tropospheric ozone residual (TOR) data, and output from the Global Modeling Initiative Chemistry Transport Model (GMI CTM).

  7. A user interface for the Kansas Geological Survey slug test model.

    PubMed

    Esling, Steven P; Keller, John E

    2009-01-01

    The Kansas Geological Survey (KGS) developed a semianalytical solution for slug tests that incorporates the effects of partial penetration, anisotropy, and the presence of variable conductivity well skins. The solution can simulate either confined or unconfined conditions. The original model, written in FORTRAN, has a text-based interface with rigid input requirements and limited output options. We re-created the main routine for the KGS model as a Visual Basic macro that runs in most versions of Microsoft Excel and built a simple-to-use Excel spreadsheet interface that automatically displays the graphical results of the test. A comparison of the output from the original FORTRAN code to that of the new Excel spreadsheet version for three cases produced identical results.

  8. A new adaptive estimation method of spacecraft thermal mathematical model with an ensemble Kalman filter

    NASA Astrophysics Data System (ADS)

    Akita, T.; Takaki, R.; Shima, E.

    2012-04-01

    An adaptive estimation method of spacecraft thermal mathematical model is presented. The method is based on the ensemble Kalman filter, which can effectively handle the nonlinearities contained in the thermal model. The state space equations of the thermal mathematical model is derived, where both temperature and uncertain thermal characteristic parameters are considered as the state variables. In the method, the thermal characteristic parameters are automatically estimated as the outputs of the filtered state variables, whereas, in the usual thermal model correlation, they are manually identified by experienced engineers using trial-and-error approach. A numerical experiment of a simple small satellite is provided to verify the effectiveness of the presented method.

  9. Large eddy simulations and reduced models of the Unsteady Atmospheric Boundary Layer

    NASA Astrophysics Data System (ADS)

    Momen, M.; Bou-Zeid, E.

    2013-12-01

    Most studies of the dynamics of Atmospheric Boundary Layers (ABLs) have focused on steady geostrophic conditions, such as the classic Ekman boundary layer problem. However, real-world ABLs are driven by a time-dependent geostrophic forcing that changes at sub-diurnal scales. Hence, to advance our understanding of the dynamics of atmospheric flows, and to improve their modeling, the unsteady cases have to be analyzed and understood. This is particularly relevant to new applications related to wind energy (e.g. short-term forecast of wind power changes) and pollutant dispersion (forecasting of rapid changes in wind velocity and direction after an accidental spill), as well as to classic weather prediction and hydrometeorological applications. The present study aims to investigate the ABL behavior under variable forcing and to derive a simple model to predict the ABL response under these forcing fluctuations. Simplifications of the governing Navier-Stokes equations, with the Coriolis force, are tested using LES and then applied to derive a physical model of the unsteady ABL. LES is then exploited again to validate the analogy and the output of the simpler model. Results from the analytical model, as well as LES outputs, open the way for inertial oscillations to play an important role in the dynamics. Several simulations with different variable forcing patterns are then conducted to investigate some of the characteristics of the unsteady ABL such as resonant frequency, ABL response time, equilibrium states, etc. The variability of wind velocity profiles and hodographs, turbulent kinetic energy, and vertical profiles of the total stress and potential temperature are also examined. Wind Hodograph of the Unsteady ABL at Different Heights - This figure shows fluctuations in the mean u and v components of the velocity as time passes due to variable geostrophic forcing

  10. The N-BOD2 user's and programmer's manual

    NASA Technical Reports Server (NTRS)

    Frisch, H. P.

    1978-01-01

    A general purpose digital computer program was developed and designed to aid in the analysis of spacecraft attitude dynamics. The program provides the analyst with the capability of automatically deriving and numerically solving the equations of motion of any system that can be modeled as a topological tree of coupled rigid bodies, flexible bodies, point masses, and symmetrical momentum wheels. Two modes of output are available. The composite system equations of motion may be outputted on a line printer in a symbolic form that may be easily translated into common vector-dyadic notation, or the composite system equations of motion may be solved numerically and any desirable set of system state variables outputted as a function of time.

  11. Grey-box state-space identification of nonlinear mechanical vibrations

    NASA Astrophysics Data System (ADS)

    Noël, J. P.; Schoukens, J.

    2018-05-01

    The present paper deals with the identification of nonlinear mechanical vibrations. A grey-box, or semi-physical, nonlinear state-space representation is introduced, expressing the nonlinear basis functions using a limited number of measured output variables. This representation assumes that the observed nonlinearities are localised in physical space, which is a generic case in mechanics. A two-step identification procedure is derived for the grey-box model parameters, integrating nonlinear subspace initialisation and weighted least-squares optimisation. The complete procedure is applied to an electrical circuit mimicking the behaviour of a single-input, single-output (SISO) nonlinear mechanical system and to a single-input, multiple-output (SIMO) geometrically nonlinear beam structure.

  12. A downscaling method for the assessment of local climate change

    NASA Astrophysics Data System (ADS)

    Bruno, E.; Portoghese, I.; Vurro, M.

    2009-04-01

    The use of complimentary models is necessary to study the impact of climate change scenarios on the hydrological response at different space-time scales. However, the structure of GCMs is such that their space resolution (hundreds of kilometres) is too coarse and not adequate to describe the variability of extreme events at basin scale (Burlando and Rosso, 2002). To bridge the space-time gap between the climate scenarios and the usual scale of the inputs for hydrological prediction models is a fundamental requisite for the evaluation of climate change impacts on water resources. Since models operate a simplification of a complex reality, their results cannot be expected to fit with climate observations. Identifying local climate scenarios for impact analysis implies the definition of more detailed local scenario by downscaling GCMs or RCMs results. Among the output correction methods we consider the statistical approach by Déqué (2007) reported as a ‘Variable correction method' in which the correction of model outputs is obtained by a function build with the observation dataset and operating a quantile-quantile transformation (Q-Q transform). However, in the case of daily precipitation fields the Q-Q transform is not able to correct the temporal property of the model output concerning the dry-wet lacunarity process. An alternative correction method is proposed based on a stochastic description of the arrival-duration-intensity processes in coherence with the Poissonian Rectangular Pulse scheme (PRP) (Eagleson, 1972). In this proposed approach, the Q-Q transform is applied to the PRP variables derived from the daily rainfall datasets. Consequently the corrected PRP parameters are used for the synthetic generation of statistically homogeneous rainfall time series that mimic the persistency of daily observations for the reference period. Then the PRP parameters are forced through the GCM scenarios to generate local scale rainfall records for the 21st century. The statistical parameters characterizing daily storm occurrence, storm intensity and duration needed to apply the PRP scheme are considered among STARDEX collection of extreme indices.

  13. The influence of lower leg configurations on muscle force variability.

    PubMed

    Ofori, Edward; Shim, Jaeho; Sosnoff, Jacob J

    2018-04-11

    The maintenance of steady contractions is required in many daily tasks. However, there is little understanding of how various lower limb configurations influence the ability to maintain force. The purpose of the current investigation was to examine the influence of joint angle on various lower-limb constant force contractions. Nineteen adults performed knee extension, knee flexion, and ankle plantarflexion isometric force contractions to 11 target forces, ranging from 2 to 95% maximal voluntary contraction (MVC) at 2 angles. Force variability was quantified with mean force, standard deviation, and the coefficient of variation of force output. Non-linearities in force output were quantified with approximate entropy. Curve fitting analyses were performed on each set of data from each individual across contractions to further examine whether joint angle interacts with global functions of lower-limb force variability. Joint angle had significant effects on the model parameters used to describe the force-variability function for each muscle contraction (p < 0.05). Regularities in force output were more explained by force level in smaller angle conditions relative to the larger angle conditions (p < 0.05). The findings support the notion that limb configuration influences the magnitude and regularities in force production. Biomechanical factors, such as joint angle, along with neurophysiological factors should be considered together in the discussion of the dynamics of constant force production. Copyright © 2018 Elsevier Ltd. All rights reserved.

  14. Spatiotemporal variability of biogenic terpenoid emissions in Pearl River Delta, China, with high-resolution land-cover and meteorological data

    NASA Astrophysics Data System (ADS)

    Wang, Xuemei; Situ, Shuping; Guenther, Alex; Chen, Fei; Wu, Zhiyong; Xia, Beicheng; Wang, Tijian

    2011-04-01

    This study intended to provide 4-km gridded, hourly, year-long, regional estimates of terpenoid emissions in the Pearl River Delta (PRD), China. It combined Thematic Mapper images and local-survey data to characterize plant functional types, and used observed emission potential of biogenic volatile organic compounds (BVOC) from local plant species and high-resolution meteorological outputs from the MM5 model to constrain the MEGAN BVOC-emission model. The estimated annual emissions for isoprene, monoterpene and sesquiterpene are 95.55 × 106 kg C, 117.35 × 106 kg C and 9.77 × 106 kg C, respectively. The results show strong variabilities of terpenoid emissions spanning diurnal and seasonal time scales, which are mainly distributed in the remote areas (with more vegetation and less economic development) in PRD. Using MODIS PFTs data reduced terpenoid emissions by 27% in remote areas. Using MEGAN-model default emission factors led to a 24% increase in BVOC emission. The model errors of temperature and radiation in MM5 output were used to assess impacts of uncertainties in meteorological forcing on emissions: increasing (decreasing) temperature and downward shortwave radiation produces more (less) terpenoid emissions for July and January. Strong temporal variability of terpenoid emissions leads to enhanced ozone formation during midday in rural areas where the anthropogenic VOC emissions are limited.

  15. Specification and Verification of Medical Monitoring System Using Petri-nets.

    PubMed

    Majma, Negar; Babamir, Seyed Morteza

    2014-07-01

    To monitor the patient behavior, data are collected from patient's body by a medical monitoring device so as to calculate the output using embedded software. Incorrect calculations may endanger the patient's life if the software fails to meet the patient's requirements. Accordingly, the veracity of the software behavior is a matter of concern in the medicine; moreover, the data collected from the patient's body are fuzzy. Some methods have already dealt with monitoring the medical monitoring devices; however, model based monitoring fuzzy computations of such devices have been addressed less. The present paper aims to present synthesizing a fuzzy Petri-net (FPN) model to verify behavior of a sample medical monitoring device called continuous infusion insulin (INS) because Petri-net (PN) is one of the formal and visual methods to verify the software's behavior. The device is worn by the diabetic patients and then the software calculates the INS dose and makes a decision for injection. The input and output of the infusion INS software are not crisp in the real world; therefore, we present them in fuzzy variables. Afterwards, we use FPN instead of clear PN to model the fuzzy variables. The paper follows three steps to synthesize an FPN to deal with verification of the infusion INS device: (1) Definition of fuzzy variables, (2) definition of fuzzy rules and (3) design of the FPN model to verify the software behavior.

  16. New distributed fusion filtering algorithm based on covariances over sensor networks with random packet dropouts

    NASA Astrophysics Data System (ADS)

    Caballero-Águila, R.; Hermoso-Carazo, A.; Linares-Pérez, J.

    2017-07-01

    This paper studies the distributed fusion estimation problem from multisensor measured outputs perturbed by correlated noises and uncertainties modelled by random parameter matrices. Each sensor transmits its outputs to a local processor over a packet-erasure channel and, consequently, random losses may occur during transmission. Different white sequences of Bernoulli variables are introduced to model the transmission losses. For the estimation, each lost output is replaced by its estimator based on the information received previously, and only the covariances of the processes involved are used, without requiring the signal evolution model. First, a recursive algorithm for the local least-squares filters is derived by using an innovation approach. Then, the cross-correlation matrices between any two local filters is obtained. Finally, the distributed fusion filter weighted by matrices is obtained from the local filters by applying the least-squares criterion. The performance of the estimators and the influence of both sensor uncertainties and transmission losses on the estimation accuracy are analysed in a numerical example.

  17. Can bias correction and statistical downscaling methods improve the skill of seasonal precipitation forecasts?

    NASA Astrophysics Data System (ADS)

    Manzanas, R.; Lucero, A.; Weisheimer, A.; Gutiérrez, J. M.

    2018-02-01

    Statistical downscaling methods are popular post-processing tools which are widely used in many sectors to adapt the coarse-resolution biased outputs from global climate simulations to the regional-to-local scale typically required by users. They range from simple and pragmatic Bias Correction (BC) methods, which directly adjust the model outputs of interest (e.g. precipitation) according to the available local observations, to more complex Perfect Prognosis (PP) ones, which indirectly derive local predictions (e.g. precipitation) from appropriate upper-air large-scale model variables (predictors). Statistical downscaling methods have been extensively used and critically assessed in climate change applications; however, their advantages and limitations in seasonal forecasting are not well understood yet. In particular, a key problem in this context is whether they serve to improve the forecast quality/skill of raw model outputs beyond the adjustment of their systematic biases. In this paper we analyze this issue by applying two state-of-the-art BC and two PP methods to downscale precipitation from a multimodel seasonal hindcast in a challenging tropical region, the Philippines. To properly assess the potential added value beyond the reduction of model biases, we consider two validation scores which are not sensitive to changes in the mean (correlation and reliability categories). Our results show that, whereas BC methods maintain or worsen the skill of the raw model forecasts, PP methods can yield significant skill improvement (worsening) in cases for which the large-scale predictor variables considered are better (worse) predicted by the model than precipitation. For instance, PP methods are found to increase (decrease) model reliability in nearly 40% of the stations considered in boreal summer (autumn). Therefore, the choice of a convenient downscaling approach (either BC or PP) depends on the region and the season.

  18. The Impact of Parametric Uncertainties on Biogeochemistry in the E3SM Land Model

    NASA Astrophysics Data System (ADS)

    Ricciuto, Daniel; Sargsyan, Khachik; Thornton, Peter

    2018-02-01

    We conduct a global sensitivity analysis (GSA) of the Energy Exascale Earth System Model (E3SM), land model (ELM) to calculate the sensitivity of five key carbon cycle outputs to 68 model parameters. This GSA is conducted by first constructing a Polynomial Chaos (PC) surrogate via new Weighted Iterative Bayesian Compressive Sensing (WIBCS) algorithm for adaptive basis growth leading to a sparse, high-dimensional PC surrogate with 3,000 model evaluations. The PC surrogate allows efficient extraction of GSA information leading to further dimensionality reduction. The GSA is performed at 96 FLUXNET sites covering multiple plant functional types (PFTs) and climate conditions. About 20 of the model parameters are identified as sensitive with the rest being relatively insensitive across all outputs and PFTs. These sensitivities are dependent on PFT, and are relatively consistent among sites within the same PFT. The five model outputs have a majority of their highly sensitive parameters in common. A common subset of sensitive parameters is also shared among PFTs, but some parameters are specific to certain types (e.g., deciduous phenology). The relative importance of these parameters shifts significantly among PFTs and with climatic variables such as mean annual temperature.

  19. Study on optimization of the short-term operation of cascade hydropower stations by considering output error

    NASA Astrophysics Data System (ADS)

    Wang, Liping; Wang, Boquan; Zhang, Pu; Liu, Minghao; Li, Chuangang

    2017-06-01

    The study of reservoir deterministic optimal operation can improve the utilization rate of water resource and help the hydropower stations develop more reasonable power generation schedules. However, imprecise forecasting inflow may lead to output error and hinder implementation of power generation schedules. In this paper, output error generated by the uncertainty of the forecasting inflow was regarded as a variable to develop a short-term reservoir optimal operation model for reducing operation risk. To accomplish this, the concept of Value at Risk (VaR) was first applied to present the maximum possible loss of power generation schedules, and then an extreme value theory-genetic algorithm (EVT-GA) was proposed to solve the model. The cascade reservoirs of Yalong River Basin in China were selected as a case study to verify the model, according to the results, different assurance rates of schedules can be derived by the model which can present more flexible options for decision makers, and the highest assurance rate can reach 99%, which is much higher than that without considering output error, 48%. In addition, the model can greatly improve the power generation compared with the original reservoir operation scheme under the same confidence level and risk attitude. Therefore, the model proposed in this paper can significantly improve the effectiveness of power generation schedules and provide a more scientific reference for decision makers.

  20. Uncertainty Quantification of the FUN3D-Predicted NASA CRM Flutter Boundary

    NASA Technical Reports Server (NTRS)

    Stanford, Bret K.; Massey, Steven J.

    2017-01-01

    A nonintrusive point collocation method is used to propagate parametric uncertainties of the flexible Common Research Model, a generic transport configuration, through the unsteady aeroelastic CFD solver FUN3D. A range of random input variables are considered, including atmospheric flow variables, structural variables, and inertial (lumped mass) variables. UQ results are explored for a range of output metrics (with a focus on dynamic flutter stability), for both subsonic and transonic Mach numbers, for two different CFD mesh refinements. A particular focus is placed on computing failure probabilities: the probability that the wing will flutter within the flight envelope.

  1. Delay correlation analysis and representation for vital complaint VHDL models

    DOEpatents

    Rich, Marvin J.; Misra, Ashutosh

    2004-11-09

    A method and system unbind a rise/fall tuple of a VHDL generic variable and create rise time and fall time generics of each generic variable that are independent of each other. Then, according to a predetermined correlation policy, the method and system collect delay values in a VHDL standard delay file, sort the delay values, remove duplicate delay values, group the delay values into correlation sets, and output an analysis file. The correlation policy may include collecting all generic variables in a VHDL standard delay file, selecting each generic variable, and performing reductions on the set of delay values associated with each selected generic variable.

  2. Control design methods for floating wind turbines for optimal disturbance rejection

    NASA Astrophysics Data System (ADS)

    Lemmer, Frank; Schlipf, David; Cheng, Po Wen

    2016-09-01

    An analysis of the floating wind turbine as a multi-input-multi-output system investigating the effect of the control inputs on the system outputs is shown. These effects are compared to the ones of the disturbances from wind and waves in order to give insights for the selection of the control layout. The frequencies with the largest impact on the outputs due to limited effect of the controlled variables are identified. Finally, an optimal controller is designed as a benchmark and compared to a conventional PI-controller using only the rotor speed as input. Here, the previously found system properties, especially the difficulties to damp responses to wave excitation, are confirmed and verified through a spectral analysis with realistic environmental conditions. This comparison also assesses the quality of the employed simplified linear simulation model compared to the nonlinear model and shows that such an efficient frequency-domain evaluation for control design is feasible.

  3. Hierarchical stochastic modeling of large river ecosystems and fish growth across spatio-temporal scales and climate models: the Missouri River endangered pallid sturgeon example

    USGS Publications Warehouse

    Wildhaber, Mark L.; Wikle, Christopher K.; Moran, Edward H.; Anderson, Christopher J.; Franz, Kristie J.; Dey, Rima

    2017-01-01

    We present a hierarchical series of spatially decreasing and temporally increasing models to evaluate the uncertainty in the atmosphere – ocean global climate model (AOGCM) and the regional climate model (RCM) relative to the uncertainty in the somatic growth of the endangered pallid sturgeon (Scaphirhynchus albus). For effects on fish populations of riverine ecosystems, cli- mate output simulated by coarse-resolution AOGCMs and RCMs must be downscaled to basins to river hydrology to population response. One needs to transfer the information from these climate simulations down to the individual scale in a way that minimizes extrapolation and can account for spatio-temporal variability in the intervening stages. The goal is a framework to determine whether, given uncertainties in the climate models and the biological response, meaningful inference can still be made. The non-linear downscaling of climate information to the river scale requires that one realistically account for spatial and temporal variability across scale. Our down- scaling procedure includes the use of fixed/calibrated hydrological flow and temperature models coupled with a stochastically parameterized sturgeon bioenergetics model. We show that, although there is a large amount of uncertainty associated with both the climate model output and the fish growth process, one can establish significant differences in fish growth distributions between models, and between future and current climates for a given model.

  4. Calculating distributed glacier mass balance for the Swiss Alps from RCM output: Development and testing of downscaling and validation methods

    NASA Astrophysics Data System (ADS)

    Machguth, H.; Paul, F.; Kotlarski, S.; Hoelzle, M.

    2009-04-01

    Climate model output has been applied in several studies on glacier mass balance calculation. Hereby, computation of mass balance has mostly been performed at the native resolution of the climate model output or data from individual cells were selected and statistically downscaled. Little attention has been given to the issue of downscaling entire fields of climate model output to a resolution fine enough to compute glacier mass balance in rugged high-mountain terrain. In this study we explore the use of gridded output from a regional climate model (RCM) to drive a distributed mass balance model for the perimeter of the Swiss Alps and the time frame 1979-2003. Our focus lies on the development and testing of downscaling and validation methods. The mass balance model runs at daily steps and 100 m spatial resolution while the RCM REMO provides daily grids (approx. 18 km resolution) of dynamically downscaled re-analysis data. Interpolation techniques and sub-grid parametrizations are combined to bridge the gap in spatial resolution and to obtain daily input fields of air temperature, global radiation and precipitation. The meteorological input fields are compared to measurements at 14 high-elevation weather stations. Computed mass balances are compared to various sets of direct measurements, including stake readings and mass balances for entire glaciers. The validation procedure is performed separately for annual, winter and summer balances. Time series of mass balances for entire glaciers obtained from the model run agree well with observed time series. On the one hand, summer melt measured at stakes on several glaciers is well reproduced by the model, on the other hand, observed accumulation is either over- or underestimated. It is shown that these shifts are systematic and correlated to regional biases in the meteorological input fields. We conclude that the gap in spatial resolution is not a large drawback, while biases in RCM output are a major limitation to model performance. The development and testing of methods to reduce regionally variable biases in entire fields of RCM output should be a focus of pursuing studies.

  5. Downscaling of RCM outputs for representative catchments in the Mediterranean region, for the 1951-2100 time-frame

    NASA Astrophysics Data System (ADS)

    Deidda, Roberto; Marrocu, Marino; Pusceddu, Gabriella; Langousis, Andreas; Mascaro, Giuseppe; Caroletti, Giulio

    2013-04-01

    Within the activities of the EU FP7 CLIMB project (www.climb-fp7.eu), we developed downscaling procedures to reliably assess climate forcing at hydrologically relevant scales, and applied them to six representative hydrological basins located in the Mediterranean region: Riu Mannu and Noce in Italy, Chiba in Tunisia, Kocaeli in Turkey, Thau in France, and Gaza in Palestine. As a first step towards this aim, we used daily precipitation and temperature data from the gridded E-OBS project (www.ecad.eu/dailydata), as reference fields, to rank 14 Regional Climate Model (RCM) outputs from the ENSEMBLES project (http://ensembles-eu.metoffice.com). The four best performing model outputs were selected, with the additional constraint of maintaining 2 outputs obtained from running different RCMs driven by the same GCM, and 2 runs from the same RCM driven by different GCMs. For these four RCM-GCM model combinations, a set of downscaling techniques were developed and applied, for the period 1951-2100, to variables used in hydrological modeling (i.e. precipitation; mean, maximum and minimum daily temperatures; direct solar radiation, relative humidity, magnitude and direction of surface winds). The quality of the final products is discussed, together with the results obtained after applying a bias reduction procedure to daily temperature and precipitation fields.

  6. Observations and Models of Highly Intermittent Phytoplankton Distributions

    PubMed Central

    Mandal, Sandip; Locke, Christopher; Tanaka, Mamoru; Yamazaki, Hidekatsu

    2014-01-01

    The measurement of phytoplankton distributions in ocean ecosystems provides the basis for elucidating the influences of physical processes on plankton dynamics. Technological advances allow for measurement of phytoplankton data to greater resolution, displaying high spatial variability. In conventional mathematical models, the mean value of the measured variable is approximated to compare with the model output, which may misinterpret the reality of planktonic ecosystems, especially at the microscale level. To consider intermittency of variables, in this work, a new modelling approach to the planktonic ecosystem is applied, called the closure approach. Using this approach for a simple nutrient-phytoplankton model, we have shown how consideration of the fluctuating parts of model variables can affect system dynamics. Also, we have found a critical value of variance of overall fluctuating terms below which the conventional non-closure model and the mean value from the closure model exhibit the same result. This analysis gives an idea about the importance of the fluctuating parts of model variables and about when to use the closure approach. Comparisons of plot of mean versus standard deviation of phytoplankton at different depths, obtained using this new approach with real observations, give this approach good conformity. PMID:24787740

  7. Uncertainties propagation and global sensitivity analysis of the frequency response function of piezoelectric energy harvesters

    NASA Astrophysics Data System (ADS)

    Ruiz, Rafael O.; Meruane, Viviana

    2017-06-01

    The goal of this work is to describe a framework to propagate uncertainties in piezoelectric energy harvesters (PEHs). These uncertainties are related to the incomplete knowledge of the model parameters. The framework presented could be employed to conduct prior robust stochastic predictions. The prior analysis assumes a known probability density function for the uncertain variables and propagates the uncertainties to the output voltage. The framework is particularized to evaluate the behavior of the frequency response functions (FRFs) in PEHs, while its implementation is illustrated by the use of different unimorph and bimorph PEHs subjected to different scenarios: free of uncertainties, common uncertainties, and uncertainties as a product of imperfect clamping. The common variability associated with the PEH parameters are tabulated and reported. A global sensitivity analysis is conducted to identify the Sobol indices. Results indicate that the elastic modulus, density, and thickness of the piezoelectric layer are the most relevant parameters of the output variability. The importance of including the model parameter uncertainties in the estimation of the FRFs is revealed. In this sense, the present framework constitutes a powerful tool in the robust design and prediction of PEH performance.

  8. Spatial uncertainty analysis: Propagation of interpolation errors in spatially distributed models

    USGS Publications Warehouse

    Phillips, D.L.; Marks, D.G.

    1996-01-01

    In simulation modelling, it is desirable to quantify model uncertainties and provide not only point estimates for output variables but confidence intervals as well. Spatially distributed physical and ecological process models are becoming widely used, with runs being made over a grid of points that represent the landscape. This requires input values at each grid point, which often have to be interpolated from irregularly scattered measurement sites, e.g., weather stations. Interpolation introduces spatially varying errors which propagate through the model We extended established uncertainty analysis methods to a spatial domain for quantifying spatial patterns of input variable interpolation errors and how they propagate through a model to affect the uncertainty of the model output. We applied this to a model of potential evapotranspiration (PET) as a demonstration. We modelled PET for three time periods in 1990 as a function of temperature, humidity, and wind on a 10-km grid across the U.S. portion of the Columbia River Basin. Temperature, humidity, and wind speed were interpolated using kriging from 700- 1000 supporting data points. Kriging standard deviations (SD) were used to quantify the spatially varying interpolation uncertainties. For each of 5693 grid points, 100 Monte Carlo simulations were done, using the kriged values of temperature, humidity, and wind, plus random error terms determined by the kriging SDs and the correlations of interpolation errors among the three variables. For the spring season example, kriging SDs averaged 2.6??C for temperature, 8.7% for relative humidity, and 0.38 m s-1 for wind. The resultant PET estimates had coefficients of variation (CVs) ranging from 14% to 27% for the 10-km grid cells. Maps of PET means and CVs showed the spatial patterns of PET with a measure of its uncertainty due to interpolation of the input variables. This methodology should be applicable to a variety of spatially distributed models using interpolated inputs.

  9. Logic models to predict continuous outputs based on binary inputs with an application to personalized cancer therapy

    PubMed Central

    Knijnenburg, Theo A.; Klau, Gunnar W.; Iorio, Francesco; Garnett, Mathew J.; McDermott, Ultan; Shmulevich, Ilya; Wessels, Lodewyk F. A.

    2016-01-01

    Mining large datasets using machine learning approaches often leads to models that are hard to interpret and not amenable to the generation of hypotheses that can be experimentally tested. We present ‘Logic Optimization for Binary Input to Continuous Output’ (LOBICO), a computational approach that infers small and easily interpretable logic models of binary input features that explain a continuous output variable. Applying LOBICO to a large cancer cell line panel, we find that logic combinations of multiple mutations are more predictive of drug response than single gene predictors. Importantly, we show that the use of the continuous information leads to robust and more accurate logic models. LOBICO implements the ability to uncover logic models around predefined operating points in terms of sensitivity and specificity. As such, it represents an important step towards practical application of interpretable logic models. PMID:27876821

  10. Modeling the water isotopes in Greenland precipitation 1959-2001 with the meso-scale model REMO-iso

    NASA Astrophysics Data System (ADS)

    Sjolte, J.; Hoffmann, G.; Johnsen, S. J.; Vinther, B. M.; Masson-Delmotte, V.; Sturm, C.

    2011-09-01

    Ice core studies have proved the δ18O in Greenland precipitation to be correlated to the phase of the North Atlantic Oscillation (NAO). This subject has also been investigated in modeling studies. However, these studies have either had severe biases in the δ18O levels, or have not been designed to be compared directly with observations. In this study we nudge a meso-scale climate model fitted with stable water isotope diagnostics (REMO-iso) to follow the actual weather patterns for the period 1959-2001. We evaluate this simulation using meteorological observations from stations along the Greenland coast, and δ18O from several Greenland ice core stacks and Global Network In Precipitation (GNIP) data from Greenland, Iceland and Svalbard. The REMO-iso output explains up to 40% of the interannual δ18O variability observed in ice cores, which is comparable to the model performance for precipitation. In terms of reproducing the observed variability the global model, ECHAM4-iso performs on the same level as REMO-iso. However, REMO-iso has smaller biases in δ18O and improved representation of the observed spatial δ18O-temperature slope compared to ECHAM4-iso. Analysis of the main modes of winter variability of δ18O shows a coherent signal in Central and Western Greenland similar to results from ice cores. The NAO explains 20% of the leading δ18O pattern. Based on the model output we suggest that methods to reconstruct the NAO from Greenland ice cores employ both δ18O and accumulation records.

  11. Fuzzy model to estimate the number of hospitalizations for asthma and pneumonia under the effects of air pollution

    PubMed Central

    Chaves, Luciano Eustáquio; Nascimento, Luiz Fernando Costa; Rizol, Paloma Maria Silva Rocha

    2017-01-01

    ABSTRACT OBJECTIVE Predict the number of hospitalizations for asthma and pneumonia associated with exposure to air pollutants in the city of São José dos Campos, São Paulo State. METHODS This is a computational model using fuzzy logic based on Mamdani’s inference method. For the fuzzification of the input variables of particulate matter, ozone, sulfur dioxide and apparent temperature, we considered two relevancy functions for each variable with the linguistic approach: good and bad. For the output variable number of hospitalizations for asthma and pneumonia, we considered five relevancy functions: very low, low, medium, high and very high. DATASUS was our source for the number of hospitalizations in the year 2007 and the result provided by the model was correlated with the actual data of hospitalization with lag from zero to two days. The accuracy of the model was estimated by the ROC curve for each pollutant and in those lags. RESULTS In the year of 2007, 1,710 hospitalizations by pneumonia and asthma were recorded in São José dos Campos, State of São Paulo, with a daily average of 4.9 hospitalizations (SD = 2.9). The model output data showed positive and significant correlation (r = 0.38) with the actual data; the accuracies evaluated for the model were higher for sulfur dioxide in lag 0 and 2 and for particulate matter in lag 1. CONCLUSIONS Fuzzy modeling proved accurate for the pollutant exposure effects and hospitalization for pneumonia and asthma approach. PMID:28658366

  12. Computer Programs for the AUSEX (Aircraft Undersea Sound Experiment) Air-Water Acoustic Propagation Model.

    DTIC Science & Technology

    1976-01-28

    source-receiver geometry dynamics. For a given time instant, each of the subroutines outputs time variables ( emission time, arrival time...transmission loss, depression/elevation and azimuthal arrival angles, received frequency and range variables (range at emission time, range at arrival time...with the wind equal 24.5 kts. In the double bottom bounce regions, the emission angles (at the virtual surface source) are moderately small (15

  13. Tropical Dynamics Process Studies and Numerical Methods

    DTIC Science & Technology

    2011-06-16

    model. Model input and output arc defined in the Table below. Variable Description Ih Latent heat flux (W/ mA2 ) sh Sensible heat flux (W/ mA2 ) lwo...Net longwave flux (W/ mA2 ) swo Net shortwave flux (W/ mA2 ) 11 Wind speed (m/s) us Atmospheric friction velocity tb Bulk temperature (deg C) dtwo Warm

  14. Pressure model of a four-way spool valve for simulating electrohydraulic control systems

    NASA Technical Reports Server (NTRS)

    Gebben, V. D.

    1976-01-01

    An equation that relates the pressure flow characteristics of hydraulic spool valves was developed. The dependent variable is valve output pressure, and the independent variables are spool position and flow. This causal form of equation is preferred in applications that simulate the effects of hydraulic line dynamics. Results from this equation are compared with those from the conventional valve equation, whose dependent variable is flow. A computer program of the valve equations includes spool stops, leakage spool clearances, and dead-zone characteristics of overlap spools.

  15. SCI model structure determination program (OSR) user's guide. [optimal subset regression

    NASA Technical Reports Server (NTRS)

    1979-01-01

    The computer program, OSR (Optimal Subset Regression) which estimates models for rotorcraft body and rotor force and moment coefficients is described. The technique used is based on the subset regression algorithm. Given time histories of aerodynamic coefficients, aerodynamic variables, and control inputs, the program computes correlation between various time histories. The model structure determination is based on these correlations. Inputs and outputs of the program are given.

  16. Mathematical modeling and characteristic analysis for over-under turbine based combined cycle engine

    NASA Astrophysics Data System (ADS)

    Ma, Jingxue; Chang, Juntao; Ma, Jicheng; Bao, Wen; Yu, Daren

    2018-07-01

    The turbine based combined cycle engine has become the most promising hypersonic airbreathing propulsion system for its superiority of ground self-starting, wide flight envelop and reusability. The simulation model of the turbine based combined cycle engine plays an important role in the research of performance analysis and control system design. In this paper, a turbine based combined cycle engine mathematical model is built on the Simulink platform, including a dual-channel air intake system, a turbojet engine and a ramjet. It should be noted that the model of the air intake system is built based on computational fluid dynamics calculation, which provides valuable raw data for modeling of the turbine based combined cycle engine. The aerodynamic characteristics of turbine based combined cycle engine in turbojet mode, ramjet mode and mode transition process are studied by the mathematical model, and the influence of dominant variables on performance and safety of the turbine based combined cycle engine is analyzed. According to the stability requirement of thrust output and the safety in the working process of turbine based combined cycle engine, a control law is proposed that could guarantee the steady output of thrust by controlling the control variables of the turbine based combined cycle engine in the whole working process.

  17. Price elasticity reconsidered: Panel estimation of an agricultural water demand function

    NASA Astrophysics Data System (ADS)

    Schoengold, Karina; Sunding, David L.; Moreno, Georgina

    2006-09-01

    Using panel data from a period of water rate reform, this paper estimates the price elasticity of irrigation water demand. Price elasticity is decomposed into the direct effect of water management and the indirect effect of water price on choice of output and irrigation technology. The model is estimated using an instrumental variables strategy to account for the endogeneity of technology and output choices in the water demand equation. Estimation results indicate that the price elasticity of agricultural water demand is -0.79, which is greater than that found in previous studies.

  18. Thermospheric dynamics - A system theory approach

    NASA Technical Reports Server (NTRS)

    Codrescu, M.; Forbes, J. M.; Roble, R. G.

    1990-01-01

    A system theory approach to thermospheric modeling is developed, based upon a linearization method which is capable of preserving nonlinear features of a dynamical system. The method is tested using a large, nonlinear, time-varying system, namely the thermospheric general circulation model (TGCM) of the National Center for Atmospheric Research. In the linearized version an equivalent system, defined for one of the desired TGCM output variables, is characterized by a set of response functions that is constructed from corresponding quasi-steady state and unit sample response functions. The linearized version of the system runs on a personal computer and produces an approximation of the desired TGCM output field height profile at a given geographic location.

  19. The Signature of Southern Hemisphere Atmospheric Circulation Patterns in Antarctic Precipitation

    PubMed Central

    Thompson, David W. J.; van den Broeke, Michiel R.

    2017-01-01

    Abstract We provide the first comprehensive analysis of the relationships between large‐scale patterns of Southern Hemisphere climate variability and the detailed structure of Antarctic precipitation. We examine linkages between the high spatial resolution precipitation from a regional atmospheric model and four patterns of large‐scale Southern Hemisphere climate variability: the southern baroclinic annular mode, the southern annular mode, and the two Pacific‐South American teleconnection patterns. Variations in all four patterns influence the spatial configuration of precipitation over Antarctica, consistent with their signatures in high‐latitude meridional moisture fluxes. They impact not only the mean but also the incidence of extreme precipitation events. Current coupled‐climate models are able to reproduce all four patterns of atmospheric variability but struggle to correctly replicate their regional impacts on Antarctic climate. Thus, linking these patterns directly to Antarctic precipitation variability may allow a better estimate of future changes in precipitation than using model output alone. PMID:29398735

  20. Uncertainty Quantification in Simulations of Epidemics Using Polynomial Chaos

    PubMed Central

    Santonja, F.; Chen-Charpentier, B.

    2012-01-01

    Mathematical models based on ordinary differential equations are a useful tool to study the processes involved in epidemiology. Many models consider that the parameters are deterministic variables. But in practice, the transmission parameters present large variability and it is not possible to determine them exactly, and it is necessary to introduce randomness. In this paper, we present an application of the polynomial chaos approach to epidemiological mathematical models based on ordinary differential equations with random coefficients. Taking into account the variability of the transmission parameters of the model, this approach allows us to obtain an auxiliary system of differential equations, which is then integrated numerically to obtain the first-and the second-order moments of the output stochastic processes. A sensitivity analysis based on the polynomial chaos approach is also performed to determine which parameters have the greatest influence on the results. As an example, we will apply the approach to an obesity epidemic model. PMID:22927889

  1. Parameter and model uncertainty in a life-table model for fine particles (PM2.5): a statistical modeling study

    PubMed Central

    Tainio, Marko; Tuomisto, Jouni T; Hänninen, Otto; Ruuskanen, Juhani; Jantunen, Matti J; Pekkanen, Juha

    2007-01-01

    Background The estimation of health impacts involves often uncertain input variables and assumptions which have to be incorporated into the model structure. These uncertainties may have significant effects on the results obtained with model, and, thus, on decision making. Fine particles (PM2.5) are believed to cause major health impacts, and, consequently, uncertainties in their health impact assessment have clear relevance to policy-making. We studied the effects of various uncertain input variables by building a life-table model for fine particles. Methods Life-expectancy of the Helsinki metropolitan area population and the change in life-expectancy due to fine particle exposures were predicted using a life-table model. A number of parameter and model uncertainties were estimated. Sensitivity analysis for input variables was performed by calculating rank-order correlations between input and output variables. The studied model uncertainties were (i) plausibility of mortality outcomes and (ii) lag, and parameter uncertainties (iii) exposure-response coefficients for different mortality outcomes, and (iv) exposure estimates for different age groups. The monetary value of the years-of-life-lost and the relative importance of the uncertainties related to monetary valuation were predicted to compare the relative importance of the monetary valuation on the health effect uncertainties. Results The magnitude of the health effects costs depended mostly on discount rate, exposure-response coefficient, and plausibility of the cardiopulmonary mortality. Other mortality outcomes (lung cancer, other non-accidental and infant mortality) and lag had only minor impact on the output. The results highlight the importance of the uncertainties associated with cardiopulmonary mortality in the fine particle impact assessment when compared with other uncertainties. Conclusion When estimating life-expectancy, the estimates used for cardiopulmonary exposure-response coefficient, discount rate, and plausibility require careful assessment, while complicated lag estimates can be omitted without this having any major effect on the results. PMID:17714598

  2. Parameter and model uncertainty in a life-table model for fine particles (PM2.5): a statistical modeling study.

    PubMed

    Tainio, Marko; Tuomisto, Jouni T; Hänninen, Otto; Ruuskanen, Juhani; Jantunen, Matti J; Pekkanen, Juha

    2007-08-23

    The estimation of health impacts involves often uncertain input variables and assumptions which have to be incorporated into the model structure. These uncertainties may have significant effects on the results obtained with model, and, thus, on decision making. Fine particles (PM2.5) are believed to cause major health impacts, and, consequently, uncertainties in their health impact assessment have clear relevance to policy-making. We studied the effects of various uncertain input variables by building a life-table model for fine particles. Life-expectancy of the Helsinki metropolitan area population and the change in life-expectancy due to fine particle exposures were predicted using a life-table model. A number of parameter and model uncertainties were estimated. Sensitivity analysis for input variables was performed by calculating rank-order correlations between input and output variables. The studied model uncertainties were (i) plausibility of mortality outcomes and (ii) lag, and parameter uncertainties (iii) exposure-response coefficients for different mortality outcomes, and (iv) exposure estimates for different age groups. The monetary value of the years-of-life-lost and the relative importance of the uncertainties related to monetary valuation were predicted to compare the relative importance of the monetary valuation on the health effect uncertainties. The magnitude of the health effects costs depended mostly on discount rate, exposure-response coefficient, and plausibility of the cardiopulmonary mortality. Other mortality outcomes (lung cancer, other non-accidental and infant mortality) and lag had only minor impact on the output. The results highlight the importance of the uncertainties associated with cardiopulmonary mortality in the fine particle impact assessment when compared with other uncertainties. When estimating life-expectancy, the estimates used for cardiopulmonary exposure-response coefficient, discount rate, and plausibility require careful assessment, while complicated lag estimates can be omitted without this having any major effect on the results.

  3. Digital Troposcatter Performance Model: Software Documentation.

    DTIC Science & Technology

    1983-11-28

    Instantanous detection SNR. Output arguments: OUTISI R*4 Conditional outage probabilit.-. IERR 1*2 Error flag. Global variables input from commor IRSN /NUNPAR...meaning onlv when ITOFF = 3. - Possiblv given a new value in the following subprograms: TRANSF IRSN /NUMPAR/ 1*2 NUMPAR.INC Number of values in SNR

  4. The role of environmental variables on the efficiency of water and sewerage companies: a case study of Chile.

    PubMed

    Molinos-Senante, María; Sala-Garrido, Ramón; Lafuente, Matilde

    2015-07-01

    This paper evaluates the efficiency of water and sewerage companies (WaSCs) by introducing the lack of service quality as undesirable outputs. It also investigates whether the production frontier of WaSCs is overall constant returns to scale (CRS) or variable returns to scale (VRS) by using two different data envelopment analysis models. In a second-stage analysis, we study the influence of exogenous and endogenous variables on WaSC performance by applying non-parametric hypothesis tests. In a pioneering approach, the analysis covers 18 WaSCs from Chile, representing about 90% of the Chilean urban population. The results evidence that the technology of the sample studied is characterized overall by CRS. Peak water demand, the percentage of external workers, and the percentage of unbilled water are the factors affecting the efficiency of WaSCs. From a policy perspective, the integration of undesirable outputs into the assessment of WaSC performance is crucial not to penalize companies that provide high service quality to customers.

  5. Control and Analysis for a Self-Excited Induction Generator for Wind Turbine and Electrolyzer Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Muljadi, Eduard; Na, Woonki; Leighty, Bill

    Self-Excited Induction Generation(SEIG) is very rugged, simple, lightweight, and it is easy and inexpensive to implement, very simple to control, and requires a very little maintenance. In this variable-speed operation, the SEIG needs a power electronics interface to convert from the variable frequency output voltage of the generator to a DC output voltage for battery or other DC applications. In our study, a SEIG is connected to the power electronics interface such as diode rectifier and DC/DC converter and then an electrolyzer is connected as a final DC load for fuel cell applications. An equivalent circuit model for an electrolyzermore » is utilized for our application. The control and analysis for the proposed system is carried out by using PSCAD and MATLAB software. This study would be useful for designing and control analysis of power interface circuits for SEIG for a variable speed wind turbine generation with fuel cell applications before the actual implementation.« less

  6. A Web Application For Visualizing Empirical Models of the Space-Atmosphere Interface Region: AtModWeb

    NASA Astrophysics Data System (ADS)

    Knipp, D.; Kilcommons, L. M.; Damas, M. C.

    2015-12-01

    We have created a simple and user-friendly web application to visualize output from empirical atmospheric models that describe the lower atmosphere and the Space-Atmosphere Interface Region (SAIR). The Atmospheric Model Web Explorer (AtModWeb) is a lightweight, multi-user, Python-driven application which uses standard web technology (jQuery, HTML5, CSS3) to give an in-browser interface that can produce plots of modeled quantities such as temperature and individual species and total densities of neutral and ionized upper-atmosphere. Output may be displayed as: 1) a contour plot over a map projection, 2) a pseudo-color plot (heatmap) which allows visualization of a variable as a function of two spatial coordinates, or 3) a simple line plot of one spatial coordinate versus any number of desired model output variables. The application is designed around an abstraction of an empirical atmospheric model, essentially treating the model code as a black box, which makes it simple to add additional models without modifying the main body of the application. Currently implemented are the Naval Research Laboratory NRLMSISE00 model for neutral atmosphere and the International Reference Ionosphere (IRI). These models are relevant to the Low Earth Orbit environment and the SAIR. The interface is simple and usable, allowing users (students and experts) to specify time and location, and choose between historical (i.e. the values for the given date) or manual specification of whichever solar or geomagnetic activity drivers are required by the model. We present a number of use-case examples from research and education: 1) How does atmospheric density between the surface and 1000 km vary with time of day, season and solar cycle?; 2) How do ionospheric layers change with the solar cycle?; 3 How does the composition of the SAIR vary between day and night at a fixed altitude?

  7. VARIABLE TIME-INTERVAL GENERATOR

    DOEpatents

    Gross, J.E.

    1959-10-31

    This patent relates to a pulse generator and more particularly to a time interval generator wherein the time interval between pulses is precisely determined. The variable time generator comprises two oscillators with one having a variable frequency output and the other a fixed frequency output. A frequency divider is connected to the variable oscillator for dividing its frequency by a selected factor and a counter is used for counting the periods of the fixed oscillator occurring during a cycle of the divided frequency of the variable oscillator. This defines the period of the variable oscillator in terms of that of the fixed oscillator. A circuit is provided for selecting as a time interval a predetermined number of periods of the variable oscillator. The output of the generator consists of a first pulse produced by a trigger circuit at the start of the time interval and a second pulse marking the end of the time interval produced by the same trigger circuit.

  8. Exploring the impact of forcing error characteristics on physically based snow simulations within a global sensitivity analysis framework

    NASA Astrophysics Data System (ADS)

    Raleigh, M. S.; Lundquist, J. D.; Clark, M. P.

    2015-07-01

    Physically based models provide insights into key hydrologic processes but are associated with uncertainties due to deficiencies in forcing data, model parameters, and model structure. Forcing uncertainty is enhanced in snow-affected catchments, where weather stations are scarce and prone to measurement errors, and meteorological variables exhibit high variability. Hence, there is limited understanding of how forcing error characteristics affect simulations of cold region hydrology and which error characteristics are most important. Here we employ global sensitivity analysis to explore how (1) different error types (i.e., bias, random errors), (2) different error probability distributions, and (3) different error magnitudes influence physically based simulations of four snow variables (snow water equivalent, ablation rates, snow disappearance, and sublimation). We use the Sobol' global sensitivity analysis, which is typically used for model parameters but adapted here for testing model sensitivity to coexisting errors in all forcings. We quantify the Utah Energy Balance model's sensitivity to forcing errors with 1 840 000 Monte Carlo simulations across four sites and five different scenarios. Model outputs were (1) consistently more sensitive to forcing biases than random errors, (2) generally less sensitive to forcing error distributions, and (3) critically sensitive to different forcings depending on the relative magnitude of errors. For typical error magnitudes found in areas with drifting snow, precipitation bias was the most important factor for snow water equivalent, ablation rates, and snow disappearance timing, but other forcings had a more dominant impact when precipitation uncertainty was due solely to gauge undercatch. Additionally, the relative importance of forcing errors depended on the model output of interest. Sensitivity analysis can reveal which forcing error characteristics matter most for hydrologic modeling.

  9. A review of surrogate models and their application to groundwater modeling

    NASA Astrophysics Data System (ADS)

    Asher, M. J.; Croke, B. F. W.; Jakeman, A. J.; Peeters, L. J. M.

    2015-08-01

    The spatially and temporally variable parameters and inputs to complex groundwater models typically result in long runtimes which hinder comprehensive calibration, sensitivity, and uncertainty analysis. Surrogate modeling aims to provide a simpler, and hence faster, model which emulates the specified output of a more complex model in function of its inputs and parameters. In this review paper, we summarize surrogate modeling techniques in three categories: data-driven, projection, and hierarchical-based approaches. Data-driven surrogates approximate a groundwater model through an empirical model that captures the input-output mapping of the original model. Projection-based models reduce the dimensionality of the parameter space by projecting the governing equations onto a basis of orthonormal vectors. In hierarchical or multifidelity methods the surrogate is created by simplifying the representation of the physical system, such as by ignoring certain processes, or reducing the numerical resolution. In discussing the application to groundwater modeling of these methods, we note several imbalances in the existing literature: a large body of work on data-driven approaches seemingly ignores major drawbacks to the methods; only a fraction of the literature focuses on creating surrogates to reproduce outputs of fully distributed groundwater models, despite these being ubiquitous in practice; and a number of the more advanced surrogate modeling methods are yet to be fully applied in a groundwater modeling context.

  10. Applications of the DOE/NASA wind turbine engineering information system

    NASA Technical Reports Server (NTRS)

    Neustadter, H. E.; Spera, D. A.

    1981-01-01

    A statistical analysis of data obtained from the Technology and Engineering Information Systems was made. The systems analyzed consist of the following elements: (1) sensors which measure critical parameters (e.g., wind speed and direction, output power, blade loads and component vibrations); (2) remote multiplexing units (RMUs) on each wind turbine which frequency-modulate, multiplex and transmit sensor outputs; (3) on-site instrumentation to record, process and display the sensor output; and (4) statistical analysis of data. Two examples of the capabilities of these systems are presented. The first illustrates the standardized format for application of statistical analysis to each directly measured parameter. The second shows the use of a model to estimate the variability of the rotor thrust loading, which is a derived parameter.

  11. Robust integral variable structure controller and pulse-width pulse-frequency modulated input shaper design for flexible spacecraft with mismatched uncertainty/disturbance.

    PubMed

    Hu, Qinglei

    2007-10-01

    This paper presents a dual-stage control system design method for the flexible spacecraft attitude maneuvering control by use of on-off thrusters and active vibration control by input shaper. In this design approach, attitude control system and vibration suppression were designed separately using lower order model. As a stepping stone, an integral variable structure controller with the assumption of knowing the upper bounds of the mismatched lumped perturbation has been designed which ensures exponential convergence of attitude angle and angular velocity in the presence of bounded uncertainty/disturbances. To reconstruct estimates of the system states for use in a full information variable structure control law, an asymptotic variable structure observer is also employed. In addition, the thruster output is modulated in pulse-width pulse-frequency so that the output profile is similar to the continuous control histories. For actively suppressing the induced vibration, the input shaping technique is used to modify the existing command so that less vibration will be caused by the command itself, which only requires information about the vibration frequency and damping of the closed-loop system. The rationale behind this hybrid control scheme is that the integral variable structure controller can achieve good precision pointing, even in the presence of uncertainties/disturbances, whereas the shaped input attenuator is applied to actively suppress the undesirable vibrations excited by the rapid maneuvers. Simulation results for the spacecraft model show precise attitude control and vibration suppression.

  12. A Mathematical Model of a Simple Amplifier Using a Ferroelectric Transistor

    NASA Technical Reports Server (NTRS)

    Sayyah, Rana; Hunt, Mitchell; MacLeod, Todd C.; Ho, Fat D.

    2009-01-01

    This paper presents a mathematical model characterizing the behavior of a simple amplifier using a FeFET. The model is based on empirical data and incorporates several variables that affect the output, including frequency, load resistance, and gate-to-source voltage. Since the amplifier is the basis of many circuit configurations, a mathematical model that describes the behavior of a FeFET-based amplifier will help in the integration of FeFETs into many other circuits.

  13. Effect of pedal rate and power output on rating of perceived exertion during cycle ergometry exercise.

    PubMed

    Hamer, Mark; Boutcher, Yati N; Boutcher, Stephen H

    2005-12-01

    This study examined differentiated rating of perceived exertion (RPE), heart rate, and heart-rate variability during light cycle ergometry exercise at two different pedal rates. 30 healthy men (22.6 +/- 0.9 yr.) were recruited from a student population and completed a continuous 20-min. cycle ergometry exercise protocol, consisting of a 4-min. warm-up (60 rev./min., 30 Watts), followed by four bouts of 4 min. at different combinations of pedal rate (40 or 80 rev./min.) and power output (40 or 80 Watts). The order of the four combinations was counterbalanced across participants. Heart rate was measured using a polar heart-rate monitor, and parasympathetic balance was assessed through time series analysis of heart-rate variability. Measures were compared using a 2 (pedal rate) x 2 (power output) repeated-measures analysis of variance. RPE was significantly greater (p<.05) at 80 versus 40 rev./min. at 40 W. For both power outputs heart rate was significantly increased, and the high frequency component of heart-rate variability was significantly reduced at 80 compared with 40 rev./min. These findings indicate the RPE was greater at higher than at lower pedalling rates for a light absolute power output which contrasts with previous findings based on use of higher power output. Also, pedal rate had a significant effect on heart rate and heart-rate variability at constant power output.

  14. Inverse modeling of geochemical and mechanical compaction in sedimentary basins

    NASA Astrophysics Data System (ADS)

    Colombo, Ivo; Porta, Giovanni Michele; Guadagnini, Alberto

    2015-04-01

    We study key phenomena driving the feedback between sediment compaction processes and fluid flow in stratified sedimentary basins formed through lithification of sand and clay sediments after deposition. Processes we consider are mechanic compaction of the host rock and the geochemical compaction due to quartz cementation in sandstones. Key objectives of our study include (i) the quantification of the influence of the uncertainty of the model input parameters on the model output and (ii) the application of an inverse modeling technique to field scale data. Proper accounting of the feedback between sediment compaction processes and fluid flow in the subsurface is key to quantify a wide set of environmentally and industrially relevant phenomena. These include, e.g., compaction-driven brine and/or saltwater flow at deep locations and its influence on (a) tracer concentrations observed in shallow sediments, (b) build up of fluid overpressure, (c) hydrocarbon generation and migration, (d) subsidence due to groundwater and/or hydrocarbons withdrawal, and (e) formation of ore deposits. Main processes driving the diagenesis of sediments after deposition are mechanical compaction due to overburden and precipitation/dissolution associated with reactive transport. The natural evolution of sedimentary basins is characterized by geological time scales, thus preventing direct and exhaustive measurement of the system dynamical changes. The outputs of compaction models are plagued by uncertainty because of the incomplete knowledge of the models and parameters governing diagenesis. Development of robust methodologies for inverse modeling and parameter estimation under uncertainty is therefore crucial to the quantification of natural compaction phenomena. We employ a numerical methodology based on three building blocks: (i) space-time discretization of the compaction process; (ii) representation of target output variables through a Polynomial Chaos Expansion (PCE); and (iii) model inversion (parameter estimation) within a maximum likelihood framework. In this context, the PCE-based surrogate model enables one to (i) minimize the computational cost associated with the (forward and inverse) modeling procedures leading to uncertainty quantification and parameter estimation, and (ii) compute the full set of Sobol indices quantifying the contribution of each uncertain parameter to the variability of target state variables. Results are illustrated through the simulation of one-dimensional test cases. The analyses focuses on the calibration of model parameters through literature field cases. The quality of parameter estimates is then analyzed as a function of number, type and location of data.

  15. A new dynamical downscaling approach with GCM bias corrections and spectral nudging

    NASA Astrophysics Data System (ADS)

    Xu, Zhongfeng; Yang, Zong-Liang

    2015-04-01

    To improve confidence in regional projections of future climate, a new dynamical downscaling (NDD) approach with both general circulation model (GCM) bias corrections and spectral nudging is developed and assessed over North America. GCM biases are corrected by adjusting GCM climatological means and variances based on reanalysis data before the GCM output is used to drive a regional climate model (RCM). Spectral nudging is also applied to constrain RCM-based biases. Three sets of RCM experiments are integrated over a 31 year period. In the first set of experiments, the model configurations are identical except that the initial and lateral boundary conditions are derived from either the original GCM output, the bias-corrected GCM output, or the reanalysis data. The second set of experiments is the same as the first set except spectral nudging is applied. The third set of experiments includes two sensitivity runs with both GCM bias corrections and nudging where the nudging strength is progressively reduced. All RCM simulations are assessed against North American Regional Reanalysis. The results show that NDD significantly improves the downscaled mean climate and climate variability relative to other GCM-driven RCM downscaling approach in terms of climatological mean air temperature, geopotential height, wind vectors, and surface air temperature variability. In the NDD approach, spectral nudging introduces the effects of GCM bias corrections throughout the RCM domain rather than just limiting them to the initial and lateral boundary conditions, thereby minimizing climate drifts resulting from both the GCM and RCM biases.

  16. On representation of temporal variability in electricity capacity planning models

    DOE PAGES

    Merrick, James H.

    2016-08-23

    This study systematically investigates how to represent intra-annual temporal variability in models of optimum electricity capacity investment. Inappropriate aggregation of temporal resolution can introduce substantial error into model outputs and associated economic insight. The mechanisms underlying the introduction of this error are shown. How many representative periods are needed to fully capture the variability is then investigated. For a sample dataset, a scenario-robust aggregation of hourly (8760) resolution is possible in the order of 10 representative hours when electricity demand is the only source of variability. The inclusion of wind and solar supply variability increases the resolution of the robustmore » aggregation to the order of 1000. A similar scale of expansion is shown for representative days and weeks. These concepts can be applied to any such temporal dataset, providing, at the least, a benchmark that any other aggregation method can aim to emulate. Finally, how prior information about peak pricing hours can potentially reduce resolution further is also discussed.« less

  17. On representation of temporal variability in electricity capacity planning models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Merrick, James H.

    This study systematically investigates how to represent intra-annual temporal variability in models of optimum electricity capacity investment. Inappropriate aggregation of temporal resolution can introduce substantial error into model outputs and associated economic insight. The mechanisms underlying the introduction of this error are shown. How many representative periods are needed to fully capture the variability is then investigated. For a sample dataset, a scenario-robust aggregation of hourly (8760) resolution is possible in the order of 10 representative hours when electricity demand is the only source of variability. The inclusion of wind and solar supply variability increases the resolution of the robustmore » aggregation to the order of 1000. A similar scale of expansion is shown for representative days and weeks. These concepts can be applied to any such temporal dataset, providing, at the least, a benchmark that any other aggregation method can aim to emulate. Finally, how prior information about peak pricing hours can potentially reduce resolution further is also discussed.« less

  18. Use of statistically and dynamically downscaled atmospheric model output for hydrologic simulations in three mountainous basins in the western United States

    USGS Publications Warehouse

    Hay, L.E.; Clark, M.P.

    2003-01-01

    This paper examines the hydrologic model performance in three snowmelt-dominated basins in the western United States to dynamically- and statistically downscaled output from the National Centers for Environmental Prediction/National Center for Atmospheric Research Reanalysis (NCEP). Runoff produced using a distributed hydrologic model is compared using daily precipitation and maximum and minimum temperature timeseries derived from the following sources: (1) NCEP output (horizontal grid spacing of approximately 210 km); (2) dynamically downscaled (DDS) NCEP output using a Regional Climate Model (RegCM2, horizontal grid spacing of approximately 52 km); (3) statistically downscaled (SDS) NCEP output; (4) spatially averaged measured data used to calibrate the hydrologic model (Best-Sta) and (5) spatially averaged measured data derived from stations located within the area of the RegCM2 model output used for each basin, but excluding Best-Sta set (All-Sta). In all three basins the SDS-based simulations of daily runoff were as good as runoff produced using the Best-Sta timeseries. The NCEP, DDS, and All-Sta timeseries were able to capture the gross aspects of the seasonal cycles of precipitation and temperature. However, in all three basins, the NCEP-, DDS-, and All-Sta-based simulations of runoff showed little skill on a daily basis. When the precipitation and temperature biases were corrected in the NCEP, DDS, and All-Sta timeseries, the accuracy of the daily runoff simulations improved dramatically, but, with the exception of the bias-corrected All-Sta data set, these simulations were never as accurate as the SDS-based simulations. This need for a bias correction may be somewhat troubling, but in the case of the large station-timeseries (All-Sta), the bias correction did indeed 'correct' for the change in scale. It is unknown if bias corrections to model output will be valid in a future climate. Future work is warranted to identify the causes for (and removal of) systematic biases in DDS simulations, and improve DDS simulations of daily variability in local climate. Until then, SDS based simulations of runoff appear to be the safer downscaling choice.

  19. Linked population and economic models: some methodological issues in forecasting, analysis, and policy optimization.

    PubMed

    Madden, M; Batey Pwj

    1983-05-01

    Some problems associated with demographic-economic forecasting include finding models appropriate for a declining economy with unemployment, using a multiregional approach in an interregional model, finding a way to show differential consumption while endogenizing unemployment, and avoiding unemployment inconsistencies. The solution to these problems involves the construction of an activity-commodity framework, locating it within a group of forecasting models, and indicating possible ratios towards dynamization of the framework. The authors demonstrate the range of impact multipliers that can be derived from the framework and show how these multipliers relate to Leontief input-output multipliers. It is shown that desired population distribution may be obtained by selecting instruments from the economic sphere to produce, through the constraints vector of an activity-commodity framework, targets selected from demographic activities. The next step in this process, empirical exploitation, was carried out by the authors in the United Kingdom, linking an input-output model with a wide selection of demographic and demographic-economic variables. The generally tenuous control which government has over any variables in systems of this type, especially in market economies, makes application in the policy field of the optimization approach a partly conjectural exercise, although the analytic capacity of the approach can provide clear indications of policy directions.

  20. A Framework for the Analysis of the Reserve Officer Augmentation Process in the United States Marine Corps

    DTIC Science & Technology

    1987-12-01

    occupation group, category (i.e., strength, loss, etc.), years of commissioned service (YCS), grade, occupation, source of commission, education, sex ...OF MCORP OUTPUT OCCUPATION GROUP: All CAT: Strength YCS: 01 - 09 GRADE: All Unrestricted Officers OCCUPATION: All SOURCE: All EDUCATION: All SEX : All...source of commission, sex , MOS, GCT, and other pertinent variables such as the performance index. A Probit or Logit model could be utilized. The variables

  1. Overview of global climate change and carbon sequestration

    Treesearch

    Kurt Johnsen

    2004-01-01

    The potential influence of global climate change on southern forests is uncertain. Outputs of climate change models differ considerably in their projections for precipitation and other variables that affect forests. Forest responses, particularly effects on competition among species, are difficult to assess. Even the responses of relatively simple ecosystems, such as...

  2. Exchange Rates and Fundamentals.

    ERIC Educational Resources Information Center

    Engel, Charles; West, Kenneth D.

    2005-01-01

    We show analytically that in a rational expectations present-value model, an asset price manifests near-random walk behavior if fundamentals are I (1) and the factor for discounting future fundamentals is near one. We argue that this result helps explain the well-known puzzle that fundamental variables such as relative money supplies, outputs,…

  3. Estimation of Hidden State Variables of the Intracranial System Using Constrained Nonlinear Kalman Filters

    PubMed Central

    Nenov, Valeriy; Bergsneider, Marvin; Glenn, Thomas C.; Vespa, Paul; Martin, Neil

    2007-01-01

    Impeded by the rigid skull, assessment of physiological variables of the intracranial system is difficult. A hidden state estimation approach is used in the present work to facilitate the estimation of unobserved variables from available clinical measurements including intracranial pressure (ICP) and cerebral blood flow velocity (CBFV). The estimation algorithm is based on a modified nonlinear intracranial mathematical model, whose parameters are first identified in an offline stage using a nonlinear optimization paradigm. Following the offline stage, an online filtering process is performed using a nonlinear Kalman filter (KF)-like state estimator that is equipped with a new way of deriving the Kalman gain satisfying the physiological constraints on the state variables. The proposed method is then validated by comparing different state estimation methods and input/output (I/O) configurations using simulated data. It is also applied to a set of CBFV, ICP and arterial blood pressure (ABP) signal segments from brain injury patients. The results indicated that the proposed constrained nonlinear KF achieved the best performance among the evaluated state estimators and that the state estimator combined with the I/O configuration that has ICP as the measured output can potentially be used to estimate CBFV continuously. Finally, the state estimator combined with the I/O configuration that has both ICP and CBFV as outputs can potentially estimate the lumped cerebral arterial radii, which are not measurable in a typical clinical environment. PMID:17281533

  4. Predicting the effects of magnesium oxide nanoparticles and temperature on the thermal conductivity of water using artificial neural network and experimental data

    NASA Astrophysics Data System (ADS)

    Afrand, Masoud; Hemmat Esfe, Mohammad; Abedini, Ehsan; Teimouri, Hamid

    2017-03-01

    The current paper first presents an empirical correlation based on experimental results for estimating thermal conductivity enhancement of MgO-water nanofluid using curve fitting method. Then, artificial neural networks (ANNs) with various numbers of neurons have been assessed by considering temperature and MgO volume fraction as the inputs variables and thermal conductivity enhancement as the output variable to select the most appropriate and optimized network. Results indicated that the network with 7 neurons had minimum error. Eventually, the output of artificial neural network was compared with the results of the proposed empirical correlation and those of the experiments. Comparisons revealed that ANN modeling was more accurate than curve-fitting method in the predicting the thermal conductivity enhancement of the nanofluid.

  5. Toward a Geoscientific Semantic Web Based on How Geoscientists Talk Across Disciplines

    NASA Astrophysics Data System (ADS)

    Peckham, S. D.

    2015-12-01

    Are there terms and scientific concepts from math and science that almost all geoscientists understand? Is there a limited set of terms, patterns and language elements that geoscientists use for efficient, unambiguous communication that could be used to describe the variables that they measure, store in data sets and use as model inputs and outputs? In this talk it will be argued that the answer to both questions is "yes" by drawing attention to many such patterns and then showing how they have been used to create a rich set of naming conventions for variables called the CSDMS Standard Names. Variables, which store numerical quantities associated with specific objects, are the fundamental currency of science. They are the items that are measured and saved in data sets, which may then be read into models. They are the inputs and outputs of models and the items exchanged between coupled models. They also star in the equations that summarize our scientific knowledge. Carefully constructed, unambiguous and unique labels for commonly used variables therefore provide an attractive mechanism for automatic semantic mediation when variables are to be shared between heterogeous resources. They provide a means to automatically check for semantic equivalence so that variables can be safely shared in resource compositions. A good set of standardized variable names can serve as the hub in a hub-and-spoke solution to semantic mediation, where the "internal vocabularies" of geoscience resources (i.e. data sets and models) are mapped to and from the hub to facilitate interoperability and data sharing. When built from patterns and terms that most geoscientists are already familiar with, these standardized variable names are then "readable" by both humans and machines. Despite the importance of variables in scientific work, most of the ontological work in the geosciences is focused at a higher level that supports finding resources (e.g data sets) but not on describing the contents of those resources. The CSDMS Standard Names have matured continuously since they were first introduced over three years ago. Many recent extensions and applications of them (e.g. different science domains, different projects, new rules, ontological work) as well as their compatibility with the International System of Quantities (ISO 80000) will be discussed.

  6. A variable-gain output feedback control design methodology

    NASA Technical Reports Server (NTRS)

    Halyo, Nesim; Moerder, Daniel D.; Broussard, John R.; Taylor, Deborah B.

    1989-01-01

    A digital control system design technique is developed in which the control system gain matrix varies with the plant operating point parameters. The design technique is obtained by formulating the problem as an optimal stochastic output feedback control law with variable gains. This approach provides a control theory framework within which the operating range of a control law can be significantly extended. Furthermore, the approach avoids the major shortcomings of the conventional gain-scheduling techniques. The optimal variable gain output feedback control problem is solved by embedding the Multi-Configuration Control (MCC) problem, previously solved at ICS. An algorithm to compute the optimal variable gain output feedback control gain matrices is developed. The algorithm is a modified version of the MCC algorithm improved so as to handle the large dimensionality which arises particularly in variable-gain control problems. The design methodology developed is applied to a reconfigurable aircraft control problem. A variable-gain output feedback control problem was formulated to design a flight control law for an AFTI F-16 aircraft which can automatically reconfigure its control strategy to accommodate failures in the horizontal tail control surface. Simulations of the closed-loop reconfigurable system show that the approach produces a control design which can accommodate such failures with relative ease. The technique can be applied to many other problems including sensor failure accommodation, mode switching control laws and super agility.

  7. The relative efficiency of Iranian's rural traffic police: a three-stage DEA model.

    PubMed

    Rahimi, Habibollah; Soori, Hamid; Nazari, Seyed Saeed Hashemi; Motevalian, Seyed Abbas; Azar, Adel; Momeni, Eskandar; Javartani, Mehdi

    2017-10-13

    Road traffic Injuries (RTIs) as a health problem imposes governments to implement different interventions. Target achievement in this issue required effective and efficient measures. Efficiency evaluation of traffic police as one of the responsible administrators is necessary for resource management. Therefore, this study conducted to measure Iran's rural traffic police efficiency. This was an ecological study. To obtain pure efficiency score, three-stage DEA model was conducted with seven inputs and three output variables. At the first stage, crude efficiency score was measured with BCC-O model. Next, to extract the effects of socioeconomic, demographic, traffic count and road infrastructure as the environmental variables and statistical noise, the Stochastic Frontier Analysis (SFA) model was applied and the output values were modified according to similar environment and statistical noise conditions. Then, the pure efficiency score was measured using modified outputs and BCC-O model. In total, the efficiency score of 198 police stations from 24 provinces of 31 provinces were measured. The annual means (standard deviation) of damage, injury and fatal accidents were 247.7 (258.4), 184.9 (176.9), and 28.7 (19.5), respectively. Input averages were 5.9 (3.0) patrol teams, 0.5% (0.2) manpower proportions, 7.5 (2.9) patrol cars, 0.5 (1.3) motorcycles, 77,279.1 (46,794.7) penalties, 90.9 (2.8) cultural and educational activity score, 0.7 (2.4) speed cameras. The SFA model showed non-significant differences between police station performances and the most differences attributed to the environmental and random error. One-way main road, by road, traffic count and the number of household owning motorcycle had significant positive relations with inefficiency score. The length of freeway/highway and literacy rate variables had negative relations, significantly. Pure efficiency score was with mean of 0.95 and SD of 0.09. Iran's traffic police has potential opportunity to reduce RTIs. Adjusting police performance with environmental conditions is necessary. Capability of DEA method in setting quantitative targets for every station induces motivation for managers to reduce RTIs. Repetition of this study is recommended, annually.

  8. The NASA MSFC Earth Global Reference Atmospheric Model-2007 Version

    NASA Technical Reports Server (NTRS)

    Leslie, F.W.; Justus, C.G.

    2008-01-01

    Reference or standard atmospheric models have long been used for design and mission planning of various aerospace systems. The NASA/Marshall Space Flight Center (MSFC) Global Reference Atmospheric Model (GRAM) was developed in response to the need for a design reference atmosphere that provides complete global geographical variability, and complete altitude coverage (surface to orbital altitudes) as well as complete seasonal and monthly variability of the thermodynamic variables and wind components. A unique feature of GRAM is that, addition to providing the geographical, height, and monthly variation of the mean atmospheric state, it includes the ability to simulate spatial and temporal perturbations in these atmospheric parameters (e.g. fluctuations due to turbulence and other atmospheric perturbation phenomena). A summary comparing GRAM features to characteristics and features of other reference or standard atmospheric models, can be found Guide to Reference and Standard Atmosphere Models. The original GRAM has undergone a series of improvements over the years with recent additions and changes. The software program is called Earth-GRAM2007 to distinguish it from similar programs for other bodies (e.g. Mars, Venus, Neptune, and Titan). However, in order to make this Technical Memorandum (TM) more readable, the software will be referred to simply as GRAM07 or GRAM unless additional clarity is needed. Section 1 provides an overview of the basic features of GRAM07 including the newly added features. Section 2 provides a more detailed description of GRAM07 and how the model output generated. Section 3 presents sample results. Appendices A and B describe the Global Upper Air Climatic Atlas (GUACA) data and the Global Gridded Air Statistics (GGUAS) database. Appendix C provides instructions for compiling and running GRAM07. Appendix D gives a description of the required NAMELIST format input. Appendix E gives sample output. Appendix F provides a list of available parameters to enable the user to generate special output. Appendix G gives an example and guidance on incorporating GRAM07 as a subroutine in other programs such as trajectory codes or orbital propagation routines.

  9. Evaluate and Analysis Efficiency of Safaga Port Using DEA-CCR, BCC and SBM Models-Comparison with DP World Sokhna

    NASA Astrophysics Data System (ADS)

    Elsayed, Ayman; Shabaan Khalil, Nabil

    2017-10-01

    The competition among maritime ports is increasing continuously; the main purpose of Safaga port is to become the best option for companies to carry out their trading activities, particularly importing and exporting The main objective of this research is to evaluate and analyze factors that may significantly affect the levels of Safaga port efficiency in Egypt (particularly the infrastructural capacity). The assessment of such efficiency is a task that must play an important role in the management of Safaga port in order to improve the possibility of development and success in commercial activities. Drawing on Data Envelopment Analysis(DEA)models, this paper develops a manner of assessing the comparative efficiency of Safaga port in Egypt during the study period 2004-2013. Previous research for port efficiencies measurement usually using radial DEA models (DEA-CCR), (DEA-BCC), but not using non radial DEA model. The research applying radial - output oriented (DEA-CCR), (DEA-BCC) and non-radial (DEA-SBM) model with ten inputs and four outputs. The results were obtained from the analysis input and output variables based on DEA-CCR, DEA-BCC and SBM models, by software Max DEA Pro 6.3. DP World Sokhna port higher efficiency for all outputs were compared to Safaga port. DP World Sokhna position is below the southern entrance to the Suez Canal, on the Red Sea, Egypt, makes it strategically located to handle cargo transiting through one of the world's busiest commercial waterways.

  10. A new interpretation and validation of variance based importance measures for models with correlated inputs

    NASA Astrophysics Data System (ADS)

    Hao, Wenrui; Lu, Zhenzhou; Li, Luyi

    2013-05-01

    In order to explore the contributions by correlated input variables to the variance of the output, a novel interpretation framework of importance measure indices is proposed for a model with correlated inputs, which includes the indices of the total correlated contribution and the total uncorrelated contribution. The proposed indices accurately describe the connotations of the contributions by the correlated input to the variance of output, and they can be viewed as the complement and correction of the interpretation about the contributions by the correlated inputs presented in "Estimation of global sensitivity indices for models with dependent variables, Computer Physics Communications, 183 (2012) 937-946". Both of them contain the independent contribution by an individual input. Taking the general form of quadratic polynomial as an illustration, the total correlated contribution and the independent contribution by an individual input are derived analytically, from which the components and their origins of both contributions of correlated input can be clarified without any ambiguity. In the special case that no square term is included in the quadratic polynomial model, the total correlated contribution by the input can be further decomposed into the variance contribution related to the correlation of the input with other inputs and the independent contribution by the input itself, and the total uncorrelated contribution can be further decomposed into the independent part by interaction between the input and others and the independent part by the input itself. Numerical examples are employed and their results demonstrate that the derived analytical expressions of the variance-based importance measure are correct, and the clarification of the correlated input contribution to model output by the analytical derivation is very important for expanding the theory and solutions of uncorrelated input to those of the correlated one.

  11. Statistical downscaling of mean temperature, maximum temperature, and minimum temperature on the Loess Plateau, China

    NASA Astrophysics Data System (ADS)

    Jiang, L.

    2017-12-01

    Climate change is considered to be one of the greatest environmental threats. Global climate models (GCMs) are the primary tool used for studying climate change. However, GCMs are limited because of their coarse spatial resolution and inability to resolve important sub-grid scale features such as terrain and clouds. Statistical downscaling methods can be used to downscale large-scale variables to local-scale. In this study, we assess the applicability of the Statistical Downscaling Model (SDSM) in downscaling the outputs from Beijing Normal University Earth System Model (BNU-ESM). The study focus on the the Loess Plateau, China, and the variables for downscaling include daily mean temperature (TMEAN), maximum temperature (TMAX) and minimum temperature (TMIN). The results show that SDSM performs well for these three climatic variables on the Loess Plateau. After downscaling, the root mean square errors for TMEAN, TMAX, TMIN for BNU-ESM were reduced by 70.9%, 75.1%, and 67.2%, respectively. All the rates of change in TMEAN, TMAX and TMIN during the 21st century decreased after SDSM downscaling. We also show that SDSM can effectively reduce uncertainty, compared with the raw model outputs. TMEAN uncertainty was reduced by 27.1%, 26.8%, and 16.3% for the future scenarios of RCP 2.6, RCP 4.5 and RCP 8.5, respectively. The corresponding reductions in uncertainty were 23.6%, 30.7%, and 18.7% for TMAX; 37.6%, 31.8%, and 23.2% for TMIN.

  12. The reservoir model: a differential equation model of psychological regulation.

    PubMed

    Deboeck, Pascal R; Bergeman, C S

    2013-06-01

    Differential equation models can be used to describe the relationships between the current state of a system of constructs (e.g., stress) and how those constructs are changing (e.g., based on variable-like experiences). The following article describes a differential equation model based on the concept of a reservoir. With a physical reservoir, such as one for water, the level of the liquid in the reservoir at any time depends on the contributions to the reservoir (inputs) and the amount of liquid removed from the reservoir (outputs). This reservoir model might be useful for constructs such as stress, where events might "add up" over time (e.g., life stressors, inputs), but individuals simultaneously take action to "blow off steam" (e.g., engage coping resources, outputs). The reservoir model can provide descriptive statistics of the inputs that contribute to the "height" (level) of a construct and a parameter that describes a person's ability to dissipate the construct. After discussing the model, we describe a method of fitting the model as a structural equation model using latent differential equation modeling and latent distribution modeling. A simulation study is presented to examine recovery of the input distribution and output parameter. The model is then applied to the daily self-reports of negative affect and stress from a sample of older adults from the Notre Dame Longitudinal Study on Aging. (PsycINFO Database Record (c) 2013 APA, all rights reserved).

  13. The Reservoir Model: A Differential Equation Model of Psychological Regulation

    PubMed Central

    Deboeck, Pascal R.; Bergeman, C. S.

    2017-01-01

    Differential equation models can be used to describe the relationships between the current state of a system of constructs (e.g., stress) and how those constructs are changing (e.g., based on variable-like experiences). The following article describes a differential equation model based on the concept of a reservoir. With a physical reservoir, such as one for water, the level of the liquid in the reservoir at any time depends on the contributions to the reservoir (inputs) and the amount of liquid removed from the reservoir (outputs). This reservoir model might be useful for constructs such as stress, where events might “add up” over time (e.g., life stressors, inputs), but individuals simultaneously take action to “blow off steam” (e.g., engage coping resources, outputs). The reservoir model can provide descriptive statistics of the inputs that contribute to the “height” (level) of a construct and a parameter that describes a person's ability to dissipate the construct. After discussing the model, we describe a method of fitting the model as a structural equation model using latent differential equation modeling and latent distribution modeling. A simulation study is presented to examine recovery of the input distribution and output parameter. The model is then applied to the daily self-reports of negative affect and stress from a sample of older adults from the Notre Dame Longitudinal Study on Aging. PMID:23527605

  14. The equal combination synchronization of a class of chaotic systems with discontinuous output

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Luo, Runzi; Zeng, Yanhui

    This paper investigates the equal combination synchronization of a class of chaotic systems. The chaotic systems are assumed that only the output state variable is available and the output may be discontinuous state variable. By constructing proper observers, some novel criteria for the equal combination synchronization are proposed. The Lorenz chaotic system is taken as an example to demonstrate the efficiency of the proposed approach.

  15. "One-Stop Shopping" for Ocean Remote-Sensing and Model Data

    NASA Technical Reports Server (NTRS)

    Li, P. Peggy; Vu, Quoc; Chao, Yi; Li, Zhi-Jin; Choi, Jei-Kook

    2006-01-01

    OurOcean Portal 2.0 (http:// ourocean.jpl.nasa.gov) is a software system designed to enable users to easily gain access to ocean observation data, both remote-sensing and in-situ, configure and run an Ocean Model with observation data assimilated on a remote computer, and visualize both the observation data and the model outputs. At present, the observation data and models focus on the California coastal regions and Prince William Sound in Alaska. This system can be used to perform both real-time and retrospective analyses of remote-sensing data and model outputs. OurOcean Portal 2.0 incorporates state-of-the-art information technologies (IT) such as MySQL database, Java Web Server (Apache/Tomcat), Live Access Server (LAS), interactive graphics with Java Applet at the Client site and MatLab/GMT at the server site, and distributed computing. OurOcean currently serves over 20 real-time or historical ocean data products. The data are served in pre-generated plots or their native data format. For some of the datasets, users can choose different plotting parameters and produce customized graphics. OurOcean also serves 3D Ocean Model outputs generated by ROMS (Regional Ocean Model System) using LAS. The Live Access Server (LAS) software, developed by the Pacific Marine Environmental Laboratory (PMEL) of the National Oceanic and Atmospheric Administration (NOAA), is a configurable Web-server program designed to provide flexible access to geo-referenced scientific data. The model output can be views as plots in horizontal slices, depth profiles or time sequences, or can be downloaded as raw data in different data formats, such as NetCDF, ASCII, Binary, etc. The interactive visualization is provided by graphic software, Ferret, also developed by PMEL. In addition, OurOcean allows users with minimal computing resources to configure and run an Ocean Model with data assimilation on a remote computer. Users may select the forcing input, the data to be assimilated, the simulation period, and the output variables and submit the model to run on a backend parallel computer. When the run is complete, the output will be added to the LAS server for

  16. A Global Lake Ecological Observatory Network (GLEON) for synthesising high-frequency sensor data for validation of deterministic ecological models

    USGS Publications Warehouse

    David, Hamilton P; Carey, Cayelan C.; Arvola, Lauri; Arzberger, Peter; Brewer, Carol A.; Cole, Jon J; Gaiser, Evelyn; Hanson, Paul C.; Ibelings, Bas W; Jennings, Eleanor; Kratz, Tim K; Lin, Fang-Pang; McBride, Christopher G.; de Motta Marques, David; Muraoka, Kohji; Nishri, Ami; Qin, Boqiang; Read, Jordan S.; Rose, Kevin C.; Ryder, Elizabeth; Weathers, Kathleen C.; Zhu, Guangwei; Trolle, Dennis; Brookes, Justin D

    2014-01-01

    A Global Lake Ecological Observatory Network (GLEON; www.gleon.org) has formed to provide a coordinated response to the need for scientific understanding of lake processes, utilising technological advances available from autonomous sensors. The organisation embraces a grassroots approach to engage researchers from varying disciplines, sites spanning geographic and ecological gradients, and novel sensor and cyberinfrastructure to synthesise high-frequency lake data at scales ranging from local to global. The high-frequency data provide a platform to rigorously validate process- based ecological models because model simulation time steps are better aligned with sensor measurements than with lower-frequency, manual samples. Two case studies from Trout Bog, Wisconsin, USA, and Lake Rotoehu, North Island, New Zealand, are presented to demonstrate that in the past, ecological model outputs (e.g., temperature, chlorophyll) have been relatively poorly validated based on a limited number of directly comparable measurements, both in time and space. The case studies demonstrate some of the difficulties of mapping sensor measurements directly to model state variable outputs as well as the opportunities to use deviations between sensor measurements and model simulations to better inform process understanding. Well-validated ecological models provide a mechanism to extrapolate high-frequency sensor data in space and time, thereby potentially creating a fully 3-dimensional simulation of key variables of interest.

  17. The Impact of Parametric Uncertainties on Biogeochemistry in the E3SM Land Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ricciuto, Daniel; Sargsyan, Khachik; Thornton, Peter

    We conduct a global sensitivity analysis (GSA) of the Energy Exascale Earth System Model (E3SM), land model (ELM) to calculate the sensitivity of five key carbon cycle outputs to 68 model parameters. This GSA is conducted by first constructing a Polynomial Chaos (PC) surrogate via new Weighted Iterative Bayesian Compressive Sensing (WIBCS) algorithm for adaptive basis growth leading to a sparse, high-dimensional PC surrogate with 3,000 model evaluations. The PC surrogate allows efficient extraction of GSA information leading to further dimensionality reduction. The GSA is performed at 96 FLUXNET sites covering multiple plant functional types (PFTs) and climate conditions. Aboutmore » 20 of the model parameters are identified as sensitive with the rest being relatively insensitive across all outputs and PFTs. These sensitivities are dependent on PFT, and are relatively consistent among sites within the same PFT. The five model outputs have a majority of their highly sensitive parameters in common. A common subset of sensitive parameters is also shared among PFTs, but some parameters are specific to certain types (e.g., deciduous phenology). In conclusion, the relative importance of these parameters shifts significantly among PFTs and with climatic variables such as mean annual temperature.« less

  18. The Impact of Parametric Uncertainties on Biogeochemistry in the E3SM Land Model

    DOE PAGES

    Ricciuto, Daniel; Sargsyan, Khachik; Thornton, Peter

    2018-02-27

    We conduct a global sensitivity analysis (GSA) of the Energy Exascale Earth System Model (E3SM), land model (ELM) to calculate the sensitivity of five key carbon cycle outputs to 68 model parameters. This GSA is conducted by first constructing a Polynomial Chaos (PC) surrogate via new Weighted Iterative Bayesian Compressive Sensing (WIBCS) algorithm for adaptive basis growth leading to a sparse, high-dimensional PC surrogate with 3,000 model evaluations. The PC surrogate allows efficient extraction of GSA information leading to further dimensionality reduction. The GSA is performed at 96 FLUXNET sites covering multiple plant functional types (PFTs) and climate conditions. Aboutmore » 20 of the model parameters are identified as sensitive with the rest being relatively insensitive across all outputs and PFTs. These sensitivities are dependent on PFT, and are relatively consistent among sites within the same PFT. The five model outputs have a majority of their highly sensitive parameters in common. A common subset of sensitive parameters is also shared among PFTs, but some parameters are specific to certain types (e.g., deciduous phenology). In conclusion, the relative importance of these parameters shifts significantly among PFTs and with climatic variables such as mean annual temperature.« less

  19. Development and Validation of a Weather-Based Model for Predicting Infection of Loquat Fruit by Fusicladium eriobotryae

    PubMed Central

    González-Domínguez, Elisa; Armengol, Josep; Rossi, Vittorio

    2014-01-01

    A mechanistic, dynamic model was developed to predict infection of loquat fruit by conidia of Fusicladium eriobotryae, the causal agent of loquat scab. The model simulates scab infection periods and their severity through the sub-processes of spore dispersal, infection, and latency (i.e., the state variables); change from one state to the following one depends on environmental conditions and on processes described by mathematical equations. Equations were developed using published data on F. eriobotryae mycelium growth, conidial germination, infection, and conidial dispersion pattern. The model was then validated by comparing model output with three independent data sets. The model accurately predicts the occurrence and severity of infection periods as well as the progress of loquat scab incidence on fruit (with concordance correlation coefficients >0.95). Model output agreed with expert assessment of the disease severity in seven loquat-growing seasons. Use of the model for scheduling fungicide applications in loquat orchards may help optimise scab management and reduce fungicide applications. PMID:25233340

  20. Quantifying uncertainty in high-resolution coupled hydrodynamic-ecosystem models

    NASA Astrophysics Data System (ADS)

    Allen, J. I.; Somerfield, P. J.; Gilbert, F. J.

    2007-01-01

    Marine ecosystem models are becoming increasingly complex and sophisticated, and are being used to estimate the effects of future changes in the earth system with a view to informing important policy decisions. Despite their potential importance, far too little attention has been, and is generally, paid to model errors and the extent to which model outputs actually relate to real-world processes. With the increasing complexity of the models themselves comes an increasing complexity among model results. If we are to develop useful modelling tools for the marine environment we need to be able to understand and quantify the uncertainties inherent in the simulations. Analysing errors within highly multivariate model outputs, and relating them to even more complex and multivariate observational data, are not trivial tasks. Here we describe the application of a series of techniques, including a 2-stage self-organising map (SOM), non-parametric multivariate analysis, and error statistics, to a complex spatio-temporal model run for the period 1988-1989 in the Southern North Sea, coinciding with the North Sea Project which collected a wealth of observational data. We use model output, large spatio-temporally resolved data sets and a combination of methodologies (SOM, MDS, uncertainty metrics) to simplify the problem and to provide tractable information on model performance. The use of a SOM as a clustering tool allows us to simplify the dimensions of the problem while the use of MDS on independent data grouped according to the SOM classification allows us to validate the SOM. The combination of classification and uncertainty metrics allows us to pinpoint the variables and associated processes which require attention in each region. We recommend the use of this combination of techniques for simplifying complex comparisons of model outputs with real data, and analysis of error distributions.

  1. The Extrapolar SWIFT model (version 1.0): fast stratospheric ozone chemistry for global climate models

    NASA Astrophysics Data System (ADS)

    Kreyling, Daniel; Wohltmann, Ingo; Lehmann, Ralph; Rex, Markus

    2018-03-01

    The Extrapolar SWIFT model is a fast ozone chemistry scheme for interactive calculation of the extrapolar stratospheric ozone layer in coupled general circulation models (GCMs). In contrast to the widely used prescribed ozone, the SWIFT ozone layer interacts with the model dynamics and can respond to atmospheric variability or climatological trends.The Extrapolar SWIFT model employs a repro-modelling approach, in which algebraic functions are used to approximate the numerical output of a full stratospheric chemistry and transport model (ATLAS). The full model solves a coupled chemical differential equation system with 55 initial and boundary conditions (mixing ratio of various chemical species and atmospheric parameters). Hence the rate of change of ozone over 24 h is a function of 55 variables. Using covariances between these variables, we can find linear combinations in order to reduce the parameter space to the following nine basic variables: latitude, pressure altitude, temperature, overhead ozone column and the mixing ratio of ozone and of the ozone-depleting families (Cly, Bry, NOy and HOy). We will show that these nine variables are sufficient to characterize the rate of change of ozone. An automated procedure fits a polynomial function of fourth degree to the rate of change of ozone obtained from several simulations with the ATLAS model. One polynomial function is determined per month, which yields the rate of change of ozone over 24 h. A key aspect for the robustness of the Extrapolar SWIFT model is to include a wide range of stratospheric variability in the numerical output of the ATLAS model, also covering atmospheric states that will occur in a future climate (e.g. temperature and meridional circulation changes or reduction of stratospheric chlorine loading).For validation purposes, the Extrapolar SWIFT model has been integrated into the ATLAS model, replacing the full stratospheric chemistry scheme. Simulations with SWIFT in ATLAS have proven that the systematic error is small and does not accumulate during the course of a simulation. In the context of a 10-year simulation, the ozone layer simulated by SWIFT shows a stable annual cycle, with inter-annual variations comparable to the ATLAS model. The application of Extrapolar SWIFT requires the evaluation of polynomial functions with 30-100 terms. Computers can currently calculate such polynomial functions at thousands of model grid points in seconds. SWIFT provides the desired numerical efficiency and computes the ozone layer 104 times faster than the chemistry scheme in the ATLAS CTM.

  2. A MacCormack-TVD finite difference method to simulate the mass flow in mountainous terrain with variable computational domain

    NASA Astrophysics Data System (ADS)

    Ouyang, Chaojun; He, Siming; Xu, Qiang; Luo, Yu; Zhang, Wencheng

    2013-03-01

    A two-dimensional mountainous mass flow dynamic procedure solver (Massflow-2D) using the MacCormack-TVD finite difference scheme is proposed. The solver is implemented in Matlab on structured meshes with variable computational domain. To verify the model, a variety of numerical test scenarios, namely, the classical one-dimensional and two-dimensional dam break, the landslide in Hong Kong in 1993 and the Nora debris flow in the Italian Alps in 2000, are executed, and the model outputs are compared with published results. It is established that the model predictions agree well with both the analytical solution as well as the field observations.

  3. Adaptive Control via Neural Output Feedback for a Class of Nonlinear Discrete-Time Systems in a Nested Interconnected Form.

    PubMed

    Li, Dong-Juan; Li, Da-Peng

    2017-09-14

    In this paper, an adaptive output feedback control is framed for uncertain nonlinear discrete-time systems. The considered systems are a class of multi-input multioutput nonaffine nonlinear systems, and they are in the nested lower triangular form. Furthermore, the unknown dead-zone inputs are nonlinearly embedded into the systems. These properties of the systems will make it very difficult and challenging to construct a stable controller. By introducing a new diffeomorphism coordinate transformation, the controlled system is first transformed into a state-output model. By introducing a group of new variables, an input-output model is finally obtained. Based on the transformed model, the implicit function theorem is used to determine the existence of the ideal controllers and the approximators are employed to approximate the ideal controllers. By using the mean value theorem, the nonaffine functions of systems can become an affine structure but nonaffine terms still exist. The adaptation auxiliary terms are skillfully designed to cancel the effect of the dead-zone input. Based on the Lyapunov difference theorem, the boundedness of all the signals in the closed-loop system can be ensured and the tracking errors are kept in a bounded compact set. The effectiveness of the proposed technique is checked by a simulation study.

  4. Improving short-term forecasting during ramp events by means of Regime-Switching Artificial Neural Networks

    NASA Astrophysics Data System (ADS)

    Gallego, C.; Costa, A.; Cuerva, A.

    2010-09-01

    Since nowadays wind energy can't be neither scheduled nor large-scale storaged, wind power forecasting has been useful to minimize the impact of wind fluctuations. In particular, short-term forecasting (characterised by prediction horizons from minutes to a few days) is currently required by energy producers (in a daily electricity market context) and the TSO's (in order to keep the stability/balance of an electrical system). Within the short-term background, time-series based models (i.e., statistical models) have shown a better performance than NWP models for horizons up to few hours. These models try to learn and replicate the dynamic shown by the time series of a certain variable. When considering the power output of wind farms, ramp events are usually observed, being characterized by a large positive gradient in the time series (ramp-up) or negative (ramp-down) during relatively short time periods (few hours). Ramp events may be motivated by many different causes, involving generally several spatial scales, since the large scale (fronts, low pressure systems) up to the local scale (wind turbine shut-down due to high wind speed, yaw misalignment due to fast changes of wind direction). Hence, the output power may show unexpected dynamics during ramp events depending on the underlying processes; consequently, traditional statistical models considering only one dynamic for the hole power time series may be inappropriate. This work proposes a Regime Switching (RS) model based on Artificial Neural Nets (ANN). The RS-ANN model gathers as many ANN's as different dynamics considered (called regimes); a certain ANN is selected so as to predict the output power, depending on the current regime. The current regime is on-line updated based on a gradient criteria, regarding the past two values of the output power. 3 Regimes are established, concerning ramp events: ramp-up, ramp-down and no-ramp regime. In order to assess the skillness of the proposed RS-ANN model, a single-ANN model (without regime classification) is adopted as a reference model. Both models are evaluated in terms of Improvement over Persistence on the Mean Square Error basis (IoP%) when predicting horizons form 1 time-step to 5. The case of a wind farm located in the complex terrain of Alaiz (north of Spain) has been considered. Three years of available power output data with a hourly resolution have been employed: two years for training and validation of the model and the last year for assessing the accuracy. Results showed that the RS-ANN overcame the single-ANN model for one step-ahead forecasts: the overall IoP% was up to 8.66% for the RS-ANN model (depending on the gradient criterion selected to consider the ramp regime triggered) and 6.16% for the single-ANN. However, both models showed similar accuracy for larger horizons. A locally-weighted evaluation during ramp events for one-step ahead was also performed. It was found that the IoP% during ramps-up increased from 17.60% (case of single-ANN) to 22.25% (case of RS-ANN); however, during the ramps-down events this improvement increased from 18.55% to 19.55%. Three main conclusions are derived from this case study: It highlights the importance of considering statistical models capable of differentiate several regimes showed by the output power time series in order to improve the forecasting during extreme events like ramps. On-line regime classification based on available power output data didn't seem to contribute to improve forecasts for horizons beyond one-step ahead. 
Tacking into account other explanatory variables (local wind measurements, NWP outputs) could lead to a better understanding of ramp events, improving the regime assessment also for further horizons. The RS-ANN model slightly overcame the single-ANN during ramp-down events. If further research reinforce this effect, special attention should be addressed to understand the underlying processes during ramp-down events.

  5. Numerical modeling of rapidly varying flows using HEC-RAS and WSPG models.

    PubMed

    Rao, Prasada; Hromadka, Theodore V

    2016-01-01

    The performance of two popular hydraulic models (HEC-RAS and WSPG) for modeling hydraulic jump in an open channel is investigated. The numerical solutions are compared with a new experimental data set obtained for varying channel bottom slopes and flow rates. Both the models satisfactorily predict the flow depths and location of the jump. The end results indicate that the numerical models output is sensitive to the value of chosen roughness coefficient. For this application, WSPG model is easier to implement with few input variables.

  6. Prediction of Layer Thickness in Molten Borax Bath with Genetic Evolutionary Programming

    NASA Astrophysics Data System (ADS)

    Taylan, Fatih

    2011-04-01

    In this study, the vanadium carbide coating in molten borax bath process is modeled by evolutionary genetic programming (GEP) with bath composition (borax percentage, ferro vanadium (Fe-V) percentage, boric acid percentage), bath temperature, immersion time, and layer thickness data. Five inputs and one output data exist in the model. The percentage of borax, Fe-V, and boric acid, temperature, and immersion time parameters are used as input data and the layer thickness value is used as output data. For selected bath components, immersion time, and temperature variables, the layer thicknesses are derived from the mathematical expression. The results of the mathematical expressions are compared to that of experimental data; it is determined that the derived mathematical expression has an accuracy of 89%.

  7. Camera traps can be heard and seen by animals.

    PubMed

    Meek, Paul D; Ballard, Guy-Anthony; Fleming, Peter J S; Schaefer, Michael; Williams, Warwick; Falzon, Greg

    2014-01-01

    Camera traps are electrical instruments that emit sounds and light. In recent decades they have become a tool of choice in wildlife research and monitoring. The variability between camera trap models and the methods used are considerable, and little is known about how animals respond to camera trap emissions. It has been reported that some animals show a response to camera traps, and in research this is often undesirable so it is important to understand why the animals are disturbed. We conducted laboratory based investigations to test the audio and infrared optical outputs of 12 camera trap models. Camera traps were measured for audio outputs in an anechoic chamber; we also measured ultrasonic (n = 5) and infrared illumination outputs (n = 7) of a subset of the camera trap models. We then compared the perceptive hearing range (n = 21) and assessed the vision ranges (n = 3) of mammals species (where data existed) to determine if animals can see and hear camera traps. We report that camera traps produce sounds that are well within the perceptive range of most mammals' hearing and produce illumination that can be seen by many species.

  8. Modeling a Common-Source Amplifier Using a Ferroelectric Transistor

    NASA Technical Reports Server (NTRS)

    Sayyah, Rana; Hunt, Mitchell; MacLeond, Todd C.; Ho, Fat D.

    2010-01-01

    This paper presents a mathematical model characterizing the behavior of a common-source amplifier using a FeFET. The model is based on empirical data and incorporates several variables that affect the output, including frequency, load resistance, and gate-to-source voltage. Since the common-source amplifier is the most widely used amplifier in MOS technology, understanding and modeling the behavior of the FeFET-based common-source amplifier will help in the integration of FeFETs into many circuits.

  9. Theoretical modeling and experimental analysis of solar still integrated with evacuated tubes

    NASA Astrophysics Data System (ADS)

    Panchal, Hitesh; Awasthi, Anuradha

    2017-06-01

    In this present research work, theoretical modeling of single slope, single basin solar still integrated with evacuated tubes has been performed based on energy balance equations. Major variables like water temperature, inner glass cover temperature and distillate output has been computed based on theoretical modeling. The experimental setup has been made from locally available materials and installed at Gujarat Power Engineering and Research Institute, Mehsana, Gujarat, India (23.5880°N, 72.3693°E) with 0.04 m depth during 6 months of time interval. From the series of experiments, it is found considerable increment in average distillate output of a solar still when integrated with evacuated tubes not only during daytime but also from night time. In all experimental cases, the correlation of coefficient (r) and root mean square percentage deviation of theoretical modeling and experimental study found good agreement with 0.97 < r < 0.98 and 10.22 < e < 38.4% respectively.

  10. A quantum causal discovery algorithm

    NASA Astrophysics Data System (ADS)

    Giarmatzi, Christina; Costa, Fabio

    2018-03-01

    Finding a causal model for a set of classical variables is now a well-established task—but what about the quantum equivalent? Even the notion of a quantum causal model is controversial. Here, we present a causal discovery algorithm for quantum systems. The input to the algorithm is a process matrix describing correlations between quantum events. Its output consists of different levels of information about the underlying causal model. Our algorithm determines whether the process is causally ordered by grouping the events into causally ordered non-signaling sets. It detects if all relevant common causes are included in the process, which we label Markovian, or alternatively if some causal relations are mediated through some external memory. For a Markovian process, it outputs a causal model, namely the causal relations and the corresponding mechanisms, represented as quantum states and channels. Our algorithm opens the route to more general quantum causal discovery methods.

  11. Analysis performance of proton exchange membrane fuel cell (PEMFC)

    NASA Astrophysics Data System (ADS)

    Mubin, A. N. A.; Bahrom, M. H.; Azri, M.; Ibrahim, Z.; Rahim, N. A.; Raihan, S. R. S.

    2017-06-01

    Recently, the proton exchange membrane fuel cell (PEMFC) has gained much attention to the technology of renewable energy due to its mechanically ideal and zero emission power source. PEMFC performance reflects from the surroundings such as temperature and pressure. This paper presents an analysis of the performance of the PEMFC by developing the mathematical thermodynamic modelling using Matlab/Simulink. Apart from that, the differential equation of the thermodynamic model of the PEMFC is used to explain the contribution of heat to the performance of the output voltage of the PEMFC. On the other hand, the partial pressure equation of the hydrogen is included in the PEMFC mathematical modeling to study the PEMFC voltage behaviour related to the input variable input hydrogen pressure. The efficiency of the model is 33.8% which calculated by applying the energy conversion device equations on the thermal efficiency. PEMFC’s voltage output performance is increased by increasing the hydrogen input pressure and temperature.

  12. Climate variability, rice production and groundwater depletion in India

    NASA Astrophysics Data System (ADS)

    Bhargava, Alok

    2018-03-01

    This paper modeled the proximate determinants of rice outputs and groundwater depths in 27 Indian states during 1980-2010. Dynamic random effects models were estimated by maximum likelihood at state and well levels. The main findings from models for rice outputs were that temperatures and rainfall levels were significant predictors, and the relationships were quadratic with respect to rainfall. Moreover, nonlinearities with respect to population changes indicated greater rice production with population increases. Second, groundwater depths were positively associated with temperatures and negatively with rainfall levels and there were nonlinear effects of population changes. Third, dynamic models for in situ groundwater depths in 11 795 wells in mainly unconfined aquifers, accounting for latitudes, longitudes and altitudes, showed steady depletion. Overall, the results indicated that population pressures on food production and environment need to be tackled via long-term healthcare, agricultural, and groundwater recharge policies in India.

  13. Neural network models - a novel tool for predicting the efficacy of growth hormone (GH) therapy in children with short stature.

    PubMed

    Smyczynska, Joanna; Hilczer, Maciej; Smyczynska, Urszula; Stawerska, Renata; Tadeusiewicz, Ryszard; Lewinski, Andrzej

    2015-01-01

    The leading method for prediction of growth hormone (GH) therapy effectiveness are multiple linear regression (MLR) models. Best of our knowledge, we are the first to apply artificial neural networks (ANN) to solve this problem. For ANN there is no necessity to assume the functions linking independent and dependent variables. The aim of study is to compare ANN and MLR models of GH therapy effectiveness. Analysis comprised the data of 245 GH-deficient children (170 boys) treated with GH up to final height (FH). Independent variables included: patients' height, pre-treatment height velocity, chronological age, bone age, gender, pubertal status, parental heights, GH peak in 2 stimulation tests, IGF-I concentration. The output variable was FH. For testing dataset, MLR model predicted FH SDS with average error (RMSE) 0.64 SD, explaining 34.3% of its variability; ANN model derived on the same pre-processed data predicted FH SDS with RMSE 0.60 SD, explaining 42.0% of its variability; ANN model derived on raw data predicted FH with RMSE 3.9 cm (0.63 SD), explaining 78.7% of its variability. ANN seem to be valuable tool in prediction of GH treatment effectiveness, especially since they can be applied to raw clinical data.

  14. Economy of scale: a motion sensor with variable speed tuning.

    PubMed

    Perrone, John A

    2005-01-26

    We have previously presented a model of how neurons in the primate middle temporal (MT/V5) area can develop selectivity for image speed by using common properties of the V1 neurons that precede them in the visual motion pathway (J. A. Perrone & A. Thiele, 2002). The motion sensor developed in this model is based on two broad classes of V1 complex neurons (sustained and transient). The S-type neuron has low-pass temporal frequency tuning, p(omega), and the T-type has band-pass temporal frequency tuning, m(omega). The outputs from the S and T neurons are combined in a special way (weighted intersection mechanism [WIM]) to generate a sensor tuned to a particular speed, v. Here I go on to show that if the S and T temporal frequency tuning functions have a particular form (i.e., p(omega)/(m(omega) = k/omega), then a motion sensor with variable speed tuning can be generated from just two V1 neurons. A simple scaling of the S- or T-type neuron output before it is incorporated into the WIM model produces a motion sensor that can be tuned to a wide continuous range of optimal speeds.

  15. The use of a high resolution model in a private environment.

    NASA Astrophysics Data System (ADS)

    van Dijke, D.; Malda, D.

    2009-09-01

    The commercial organisation MeteoGroup uses high resolution modelling for multiple purposes. MeteoGroup uses the Weather Research and Forecasting Model (WRF®1). WRF is used in the operational environment of several MeteoGroup companies across Europe. It is also used in hindcast studies, for example hurricane tracking, wind climate computation and deriving boundary conditions for air quality models. A special operational service was set up for our tornado chasing team that uses high resolution flexible WRF data to chase for super cells and tornados in the USA during spring. Much effort is put into the development and improvement of the pre- and post-processing of the model. At MeteoGroup the static land-use data has been extended and adjusted to improve temperature and wind forecasts. The system has been modified such that sigma level input data from the global ECMWF model can be used for initialisation. By default only pressure level data could be used. During the spin-up of the model synoptical observations are nudged. A program to adjust possible initialisation errors of several surface parameters in coastal areas has been implemented. We developed an algorithm that computes cloud fractions using multiple direct model output variables. Forecasters prefer to use weather codes for their daily forecasts to detect severe weather. For this usage we developed model weather codes using a variety of direct model output and our own derived variables. 1 WRF® is a registered trademark of the University Corporation for Atmospheric Research (UCAR)

  16. Multiple-output support vector machine regression with feature selection for arousal/valence space emotion assessment.

    PubMed

    Torres-Valencia, Cristian A; Álvarez, Mauricio A; Orozco-Gutiérrez, Alvaro A

    2014-01-01

    Human emotion recognition (HER) allows the assessment of an affective state of a subject. Until recently, such emotional states were described in terms of discrete emotions, like happiness or contempt. In order to cover a high range of emotions, researchers in the field have introduced different dimensional spaces for emotion description that allow the characterization of affective states in terms of several variables or dimensions that measure distinct aspects of the emotion. One of the most common of such dimensional spaces is the bidimensional Arousal/Valence space. To the best of our knowledge, all HER systems so far have modelled independently, the dimensions in these dimensional spaces. In this paper, we study the effect of modelling the output dimensions simultaneously and show experimentally the advantages in modeling them in this way. We consider a multimodal approach by including features from the Electroencephalogram and a few physiological signals. For modelling the multiple outputs, we employ a multiple output regressor based on support vector machines. We also include an stage of feature selection that is developed within an embedded approach known as Recursive Feature Elimination (RFE), proposed initially for SVM. The results show that several features can be eliminated using the multiple output support vector regressor with RFE without affecting the performance of the regressor. From the analysis of the features selected in smaller subsets via RFE, it can be observed that the signals that are more informative into the arousal and valence space discrimination are the EEG, Electrooculogram/Electromiogram (EOG/EMG) and the Galvanic Skin Response (GSR).

  17. Real-time quality monitoring in debutanizer column with regression tree and ANFIS

    NASA Astrophysics Data System (ADS)

    Siddharth, Kumar; Pathak, Amey; Pani, Ajaya Kumar

    2018-05-01

    A debutanizer column is an integral part of any petroleum refinery. Online composition monitoring of debutanizer column outlet streams is highly desirable in order to maximize the production of liquefied petroleum gas. In this article, data-driven models for debutanizer column are developed for real-time composition monitoring. The dataset used has seven process variables as inputs and the output is the butane concentration in the debutanizer column bottom product. The input-output dataset is divided equally into a training (calibration) set and a validation (testing) set. The training set data were used to develop fuzzy inference, adaptive neuro fuzzy (ANFIS) and regression tree models for the debutanizer column. The accuracy of the developed models were evaluated by simulation of the models with the validation dataset. It is observed that the ANFIS model has better estimation accuracy than other models developed in this work and many data-driven models proposed so far in the literature for the debutanizer column.

  18. A Predictor Analysis Framework for Surface Radiation Budget Reprocessing Using Design of Experiments

    NASA Astrophysics Data System (ADS)

    Quigley, Patricia Allison

    Earth's Radiation Budget (ERB) is an accounting of all incoming energy from the sun and outgoing energy reflected and radiated to space by earth's surface and atmosphere. The National Aeronautics and Space Administration (NASA)/Global Energy and Water Cycle Experiment (GEWEX) Surface Radiation Budget (SRB) project produces and archives long-term datasets representative of this energy exchange system on a global scale. The data are comprised of the longwave and shortwave radiative components of the system and is algorithmically derived from satellite and atmospheric assimilation products, and acquired atmospheric data. It is stored as 3-hourly, daily, monthly/3-hourly, and monthly averages of 1° x 1° grid cells. Input parameters used by the algorithms are a key source of variability in the resulting output data sets. Sensitivity studies have been conducted to estimate the effects this variability has on the output data sets using linear techniques. This entails varying one input parameter at a time while keeping all others constant or by increasing all input parameters by equal random percentages, in effect changing input values for every cell for every three hour period and for every day in each month. This equates to almost 11 million independent changes without ever taking into consideration the interactions or dependencies among the input parameters. A more comprehensive method is proposed here for the evaluating the shortwave algorithm to identify both the input parameters and parameter interactions that most significantly affect the output data. This research utilized designed experiments that systematically and simultaneously varied all of the input parameters of the shortwave algorithm. A D-Optimal design of experiments (DOE) was chosen to accommodate the 14 types of atmospheric properties computed by the algorithm and to reduce the number of trials required by a full factorial study from millions to 128. A modified version of the algorithm was made available for testing such that global calculations of the algorithm were tuned to accept information for a single temporal and spatial point and for one month of averaged data. The points were from each of four atmospherically distinct regions to include the Amazon Rainforest, Sahara Desert, Indian Ocean and Mt. Everest. The same design was used for all of the regions. Least squares multiple regression analysis of the results of the modified algorithm identified those parameters and parameter interactions that most significantly affected the output products. It was found that Cosine solar zenith angle was the strongest influence on the output data in all four regions. The interaction of Cosine Solar Zenith Angle and Cloud Fraction had the strongest influence on the output data in the Amazon, Sahara Desert and Mt. Everest Regions, while the interaction of Cloud Fraction and Cloudy Shortwave Radiance most significantly affected output data in the Indian Ocean region. Second order response models were built using the resulting regression coefficients. A Monte Carlo simulation of each model extended the probability distribution beyond the initial design trials to quantify variability in the modeled output data.

  19. Advances in Geoscience Modeling: Smart Modeling Frameworks, Self-Describing Models and the Role of Standardized Metadata

    NASA Astrophysics Data System (ADS)

    Peckham, Scott

    2016-04-01

    Over the last decade, model coupling frameworks like CSDMS (Community Surface Dynamics Modeling System) and ESMF (Earth System Modeling Framework) have developed mechanisms that make it much easier for modelers to connect heterogeneous sets of process models in a plug-and-play manner to create composite "system models". These mechanisms greatly simplify code reuse, but must simultaneously satisfy many different design criteria. They must be able to mediate or compensate for differences between the process models, such as their different programming languages, computational grids, time-stepping schemes, variable names and variable units. However, they must achieve this interoperability in a way that: (1) is noninvasive, requiring only relatively small and isolated changes to the original source code, (2) does not significantly reduce performance, (3) is not time-consuming or confusing for a model developer to implement, (4) can very easily be updated to accommodate new versions of a given process model and (5) does not shift the burden of providing model interoperability to the model developers. In tackling these design challenges, model framework developers have learned that the best solution is to provide each model with a simple, standardized interface, i.e. a set of standardized functions that make the model: (1) fully-controllable by a caller (e.g. a model framework) and (2) self-describing with standardized metadata. Model control functions are separate functions that allow a caller to initialize the model, advance the model's state variables in time and finalize the model. Model description functions allow a caller to retrieve detailed information on the model's input and output variables, its computational grid and its timestepping scheme. If the caller is a modeling framework, it can use the self description functions to learn about each process model in a collection to be coupled and then automatically call framework service components (e.g. regridders, time interpolators and unit converters) as necessary to mediate the differences between them so they can work together. This talk will first review two key products of the CSDMS project, namely a standardized model interface called the Basic Model Interface (BMI) and the CSDMS Standard Names. The standard names are used in conjunction with BMI to provide a semantic matching mechanism that allows output variables from one process model or data set to be reliably used as input variables to other process models in a collection. They include not just a standardized naming scheme for model variables, but also a standardized set of terms for describing the attributes and assumptions of a given model. Recent efforts to bring powerful uncertainty analysis and inverse modeling toolkits such as DAKOTA into modeling frameworks will also be described. This talk will conclude with an overview of several related modeling projects that have been funded by NSF's EarthCube initiative, namely the Earth System Bridge, OntoSoft and GeoSemantics projects.

  20. Climate Model Diagnostic Analyzer Web Service System

    NASA Astrophysics Data System (ADS)

    Lee, S.; Pan, L.; Zhai, C.; Tang, B.; Jiang, J. H.

    2013-12-01

    The latest Intergovernmental Panel on Climate Change (IPCC) Fourth Assessment Report stressed the need for the comprehensive and innovative evaluation of climate models with newly available global observations. The traditional approach to climate model evaluation, which compares a single parameter at a time, identifies symptomatic model biases and errors but fails to diagnose the model problems. The model diagnosis process requires physics-based multi-variable comparisons that typically involve large-volume and heterogeneous datasets, making them both computationally- and data-intensive. To address these challenges, we are developing a parallel, distributed web-service system that enables the physics-based multi-variable model performance evaluations and diagnoses through the comprehensive and synergistic use of multiple observational data, reanalysis data, and model outputs. We have developed a methodology to transform an existing science application code into a web service using a Python wrapper interface and Python web service frameworks (i.e., Flask, Gunicorn, and Tornado). The web-service system, called Climate Model Diagnostic Analyzer (CMDA), currently supports (1) all the datasets from Obs4MIPs and a few ocean datasets from NOAA and Argo, which can serve as observation-based reference data for model evaluation and (2) many of CMIP5 model outputs covering a broad range of atmosphere, ocean, and land variables from the CMIP5 specific historical runs and AMIP runs. Analysis capabilities currently supported by CMDA are (1) the calculation of annual and seasonal means of physical variables, (2) the calculation of time evolution of the means in any specified geographical region, (3) the calculation of correlation between two variables, and (4) the calculation of difference between two variables. A web user interface is chosen for CMDA because it not only lowers the learning curve and removes the adoption barrier of the tool but also enables instantaneous use, avoiding the hassle of local software installation and environment incompatibility. CMDA is planned to be used as an educational tool for the summer school organized by JPL's Center for Climate Science in 2014. The requirements of the educational tool are defined with the interaction with the school organizers, and CMDA is customized to meet the requirements accordingly. The tool needs to be production quality for 30+ simultaneous users. The summer school will thus serve as a valuable testbed for the tool development, preparing CMDA to serve the Earth-science modeling and model-analysis community at the end of the project. This work was funded by the NASA Earth Science Program called Computational Modeling Algorithms and Cyberinfrastructure (CMAC).

  1. Data-driven Modeling of Metal-oxide Sensors with Dynamic Bayesian Networks

    NASA Astrophysics Data System (ADS)

    Gosangi, Rakesh; Gutierrez-Osuna, Ricardo

    2011-09-01

    We present a data-driven probabilistic framework to model the transient response of MOX sensors modulated with a sequence of voltage steps. Analytical models of MOX sensors are usually built based on the physico-chemical properties of the sensing materials. Although building these models provides an insight into the sensor behavior, they also require a thorough understanding of the underlying operating principles. Here we propose a data-driven approach to characterize the dynamical relationship between sensor inputs and outputs. Namely, we use dynamic Bayesian networks (DBNs), probabilistic models that represent temporal relations between a set of random variables. We identify a set of control variables that influence the sensor responses, create a graphical representation that captures the causal relations between these variables, and finally train the model with experimental data. We validated the approach on experimental data in terms of predictive accuracy and classification performance. Our results show that DBNs can accurately predict the dynamic response of MOX sensors, as well as capture the discriminatory information present in the sensor transients.

  2. Basin-scale heterogeneity in Antarctic precipitation and its impact on surface mass variability

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fyke, Jeremy; Lenaerts, Jan T. M.; Wang, Hailong

    Annually averaged precipitation in the form of snow, the dominant term of the Antarctic Ice Sheet surface mass balance, displays large spatial and temporal variability. Here we present an analysis of spatial patterns of regional Antarctic precipitation variability and their impact on integrated Antarctic surface mass balance variability simulated as part of a preindustrial 1800-year global, fully coupled Community Earth System Model simulation. Correlation and composite analyses based on this output allow for a robust exploration of Antarctic precipitation variability. We identify statistically significant relationships between precipitation patterns across Antarctica that are corroborated by climate reanalyses, regional modeling and icemore » core records. These patterns are driven by variability in large-scale atmospheric moisture transport, which itself is characterized by decadal- to centennial-scale oscillations around the long-term mean. We suggest that this heterogeneity in Antarctic precipitation variability has a dampening effect on overall Antarctic surface mass balance variability, with implications for regulation of Antarctic-sourced sea level variability, detection of an emergent anthropogenic signal in Antarctic mass trends and identification of Antarctic mass loss accelerations.« less

  3. Basin-scale heterogeneity in Antarctic precipitation and its impact on surface mass variability

    DOE PAGES

    Fyke, Jeremy; Lenaerts, Jan T. M.; Wang, Hailong

    2017-11-15

    Annually averaged precipitation in the form of snow, the dominant term of the Antarctic Ice Sheet surface mass balance, displays large spatial and temporal variability. Here we present an analysis of spatial patterns of regional Antarctic precipitation variability and their impact on integrated Antarctic surface mass balance variability simulated as part of a preindustrial 1800-year global, fully coupled Community Earth System Model simulation. Correlation and composite analyses based on this output allow for a robust exploration of Antarctic precipitation variability. We identify statistically significant relationships between precipitation patterns across Antarctica that are corroborated by climate reanalyses, regional modeling and icemore » core records. These patterns are driven by variability in large-scale atmospheric moisture transport, which itself is characterized by decadal- to centennial-scale oscillations around the long-term mean. We suggest that this heterogeneity in Antarctic precipitation variability has a dampening effect on overall Antarctic surface mass balance variability, with implications for regulation of Antarctic-sourced sea level variability, detection of an emergent anthropogenic signal in Antarctic mass trends and identification of Antarctic mass loss accelerations.« less

  4. Statistics, Uncertainty, and Transmitted Variation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wendelberger, Joanne Roth

    2014-11-05

    The field of Statistics provides methods for modeling and understanding data and making decisions in the presence of uncertainty. When examining response functions, variation present in the input variables will be transmitted via the response function to the output variables. This phenomenon can potentially have significant impacts on the uncertainty associated with results from subsequent analysis. This presentation will examine the concept of transmitted variation, its impact on designed experiments, and a method for identifying and estimating sources of transmitted variation in certain settings.

  5. Accounting for uncertainty in model-based prevalence estimation: paratuberculosis control in dairy herds.

    PubMed

    Davidson, Ross S; McKendrick, Iain J; Wood, Joanna C; Marion, Glenn; Greig, Alistair; Stevenson, Karen; Sharp, Michael; Hutchings, Michael R

    2012-09-10

    A common approach to the application of epidemiological models is to determine a single (point estimate) parameterisation using the information available in the literature. However, in many cases there is considerable uncertainty about parameter values, reflecting both the incomplete nature of current knowledge and natural variation, for example between farms. Furthermore model outcomes may be highly sensitive to different parameter values. Paratuberculosis is an infection for which many of the key parameter values are poorly understood and highly variable, and for such infections there is a need to develop and apply statistical techniques which make maximal use of available data. A technique based on Latin hypercube sampling combined with a novel reweighting method was developed which enables parameter uncertainty and variability to be incorporated into a model-based framework for estimation of prevalence. The method was evaluated by applying it to a simulation of paratuberculosis in dairy herds which combines a continuous time stochastic algorithm with model features such as within herd variability in disease development and shedding, which have not been previously explored in paratuberculosis models. Generated sample parameter combinations were assigned a weight, determined by quantifying the model's resultant ability to reproduce prevalence data. Once these weights are generated the model can be used to evaluate other scenarios such as control options. To illustrate the utility of this approach these reweighted model outputs were used to compare standard test and cull control strategies both individually and in combination with simple husbandry practices that aim to reduce infection rates. The technique developed has been shown to be applicable to a complex model incorporating realistic control options. For models where parameters are not well known or subject to significant variability, the reweighting scheme allowed estimated distributions of parameter values to be combined with additional sources of information, such as that available from prevalence distributions, resulting in outputs which implicitly handle variation and uncertainty. This methodology allows for more robust predictions from modelling approaches by allowing for parameter uncertainty and combining different sources of information, and is thus expected to be useful in application to a large number of disease systems.

  6. New climate change scenarios for the Netherlands.

    PubMed

    van den Hurk, B; Tank, A K; Lenderink, G; Ulden, A van; Oldenborgh, G J van; Katsman, C; Brink, H van den; Keller, F; Bessembinder, J; Burgers, G; Komen, G; Hazeleger, W; Drijfhout, S

    2007-01-01

    A new set of climate change scenarios for 2050 for the Netherlands was produced recently. The scenarios span a wide range of possible future climate conditions, and include climate variables that are of interest to a broad user community. The scenario values are constructed by combining output from an ensemble of recent General Climate Model (GCM) simulations, Regional Climate Model (RCM) output, meteorological observations and a touch of expert judgment. For temperature, precipitation, potential evaporation and wind four scenarios are constructed, encompassing ranges of both global mean temperature rise in 2050 and the strength of the response of the dominant atmospheric circulation in the area of interest to global warming. For this particular area, wintertime precipitation is seen to increase between 3.5 and 7% per degree global warming, but mean summertime precipitation shows opposite signs depending on the assumed response of the circulation regime. Annual maximum daily mean wind speed shows small changes compared to the observed (natural) variability of this variable. Sea level rise in the North Sea in 2100 ranges between 35 and 85 cm. Preliminary assessment of the impact of the new scenarios on water management and coastal defence policies indicate that particularly dry summer scenarios and increased intensity of extreme daily precipitation deserves additional attention in the near future.

  7. Structural equation modeling in environmental risk assessment.

    PubMed

    Buncher, C R; Succop, P A; Dietrich, K N

    1991-01-01

    Environmental epidemiology requires effective models that take individual observations of environmental factors and connect them into meaningful patterns. Single-factor relationships have given way to multivariable analyses; simple additive models have been augmented by multiplicative (logistic) models. Each of these steps has produced greater enlightenment and understanding. Models that allow for factors causing outputs that can affect later outputs with putative causation working at several different time points (e.g., linkage) are not commonly used in the environmental literature. Structural equation models are a class of covariance structure models that have been used extensively in economics/business and social science but are still little used in the realm of biostatistics. Path analysis in genetic studies is one simplified form of this class of models. We have been using these models in a study of the health and development of infants who have been exposed to lead in utero and in the postnatal home environment. These models require as input the directionality of the relationship and then produce fitted models for multiple inputs causing each factor and the opportunity to have outputs serve as input variables into the next phase of the simultaneously fitted model. Some examples of these models from our research are presented to increase familiarity with this class of models. Use of these models can provide insight into the effect of changing an environmental factor when assessing risk. The usual cautions concerning believing a model, believing causation has been proven, and the assumptions that are required for each model are operative.

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anderson, Katherine H.; Cutler, Dylan S.; Olis, Daniel R.

    REopt is a techno-economic decision support model used to optimize energy systems for buildings, campuses, communities, and microgrids. The primary application of the model is for optimizing the integration and operation of behind-the-meter energy assets. This report provides an overview of the model, including its capabilities and typical applications; inputs and outputs; economic calculations; technology descriptions; and model parameters, variables, and equations. The model is highly flexible, and is continually evolving to meet the needs of each analysis. Therefore, this report is not an exhaustive description of all capabilities, but rather a summary of the core components of the model.

  9. Magnetically Controlled Variable Transformer

    NASA Technical Reports Server (NTRS)

    Kleiner, Charles T.

    1994-01-01

    Improved variable-transformer circuit, output voltage and current of which controlled by use of relatively small current supplied at relatively low power to control windings on its magnetic cores. Transformer circuits of this type called "magnetic amplifiers" because ratio between controlled output power and power driving control current of such circuit large. This ratio - power gain - can be as large as 100 in present circuit. Variable-transformer circuit offers advantages of efficiency, safety, and controllability over some prior variable-transformer circuits.

  10. 3D Visualization of Hydrological Model Outputs For a Better Understanding of Multi-Scale Phenomena

    NASA Astrophysics Data System (ADS)

    Richard, J.; Schertzer, D. J. M.; Tchiguirinskaia, I.

    2014-12-01

    During the last decades, many hydrological models has been created to simulate extreme events or scenarios on catchments. The classical outputs of these models are 2D maps, time series or graphs, which are easily understood by scientists, but not so much by many stakeholders, e.g. mayors or local authorities, and the general public. One goal of the Blue Green Dream project is to create outputs that are adequate for them. To reach this goal, we decided to convert most of the model outputs into a unique 3D visualization interface that combines all of them. This conversion has to be performed with an hydrological thinking to keep the information consistent with the context and the raw outputs.We focus our work on the conversion of the outputs of the Multi-Hydro (MH) model, which is physically based, fully distributed and with a GIS data interface. MH splits the urban water cycle into 4 components: the rainfall, the surface runoff, the infiltration and the drainage. To each of them, corresponds a modeling module with specific inputs and outputs. The superimposition of all this information will highlight the model outputs and help to verify the quality of the raw input data. For example, the spatial and the time variability of the rain generated by the rainfall module will be directly visible in 4D (3D + time) before running a full simulation. It is the same with the runoff module: because the result quality depends of the resolution of the rasterized land use, it will confirm or not the choice of the cell size.As most of the inputs and outputs are GIS files, two main conversions will be applied to display the results into 3D. First, a conversion from vector files to 3D objects. For example, buildings are defined in 2D inside a GIS vector file. Each polygon can be extruded with an height to create volumes. The principle is the same for the roads but an intrusion, instead of an extrusion, is done inside the topography file. The second main conversion is the raster conversion. Several files, such as the topography, the land use, the water depth, etc., are defined by geo-referenced grids. The corresponding grids are converted into a list of triangles to be displayed inside the 3D window. For the water depth, the display in pixels will not longer be the only solution. Creation of water contours will be done to more easily delineate the flood inside the catchment.

  11. A flatness-based control approach to drug infusion for cardiac function regulation

    NASA Astrophysics Data System (ADS)

    Rigatos, Gerasimos; Zervos, Nikolaos; Melkikh, Alexey

    2016-12-01

    A new control method based on differential flatness theory is developed in this article, aiming at solving the problem of regulation of haemodynamic parameters, Actually control of the cardiac output (volume of blood pumped out by heart per unit of time) and of the arterial blood pressure is achieved through the administered infusion of cardiovascular drugs, such as dopamine and sodium nitroprusside. Time delays between the control inputs and the system's outputs are taken into account. Using the principle of dynamic extension, which means that by considering certain control inputs and their derivatives as additional state variables, a state-space description for the heart's function is obtained. It is proven that the dynamic model of the heart is a differentially flat one. This enables its transformation into a linear canonical and decoupled form, for which the design of a stabilizing feedback controller becomes possible. The proposed feedback controller is of proven stability and assures fast and accurate tracking of the reference setpoints by the outputs of the heart's dynamic model. Moreover, by using a Kalman Filter-based disturbances' estimator, it becomes possible to estimate in real-time and compensate for the model uncertainty and external perturbation inputs that affect the heart's model.

  12. Fabric filter model sensitivity analysis. Final report Jun 1978-Feb 1979

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dennis, R.; Klemm, H.A.; Battye, W.

    1979-04-01

    The report gives results of a series of sensitivity tests of a GCA fabric filter model, as a precursor to further laboratory and/or field tests. Preliminary tests had shown good agreement with field data. However, the apparent agreement between predicted and actual values was based on limited comparisons: validation was carried out without regard to optimization of the data inputs selected by the filter users or manufactures. The sensitivity tests involved introducing into the model several hypothetical data inputs that reflect the expected ranges in the principal filter system variables. Such factors as air/cloth ratio, cleaning frequency, amount of cleaning,more » specific resistence coefficient K2, the number of compartments, and inlet concentration were examined in various permutations. A key objective of the tests was to determine the variables that require the greatest accuracy in estimation based on their overall impact on model output. For K2 variations, the system resistance and emission properties showed little change; but the cleaning requirement changed drastically. On the other hand, considerable difference in outlet dust concentration was indicated when the degree of fabric cleaning was varied. To make the findings more useful to persons assessing the probable success of proposed or existing filter systems, much of the data output is presented in graphs or charts.« less

  13. Improving Diagnosis of Sepsis After Burn Injury Using a Portable Sepsis Alert System

    DTIC Science & Technology

    vital signs of heart rate variability, regional tissue oxygenation, and noninvasive cardiac output can diagnose burn sepsis earlier, reducing...morbidity and mortality. Rationale: Heart Rate Variability (HRV), regional Tissue Oxygenation, and non-invasive Cardiac Output (CO), have shown promise in

  14. Turbulence Variability in the Upper Layers of the Southern Adriatic Sea Under a Variety of Atmospheric Forcing Conditions

    DTIC Science & Technology

    2012-01-01

    Commission. Joint Research Centre. Space Applications Institute. Ispra/ltaly. Signell. R.P., Carniel. S„ Cavaleri, L. Chiggiato , J.. Doyle. J.D... Chiggiato . J.. Carniel. S.. 2008. Variational analysis of drifter positions and model outputs for the reconstruc- tions of surface currents in the

  15. A computational model for the prediction of jet entrainment in the vicinity of nozzle boattails (The BOAT code)

    NASA Technical Reports Server (NTRS)

    Dash, S. M.; Pergament, H. S.

    1978-01-01

    The basic code structure is discussed, including the overall program flow and a brief description of all subroutines. Instructions on the preparation of input data, definitions of key FORTRAN variables, sample input and output, and a complete listing of the code are presented.

  16. Origins of extrinsic variability in eukaryotic gene expression

    NASA Astrophysics Data System (ADS)

    Volfson, Dmitri; Marciniak, Jennifer; Blake, William J.; Ostroff, Natalie; Tsimring, Lev S.; Hasty, Jeff

    2006-02-01

    Variable gene expression within a clonal population of cells has been implicated in a number of important processes including mutation and evolution, determination of cell fates and the development of genetic disease. Recent studies have demonstrated that a significant component of expression variability arises from extrinsic factors thought to influence multiple genes simultaneously, yet the biological origins of this extrinsic variability have received little attention. Here we combine computational modelling with fluorescence data generated from multiple promoter-gene inserts in Saccharomyces cerevisiae to identify two major sources of extrinsic variability. One unavoidable source arising from the coupling of gene expression with population dynamics leads to a ubiquitous lower limit for expression variability. A second source, which is modelled as originating from a common upstream transcription factor, exemplifies how regulatory networks can convert noise in upstream regulator expression into extrinsic noise at the output of a target gene. Our results highlight the importance of the interplay of gene regulatory networks with population heterogeneity for understanding the origins of cellular diversity.

  17. Origins of extrinsic variability in eukaryotic gene expression

    NASA Astrophysics Data System (ADS)

    Volfson, Dmitri; Marciniak, Jennifer; Blake, William J.; Ostroff, Natalie; Tsimring, Lev S.; Hasty, Jeff

    2006-03-01

    Variable gene expression within a clonal population of cells has been implicated in a number of important processes including mutation and evolution, determination of cell fates and the development of genetic disease. Recent studies have demonstrated that a significant component of expression variability arises from extrinsic factors thought to influence multiple genes in concert, yet the biological origins of this extrinsic variability have received little attention. Here we combine computational modeling with fluorescence data generated from multiple promoter-gene inserts in Saccharomyces cerevisiae to identify two major sources of extrinsic variability. One unavoidable source arising from the coupling of gene expression with population dynamics leads to a ubiquitous noise floor in expression variability. A second source which is modeled as originating from a common upstream transcription factor exemplifies how regulatory networks can convert noise in upstream regulator expression into extrinsic noise at the output of a target gene. Our results highlight the importance of the interplay of gene regulatory networks with population heterogeneity for understanding the origins of cellular diversity.

  18. SOS based robust H(∞) fuzzy dynamic output feedback control of nonlinear networked control systems.

    PubMed

    Chae, Seunghwan; Nguang, Sing Kiong

    2014-07-01

    In this paper, a methodology for designing a fuzzy dynamic output feedback controller for discrete-time nonlinear networked control systems is presented where the nonlinear plant is modelled by a Takagi-Sugeno fuzzy model and the network-induced delays by a finite state Markov process. The transition probability matrix for the Markov process is allowed to be partially known, providing a more practical consideration of the real world. Furthermore, the fuzzy controller's membership functions and premise variables are not assumed to be the same as the plant's membership functions and premise variables, that is, the proposed approach can handle the case, when the premise of the plant are not measurable or delayed. The membership functions of the plant and the controller are approximated as polynomial functions, then incorporated into the controller design. Sufficient conditions for the existence of the controller are derived in terms of sum of square inequalities, which are then solved by YALMIP. Finally, a numerical example is used to demonstrate the validity of the proposed methodology.

  19. Output Feedback Distributed Containment Control for High-Order Nonlinear Multiagent Systems.

    PubMed

    Li, Yafeng; Hua, Changchun; Wu, Shuangshuang; Guan, Xinping

    2017-01-31

    In this paper, we study the problem of output feedback distributed containment control for a class of high-order nonlinear multiagent systems under a fixed undirected graph and a fixed directed graph, respectively. Only the output signals of the systems can be measured. The novel reduced order dynamic gain observer is constructed to estimate the unmeasured state variables of the system with the less conservative condition on nonlinear terms than traditional Lipschitz one. Via the backstepping method, output feedback distributed nonlinear controllers for the followers are designed. By means of the novel first virtual controllers, we separate the estimated state variables of different agents from each other. Consequently, the designed controllers show independence on the estimated state variables of neighbors except outputs information, and the dynamics of each agent can be greatly different, which make the design method have a wider class of applications. Finally, a numerical simulation is presented to illustrate the effectiveness of the proposed method.

  20. Soft Mixer Assignment in a Hierarchical Generative Model of Natural Scene Statistics

    PubMed Central

    Schwartz, Odelia; Sejnowski, Terrence J.; Dayan, Peter

    2010-01-01

    Gaussian scale mixture models offer a top-down description of signal generation that captures key bottom-up statistical characteristics of filter responses to images. However, the pattern of dependence among the filters for this class of models is prespecified. We propose a novel extension to the gaussian scale mixture model that learns the pattern of dependence from observed inputs and thereby induces a hierarchical representation of these inputs. Specifically, we propose that inputs are generated by gaussian variables (modeling local filter structure), multiplied by a mixer variable that is assigned probabilistically to each input from a set of possible mixers. We demonstrate inference of both components of the generative model, for synthesized data and for different classes of natural images, such as a generic ensemble and faces. For natural images, the mixer variable assignments show invariances resembling those of complex cells in visual cortex; the statistics of the gaussian components of the model are in accord with the outputs of divisive normalization models. We also show how our model helps interrelate a wide range of models of image statistics and cortical processing. PMID:16999575

  1. Use of collateral information to improve LANDSAT classification accuracies

    NASA Technical Reports Server (NTRS)

    Strahler, A. H. (Principal Investigator)

    1981-01-01

    Methods to improve LANDSAT classification accuracies were investigated including: (1) the use of prior probabilities in maximum likelihood classification as a methodology to integrate discrete collateral data with continuously measured image density variables; (2) the use of the logit classifier as an alternative to multivariate normal classification that permits mixing both continuous and categorical variables in a single model and fits empirical distributions of observations more closely than the multivariate normal density function; and (3) the use of collateral data in a geographic information system as exercised to model a desired output information layer as a function of input layers of raster format collateral and image data base layers.

  2. Optimal Frequency-Domain System Realization with Weighting

    NASA Technical Reports Server (NTRS)

    Juang, Jer-Nan; Maghami, Peiman G.

    1999-01-01

    Several approaches are presented to identify an experimental system model directly from frequency response data. The formulation uses a matrix-fraction description as the model structure. Frequency weighting such as exponential weighting is introduced to solve a weighted least-squares problem to obtain the coefficient matrices for the matrix-fraction description. A multi-variable state-space model can then be formed using the coefficient matrices of the matrix-fraction description. Three different approaches are introduced to fine-tune the model using nonlinear programming methods to minimize the desired cost function. The first method uses an eigenvalue assignment technique to reassign a subset of system poles to improve the identified model. The second method deals with the model in the real Schur or modal form, reassigns a subset of system poles, and adjusts the columns (rows) of the input (output) influence matrix using a nonlinear optimizer. The third method also optimizes a subset of poles, but the input and output influence matrices are refined at every optimization step through least-squares procedures.

  3. Synchronized Trajectories in a Climate "Supermodel"

    NASA Astrophysics Data System (ADS)

    Duane, Gregory; Schevenhoven, Francine; Selten, Frank

    2017-04-01

    Differences in climate projections among state-of-the-art models can be resolved by connecting the models in run-time, either through inter-model nudging or by directly combining the tendencies for corresponding variables. Since it is clearly established that averaging model outputs typically results in improvement as compared to any individual model output, averaged re-initializations at typical analysis time intervals also seems appropriate. The resulting "supermodel" is more like a single model than it is like an ensemble, because the constituent models tend to synchronize even with limited inter-model coupling. Thus one can examine the properties of specific trajectories, rather than averaging the statistical properties of the separate models. We apply this strategy to a study of the index cycle in a supermodel constructed from several imperfect copies of the SPEEDO model (a global primitive-equation atmosphere-ocean-land climate model). As with blocking frequency, typical weather statistics of interest like probabilities of heat waves or extreme precipitation events, are improved as compared to the standard multi-model ensemble approach. In contrast to the standard approach, the supermodel approach provides detailed descriptions of typical actual events.

  4. Field studies in support of Nimbus-E surface composition mapping radiometer

    NASA Technical Reports Server (NTRS)

    Lyon, R. J. P.; Green, A. A.

    1973-01-01

    If the outputs of the two channels are spacially registered and combined to generate a third variable which reflects the differences between the two outputs, then this variable can then be redisplayed in image form and its magnitude should be relatable to the silica content of the rocks imaged. Two methods were proposed for generating this third variable, the first is to take the difference in apparent temperature between the two channels and the second is to ratio the voltage outputs of the two channels. The responses of the two channel high resolution surface composition mapping radiometer and the thermal channels of the MSDS scanner were calculated from data recorded with the NASA IR pallet and simulate the output of these systems had they been flying over the same targets as the IR pallet.

  5. Spatial Variability of Trace Gases During DISCOVER-AQ: Planning for Geostationary Observations of Atmospheric Composition

    NASA Technical Reports Server (NTRS)

    Follette-Cook, Melanie B.; Pickering, K.; Crawford, J.; Appel, W.; Diskin, G.; Fried, A.; Loughner, C.; Pfister, G.; Weinheimer, A.

    2015-01-01

    Results from an in-depth analysis of trace gas variability in MD indicated that the variability in this region was large enough to be observable by a TEMPO-like instrument. The variability observed in MD is relatively similar to the other three campaigns with a few exceptions: CO variability in CA was much higher than in the other regions; HCHO variability in CA and CO was much lower; MD showed the lowest variability in NO2All model simulations do a reasonable job simulating O3 variability. For CO, the CACO simulations largely under over estimate the variability in the observations. The variability in HCHO is underestimated for every campaign. NO2 variability is slightly overestimated in MD, more so in CO. The TX simulation underestimates the variability in each trace gas. This is most likely due to missing emissions sources (C. Loughner, manuscript in preparation).Future Work: Where reasonable, we will use these model outputs to further explore the resolvability from space of these key trace gases using analyses of tropospheric column amounts relative to satellite precision requirements, similar to Follette-Cook et al. (2015).

  6. Intensity-level assessment of lower body plyometric exercises based on mechanical output of lower limb joints.

    PubMed

    Sugisaki, Norihide; Okada, Junichi; Kanehisa, Hiroaki

    2013-01-01

    The present study aimed to quantify the intensity of lower extremity plyometric exercises by determining joint mechanical output. Ten men (age, 27.3 ± 4.1 years; height, 173.6 ± 5.4 cm; weight, 69.4 ± 6.0 kg; 1-repetition maximum [1RM] load in back squat 118.5 ± 12.0 kg) performed the following seven plyometric exercises: two-foot ankle hop, repeated squat jump, double-leg hop, depth jumps from 30 and 60 cm, and single-leg and double-leg tuck jumps. Mechanical output variables (torque, angular impulse, power, and work) at the lower limb joints were determined using inverse-dynamics analysis. For all measured variables, ANOVA revealed significant main effects of exercise type for all joints (P < 0.05) along with significant interactions between joint and exercise (P < 0.01), indicating that the influence of exercise type on mechanical output varied among joints. Paired comparisons revealed that there were marked differences in mechanical output at the ankle and hip joints; most of the variables at the ankle joint were greatest for two-foot ankle hop and tuck jumps, while most hip joint variables were greatest for repeated squat jump or double-leg hop. The present results indicate the necessity for determining mechanical output for each joint when evaluating the intensity of plyometric exercises.

  7. Developing an approach to effectively use super ensemble experiments for the projection of hydrological extremes under climate change

    NASA Astrophysics Data System (ADS)

    Watanabe, S.; Kim, H.; Utsumi, N.

    2017-12-01

    This study aims to develop a new approach which projects hydrology under climate change using super ensemble experiments. The use of multiple ensemble is essential for the estimation of extreme, which is a major issue in the impact assessment of climate change. Hence, the super ensemble experiments are recently conducted by some research programs. While it is necessary to use multiple ensemble, the multiple calculations of hydrological simulation for each output of ensemble simulations needs considerable calculation costs. To effectively use the super ensemble experiments, we adopt a strategy to use runoff projected by climate models directly. The general approach of hydrological projection is to conduct hydrological model simulations which include land-surface and river routing process using atmospheric boundary conditions projected by climate models as inputs. This study, on the other hand, simulates only river routing model using runoff projected by climate models. In general, the climate model output is systematically biased so that a preprocessing which corrects such bias is necessary for impact assessments. Various bias correction methods have been proposed, but, to the best of our knowledge, no method has proposed for variables other than surface meteorology. Here, we newly propose a method for utilizing the projected future runoff directly. The developed method estimates and corrects the bias based on the pseudo-observation which is a result of retrospective offline simulation. We show an application of this approach to the super ensemble experiments conducted under the program of Half a degree Additional warming, Prognosis and Projected Impacts (HAPPI). More than 400 ensemble experiments from multiple climate models are available. The results of the validation using historical simulations by HAPPI indicates that the output of this approach can effectively reproduce retrospective runoff variability. Likewise, the bias of runoff from super ensemble climate projections is corrected, and the impact of climate change on hydrologic extremes is assessed in a cost-efficient way.

  8. Dynamic control of remelting processes

    DOEpatents

    Bertram, Lee A.; Williamson, Rodney L.; Melgaard, David K.; Beaman, Joseph J.; Evans, David G.

    2000-01-01

    An apparatus and method of controlling a remelting process by providing measured process variable values to a process controller; estimating process variable values using a process model of a remelting process; and outputting estimated process variable values from the process controller. Feedback and feedforward control devices receive the estimated process variable values and adjust inputs to the remelting process. Electrode weight, electrode mass, electrode gap, process current, process voltage, electrode position, electrode temperature, electrode thermal boundary layer thickness, electrode velocity, electrode acceleration, slag temperature, melting efficiency, cooling water temperature, cooling water flow rate, crucible temperature profile, slag skin temperature, and/or drip short events are employed, as are parameters representing physical constraints of electroslag remelting or vacuum arc remelting, as applicable.

  9. Systems Engineering-Based Tool for Identifying Critical Research Systems

    ERIC Educational Resources Information Center

    Abbott, Rodman P.; Stracener, Jerrell

    2016-01-01

    This study investigates the relationship between the designated research project system independent variables of Labor, Travel, Equipment, and Contract total annual costs and the dependent variables of both the associated matching research project total annual academic publication output and thesis/dissertation number output. The Mahalanobis…

  10. Mechanisms of the 40-70 Day Variability in the Yucatan Channel Volume Transport

    NASA Astrophysics Data System (ADS)

    van Westen, René M.; Dijkstra, Henk A.; Klees, Roland; Riva, Riccardo E. M.; Slobbe, D. Cornelis; van der Boog, Carine G.; Katsman, Caroline A.; Candy, Adam S.; Pietrzak, Julie D.; Zijlema, Marcel; James, Rebecca K.; Bouma, Tjeerd J.

    2018-02-01

    The Yucatan Channel connects the Caribbean Sea with the Gulf of Mexico and is the main outflow region of the Caribbean Sea. Moorings in the Yucatan Channel show high-frequent variability in kinetic energy (50-100 days) and transport (20-40 days), but the physical mechanisms controlling this variability are poorly understood. In this study, we show that the short-term variability in the Yucatan Channel transport has an upstream origin and arises from processes in the North Brazil Current. To establish this connection, we use data from altimetry and model output from several high resolution global models. A significant 40-70 day variability is found in the sea surface height in the North Brazil Current retroflection region with a propagation toward the Lesser Antilles. The frequency of variability is generated by intrinsic processes associated with the shedding of eddies, rather than by atmospheric forcing. This sea surface height variability is able to pass the Lesser Antilles, it propagates westward with the background ocean flow in the Caribbean Sea and finally affects the variability in the Yucatan Channel volume transport.

  11. Statistical surrogate models for prediction of high-consequence climate change.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Constantine, Paul; Field, Richard V., Jr.; Boslough, Mark Bruce Elrick

    2011-09-01

    In safety engineering, performance metrics are defined using probabilistic risk assessments focused on the low-probability, high-consequence tail of the distribution of possible events, as opposed to best estimates based on central tendencies. We frame the climate change problem and its associated risks in a similar manner. To properly explore the tails of the distribution requires extensive sampling, which is not possible with existing coupled atmospheric models due to the high computational cost of each simulation. We therefore propose the use of specialized statistical surrogate models (SSMs) for the purpose of exploring the probability law of various climate variables of interest.more » A SSM is different than a deterministic surrogate model in that it represents each climate variable of interest as a space/time random field. The SSM can be calibrated to available spatial and temporal data from existing climate databases, e.g., the Program for Climate Model Diagnosis and Intercomparison (PCMDI), or to a collection of outputs from a General Circulation Model (GCM), e.g., the Community Earth System Model (CESM) and its predecessors. Because of its reduced size and complexity, the realization of a large number of independent model outputs from a SSM becomes computationally straightforward, so that quantifying the risk associated with low-probability, high-consequence climate events becomes feasible. A Bayesian framework is developed to provide quantitative measures of confidence, via Bayesian credible intervals, in the use of the proposed approach to assess these risks.« less

  12. The influence of the way the muscle force is modeled on the predicted results obtained by solving indeterminate problems for a fast elbow flexion.

    PubMed

    Raikova, Rositsa; Aladjov, Hristo

    2003-06-01

    A critical point in models of the human limbs when the aim is to investigate the motor control is the muscle model. More often the mechanical output of a muscle is considered as one musculotendon force that is a design variable in optimization tasks solved predominantly by static optimization. For dynamic conditions, the relationship between the developed force, the length and the contraction velocity of a muscle becomes important and rheological muscle models can be incorporated in the optimization tasks. Here the muscle activation can be a design variable as well. Recently a new muscle model was proposed. A muscle is considered as a mixture of motor units (MUs) with different peculiarities and the muscle force is calculated as a sum of the MUs twitches. The aim of the paper is to compare these three ways for presenting the muscle force. Fast elbow flexion is investigated using a planar model with five muscles. It is concluded that the rheological models are suitable for calculation of the current maximal muscle forces that can be used as weight factors in the objective functions. The model based on MUs has many advantages for precise investigations of motor control. Such muscle presentation can explain the muscle co-contraction and the role of the fast and the slow MUs. The relationship between the MUs activation and the mechanical output is more clear and closer to the reality.

  13. A probabilistic method for constructing wave time-series at inshore locations using model scenarios

    USGS Publications Warehouse

    Long, Joseph W.; Plant, Nathaniel G.; Dalyander, P. Soupy; Thompson, David M.

    2014-01-01

    Continuous time-series of wave characteristics (height, period, and direction) are constructed using a base set of model scenarios and simple probabilistic methods. This approach utilizes an archive of computationally intensive, highly spatially resolved numerical wave model output to develop time-series of historical or future wave conditions without performing additional, continuous numerical simulations. The archive of model output contains wave simulations from a set of model scenarios derived from an offshore wave climatology. Time-series of wave height, period, direction, and associated uncertainties are constructed at locations included in the numerical model domain. The confidence limits are derived using statistical variability of oceanographic parameters contained in the wave model scenarios. The method was applied to a region in the northern Gulf of Mexico and assessed using wave observations at 12 m and 30 m water depths. Prediction skill for significant wave height is 0.58 and 0.67 at the 12 m and 30 m locations, respectively, with similar performance for wave period and direction. The skill of this simplified, probabilistic time-series construction method is comparable to existing large-scale, high-fidelity operational wave models but provides higher spatial resolution output at low computational expense. The constructed time-series can be developed to support a variety of applications including climate studies and other situations where a comprehensive survey of wave impacts on the coastal area is of interest.

  14. Analysis of Sting Balance Calibration Data Using Optimized Regression Models

    NASA Technical Reports Server (NTRS)

    Ulbrich, Norbert; Bader, Jon B.

    2009-01-01

    Calibration data of a wind tunnel sting balance was processed using a search algorithm that identifies an optimized regression model for the data analysis. The selected sting balance had two moment gages that were mounted forward and aft of the balance moment center. The difference and the sum of the two gage outputs were fitted in the least squares sense using the normal force and the pitching moment at the balance moment center as independent variables. The regression model search algorithm predicted that the difference of the gage outputs should be modeled using the intercept and the normal force. The sum of the two gage outputs, on the other hand, should be modeled using the intercept, the pitching moment, and the square of the pitching moment. Equations of the deflection of a cantilever beam are used to show that the search algorithm s two recommended math models can also be obtained after performing a rigorous theoretical analysis of the deflection of the sting balance under load. The analysis of the sting balance calibration data set is a rare example of a situation when regression models of balance calibration data can directly be derived from first principles of physics and engineering. In addition, it is interesting to see that the search algorithm recommended the same regression models for the data analysis using only a set of statistical quality metrics.

  15. Analysis of Sting Balance Calibration Data Using Optimized Regression Models

    NASA Technical Reports Server (NTRS)

    Ulbrich, N.; Bader, Jon B.

    2010-01-01

    Calibration data of a wind tunnel sting balance was processed using a candidate math model search algorithm that recommends an optimized regression model for the data analysis. During the calibration the normal force and the moment at the balance moment center were selected as independent calibration variables. The sting balance itself had two moment gages. Therefore, after analyzing the connection between calibration loads and gage outputs, it was decided to choose the difference and the sum of the gage outputs as the two responses that best describe the behavior of the balance. The math model search algorithm was applied to these two responses. An optimized regression model was obtained for each response. Classical strain gage balance load transformations and the equations of the deflection of a cantilever beam under load are used to show that the search algorithm s two optimized regression models are supported by a theoretical analysis of the relationship between the applied calibration loads and the measured gage outputs. The analysis of the sting balance calibration data set is a rare example of a situation when terms of a regression model of a balance can directly be derived from first principles of physics. In addition, it is interesting to note that the search algorithm recommended the correct regression model term combinations using only a set of statistical quality metrics that were applied to the experimental data during the algorithm s term selection process.

  16. System for computer controlled shifting of an automatic transmission

    DOEpatents

    Patil, Prabhakar B.

    1989-01-01

    In an automotive vehicle having an automatic transmission that driveably connects a power source to the driving wheels, a method to control the application of hydraulic pressure to a clutch, whose engagement produces an upshift and whose disengagement produces a downshift, the speed of the power source, and the output torque of the transmission. The transmission output shaft torque and the power source speed are the controlled variables. The commanded power source torque and commanded hydraulic pressure supplied to the clutch are the control variables. A mathematical model is formulated that describes the kinematics and dynamics of the powertrain before, during and after a gear shift. The model represents the operating characteristics of each component and the structural arrangement of the components within the transmission being controlled. Next, a close loop feedback control is developed to determine the proper control law or compensation strategy to achieve an acceptably smooth gear ratio change, one in which the output torque disturbance is kept to a minimum and the duration of the shift is minimized. Then a computer algorithm simulating the shift dynamics employing the mathematical model is used to study the effects of changes in the values of the parameters established from a closed loop control of the clutch hydraulic and the power source torque on the shift quality. This computer simulation is used also to establish possible shift control strategies. The shift strategies determine from the prior step are reduced to an algorithm executed by a computer to control the operation of the power source and the transmission.

  17. Closed loop computer control for an automatic transmission

    DOEpatents

    Patil, Prabhakar B.

    1989-01-01

    In an automotive vehicle having an automatic transmission that driveably connects a power source to the driving wheels, a method to control the application of hydraulic pressure to a clutch, whose engagement produces an upshift and whose disengagement produces a downshift, the speed of the power source, and the output torque of the transmission. The transmission output shaft torque and the power source speed are the controlled variables. The commanded power source torque and commanded hydraulic pressure supplied to the clutch are the control variables. A mathematical model is formulated that describes the kinematics and dynamics of the powertrain before, during and after a gear shift. The model represents the operating characteristics of each component and the structural arrangement of the components within the transmission being controlled. Next, a close loop feedback control is developed to determine the proper control law or compensation strategy to achieve an acceptably smooth gear ratio change, one in which the output torque disturbance is kept to a minimum and the duration of the shift is minimized. Then a computer algorithm simulating the shift dynamics employing the mathematical model is used to study the effects of changes in the values of the parameters established from a closed loop control of the clutch hydraulic and the power source torque on the shift quality. This computer simulation is used also to establish possible shift control strategies. The shift strategies determined from the prior step are reduced to an algorithm executed by a computer to control the operation of the power source and the transmission.

  18. Rotary ultrasonic machining of CFRP: a mechanistic predictive model for cutting force.

    PubMed

    Cong, W L; Pei, Z J; Sun, X; Zhang, C L

    2014-02-01

    Cutting force is one of the most important output variables in rotary ultrasonic machining (RUM) of carbon fiber reinforced plastic (CFRP) composites. Many experimental investigations on cutting force in RUM of CFRP have been reported. However, in the literature, there are no cutting force models for RUM of CFRP. This paper develops a mechanistic predictive model for cutting force in RUM of CFRP. The material removal mechanism of CFRP in RUM has been analyzed first. The model is based on the assumption that brittle fracture is the dominant mode of material removal. CFRP micromechanical analysis has been conducted to represent CFRP as an equivalent homogeneous material to obtain the mechanical properties of CFRP from its components. Based on this model, relationships between input variables (including ultrasonic vibration amplitude, tool rotation speed, feedrate, abrasive size, and abrasive concentration) and cutting force can be predicted. The relationships between input variables and important intermediate variables (indentation depth, effective contact time, and maximum impact force of single abrasive grain) have been investigated to explain predicted trends of cutting force. Experiments are conducted to verify the model, and experimental results agree well with predicted trends from this model. Copyright © 2013 Elsevier B.V. All rights reserved.

  19. Complex Dynamics in a Triopoly Game with Multiple Delays in the Competition of Green Product Level

    NASA Astrophysics Data System (ADS)

    Si, Fengshan; Ma, Junhai

    Research on the output game behavior of oligopoly has greatly advanced in recent years. But many unknowns remain, particularly the influence of consumers’ willingness to buy green products on the oligopoly output game. This paper constructs a triopoly output game model with multiple delays in the competition of green products. The influence of the parameters on the stability and complexity of the system is studied by analyzing the existence and local asymptotic stability of the equilibrium point. It is found that the system loses stability and increases complexity if delay parameters exceed a certain range. In the unstable or chaotic game market, the decisions of oligopoly will be counterproductive. It is also observed that the influence of weight and output adjustment speed on the firm itself is obviously stronger than the influence of other firms. In addition, it is important that weight and output adjustment speed cannot increase indefinitely, otherwise it will bring unnecessary losses to the firm. Finally, chaos control is realized by using the variable feedback control method. The research results of this paper can provide a reference for decision-making for the output of the game of oligopoly.

  20. The Productivity Analysis of Chennai Automotive Industry Cluster

    NASA Astrophysics Data System (ADS)

    Bhaskaran, E.

    2014-07-01

    Chennai, also called the Detroit of India, is India's second fastest growing auto market and exports auto components and vehicles to US, Germany, Japan and Brazil. For inclusive growth and sustainable development, 250 auto component industries in Ambattur, Thirumalisai and Thirumudivakkam Industrial Estates located in Chennai have adopted the Cluster Development Approach called Automotive Component Cluster. The objective is to study the Value Chain, Correlation and Data Envelopment Analysis by determining technical efficiency, peer weights, input and output slacks of 100 auto component industries in three estates. The methodology adopted is using Data Envelopment Analysis of Output Oriented Banker Charnes Cooper model by taking net worth, fixed assets, employment as inputs and gross output as outputs. The non-zero represents the weights for efficient clusters. The higher slack obtained reveals the excess net worth, fixed assets, employment and shortage in gross output. To conclude, the variables are highly correlated and the inefficient industries should increase their gross output or decrease the fixed assets or employment. Moreover for sustainable development, the cluster should strengthen infrastructure, technology, procurement, production and marketing interrelationships to decrease costs and to increase productivity and efficiency to compete in the indigenous and export market.

  1. Geography and the costs of urban energy infrastructure: The case of electricity and natural gas capital investments

    NASA Astrophysics Data System (ADS)

    Senyel, Muzeyyen Anil

    Investments in the urban energy infrastructure for distributing electricity and natural gas are analyzed using (1) property data measuring distribution plant value at the local/tax district level, and (2) system outputs such as sectoral numbers of customers and energy sales, input prices, company-specific characteristics such as average wages and load factor. Socio-economic and site-specific urban and geographic variables, however, often been neglected in past studies. The purpose of this research is to incorporate these site-specific characteristics of electricity and natural gas distribution into investment cost model estimations. These local characteristics include (1) socio-economic variables, such as income and wealth; (2) urban-related variables, such as density, land-use, street pattern, housing pattern; (3) geographic and environmental variables, such as soil, topography, and weather, and (4) company-specific characteristics such as average wages, and load factor. The classical output variables include residential and commercial-industrial customers and sales. In contrast to most previous research, only capital investments at the local level are considered. In addition to aggregate cost modeling, the analysis focuses on the investment costs for the system components: overhead conductors, underground conductors, conduits, poles, transformers, services, street lighting, and station equipment for electricity distribution; and mains, services, regular and industrial measurement and regulation stations for natural gas distribution. The Box-Cox, log-log and additive models are compared to determine the best fitting cost functions. The Box-Cox form turns out to be superior to the other forms at the aggregate level and for network components. However, a linear additive form provides a better fit for end-user related components. The results show that, in addition to output variables and company-specific variables, various site-specific variables are statistically significant at the aggregate and disaggregate levels. Local electricity and natural gas distribution networks are characterized by a natural monopoly cost structure and economies of scale and density. The results provide evidence for the economies of scale and density for the aggregate electricity and natural gas distribution systems. However, distribution components have varying economic characteristics. The backbones of the networks (overhead conductors for electricity, and mains for natural gas) display economies of scale and density, but services in both systems and street lighting display diseconomies of scale and diseconomies of density. Finally multi-utility network cost analyses are presented for aggregate and disaggregate electricity and natural gas capital investments. Economies of scope analyses investigate whether providing electricity and natural gas jointly is economically advantageous, as compared to providing these products separately. Significant economies of scope are observed for both the total network and the underground capital investments.

  2. Evaluation of Deep Learning Models for Predicting CO2 Flux

    NASA Astrophysics Data System (ADS)

    Halem, M.; Nguyen, P.; Frankel, D.

    2017-12-01

    Artificial neural networks have been employed to calculate surface flux measurements from station data because they are able to fit highly nonlinear relations between input and output variables without knowing the detail relationships between the variables. However, the accuracy in performing neural net estimates of CO2 flux from observations of CO2 and other atmospheric variables is influenced by the architecture of the neural model, the availability, and complexity of interactions between physical variables such as wind, temperature, and indirect variables like latent heat, and sensible heat, etc. We evaluate two deep learning models, feed forward and recurrent neural network models to learn how they each respond to the physical measurements, time dependency of the measurements of CO2 concentration, humidity, pressure, temperature, wind speed etc. for predicting the CO2 flux. In this paper, we focus on a) building neural network models for estimating CO2 flux based on DOE data from tower Atmospheric Radiation Measurement data; b) evaluating the impact of choosing the surface variables and model hyper-parameters on the accuracy and predictions of surface flux; c) assessing the applicability of the neural network models on estimate CO2 flux by using OCO-2 satellite data; d) studying the efficiency of using GPU-acceleration for neural network performance using IBM Power AI deep learning software and packages on IBM Minsky system.

  3. Networked iterative learning control design for discrete-time systems with stochastic communication delay in input and output channels

    NASA Astrophysics Data System (ADS)

    Liu, Jian; Ruan, Xiaoe

    2017-07-01

    This paper develops two kinds of derivative-type networked iterative learning control (NILC) schemes for repetitive discrete-time systems with stochastic communication delay occurred in input and output channels and modelled as 0-1 Bernoulli-type stochastic variable. In the two schemes, the delayed signal of the current control input is replaced by the synchronous input utilised at the previous iteration, whilst for the delayed signal of the system output the one scheme substitutes it by the synchronous predetermined desired trajectory and the other takes it by the synchronous output at the previous operation, respectively. In virtue of the mathematical expectation, the tracking performance is analysed which exhibits that for both the linear time-invariant and nonlinear affine systems the two kinds of NILCs are convergent under the assumptions that the probabilities of communication delays are adequately constrained and the product of the input-output coupling matrices is full-column rank. Last, two illustrative examples are presented to demonstrate the effectiveness and validity of the proposed NILC schemes.

  4. Emulation and Sensitivity Analysis of the Community Multiscale Air Quality Model for a UK Ozone Pollution Episode.

    PubMed

    Beddows, Andrew V; Kitwiroon, Nutthida; Williams, Martin L; Beevers, Sean D

    2017-06-06

    Gaussian process emulation techniques have been used with the Community Multiscale Air Quality model, simulating the effects of input uncertainties on ozone and NO 2 output, to allow robust global sensitivity analysis (SA). A screening process ranked the effect of perturbations in 223 inputs, isolating the 30 most influential from emissions, boundary conditions (BCs), and reaction rates. Community Multiscale Air Quality (CMAQ) simulations of a July 2006 ozone pollution episode in the UK were made with input values for these variables plus ozone dry deposition velocity chosen according to a 576 point Latin hypercube design. Emulators trained on the output of these runs were used in variance-based SA of the model output to input uncertainties. Performing these analyses for every hour of a 21 day period spanning the episode and several days on either side allowed the results to be presented as a time series of sensitivity coefficients, showing how the influence of different input uncertainties changed during the episode. This is one of the most complex models to which these methods have been applied, and here, they reveal detailed spatiotemporal patterns of model sensitivities, with NO and isoprene emissions, NO 2 photolysis, ozone BCs, and deposition velocity being among the most influential input uncertainties.

  5. Unsteady Aerodynamic Testing Using the Dynamic Plunge Pitch and Roll Model Mount

    NASA Technical Reports Server (NTRS)

    Lutze, Frederick H.; Fan, Yigang

    1999-01-01

    A final report on the DyPPiR tests that were run are presented. Essentially it consists of two parts, a description of the data reduction techniques and the results. The data reduction techniques include three methods that were considered: 1) signal processing of wind on - wind off data; 2) using wind on data in conjunction with accelerometer measurements; and 3) using a dynamic model of the sting to predict the sting oscillations and determining the aerodynamic inputs using an optimization process. After trying all three, we ended up using method 1, mainly because of its simplicity and our confidence in its accuracy. The results section consists of time history plots of the input variables (angle of attack, roll angle, and/or plunge position) and the corresponding time histories of the output variables, C(sub L), C(sub D), C(sub m), C(sub l), C(sub m), C(sub n). Also included are some phase plots of one or more of the output variable vs. an input variable. Typically of interest are pitch moment coefficient vs. angle of attack for an oscillatory motion where the hysteresis loops can be observed. These plots are useful to determine the "more interesting" cases. Samples of the data as it appears on the disk are presented at the end of the report. The last maneuver, a rolling pull up, is indicative of the unique capabilities of the DyPPiR, allowing combinations of motions to be exercised at the same time.

  6. Phase inverter provides variable reference push-pull output

    NASA Technical Reports Server (NTRS)

    1966-01-01

    Dual-transistor difference amplifier provides a push-pull output referenced to a dc potential which can be varied without affecting the signal levels. The amplifier is coupled with a feedback circuit which can vary the operating points of the transistors by equal amounts to provide the variable reference potentials.

  7. Spray outputs from a variable-rate sprayer manipulated with PWM solenoid valves

    USDA-ARS?s Scientific Manuscript database

    Pressure fluctuations during variable-rate spray applications can affect nozzle flow rate fluctuations, resulting in spray outputs that do not coincide with the prescribed canopy structure volume. Variations in total flow rate discharged from 40 nozzles, each coupled with a pulse-width-modulated (PW...

  8. A Comparative Analysis of the Efficiency of National Education Systems

    ERIC Educational Resources Information Center

    Thieme, Claudio; Gimenez, Victor; Prior, Diego

    2012-01-01

    The present study assesses the performance of 54 participating countries in PISA 2006. It employs efficiency indicators that relate result variables with resource variables used in the production of educational services. Desirable outputs of educational achievement and undesirable outputs of educational inequality are considered jointly as result…

  9. Mars-GRAM Applications for Mars Science Laboratory Mission Site Selection Processes

    NASA Technical Reports Server (NTRS)

    Justh, Hilary; Justus, C. G.

    2007-01-01

    An overview is presented of the Mars-Global Reference Atmospheric Model (Mars-GRAM 2005) and its new features. One important new feature is the "auxiliary profile" option, whereby a simple input file is used to replace mean atmospheric values from Mars-GRAM's conventional (General Circulation Model) climatology. An auxiliary profile can be generated from any source of data or alternate model output. Results are presented using auxiliary profiles produced from mesoscale model output (Southwest Research Institute's Mars Regional Atmospheric Modeling System (MRAMS) model and Oregon State University's Mars mesoscale model (MMM5) model) for three candidate Mars Science Laboratory (MSL) landing sites (Terby Crater, Melas Chasma, and Gale Crater). A global Thermal Emission Spectrometer (TES) database has also been generated for purposes of making 'Mars-GRAM auxiliary profiles. This data base contains averages and standard deviations of temperature, density, and thermal wind components, averaged over 5-by-5 degree latitude bins and 15 degree L(sub S) bins, for each of three Mars years of TES nadir data. Comparisons show reasonably good consistency between Mars-GRAM with low dust optical depth and both TES observed and mesoscale model simulated density at the three study sites. Mean winds differ by a more significant degree. Comparisons of mesoscale and TES standard deviations' with conventional Mars-GRAM values, show that Mars-GRAM density perturbations are somewhat conservative (larger than observed variability), while mesoscale-modeled wind variations are larger than Mars-GRAM model estimates. Input parameters rpscale (for density perturbations) and rwscale (for wind perturbations) can be used to "recalibrate" Mars-GRAM perturbation magnitudes to better replicate observed or mesoscale model variability.

  10. Multi-step rhodopsin inactivation schemes can account for the size variability of single photon responses in Limulus ventral photoreceptors

    PubMed Central

    1994-01-01

    Limulus ventral photoreceptors generate highly variable responses to the absorption of single photons. We have obtained data on the size distribution of these responses, derived the distribution predicted from simple transduction cascade models and compared the theory and data. In the simplest of models, the active state of the visual pigment (defined by its ability to activate G protein) is turned off in a single reaction. The output of such a cascade is predicted to be highly variable, largely because of stochastic variation in the number of G proteins activated. The exact distribution predicted is exponential, but we find that an exponential does not adequately account for the data. The data agree much better with the predictions of a cascade model in which the active state of the visual pigment is turned off by a multi-step process. PMID:8057085

  11. Modelling of subsonic COIL with an arbitrary magnetic modulation

    NASA Astrophysics Data System (ADS)

    Beránek, Jaroslav; Rohlena, Karel

    2007-05-01

    The concept of 1D subsonic COIL model with a mixing length was generalized to include the influence of a variable magnetic field on the stimulated emission cross-section. Equations describing the chemical kinetics were solved taking into account together with the gas temperature also a simplified mixing model of oxygen and iodine molecules. With the external time variable magnetic field the model is no longer stationary. A transformation in the system moving with the mixture reduces partial differential equations to ordinary equations in time with initial conditions given either by the stationary flow at the moment when the magnetic field is switched on combined with the boundary conditions at the injector. Advantage of this procedure is a possibility to consider an arbitrary temporal dependence of the imposed magnetic field and to calculate directly the response of the laser output. The method was applied to model the experimental data measured with the subsonic version of the COIL device in the Institute of Physics, Prague, where the applied magnetic field had a saw-tooth dependence. We found that various values characterizing the laser performance, such as the power density distribution over the active zone cross-section, may have a fairly complicated structure given by combined effects of the delayed reaction to the magnetic switching and the flow velocity. This is necessarily translated in a time dependent spatial inhomogeneity of output beam intensity profile.

  12. Applications of Mars Global Reference Atmospheric Model (Mars-GRAM 2005) Supporting Mission Site Selection for Mars Science Laboratory

    NASA Technical Reports Server (NTRS)

    Justh, Hilary L.; Justus, Carl G.

    2008-01-01

    The Mars Global Reference Atmospheric Model (Mars-GRAM 2005) is an engineering level atmospheric model widely used for diverse mission applications. An overview is presented of Mars-GRAM 2005 and its new features. One new feature of Mars-GRAM 2005 is the 'auxiliary profile' option. In this option, an input file of temperature and density versus altitude is used to replace mean atmospheric values from Mars-GRAM's conventional (General Circulation Model) climatology. An auxiliary profile can be generated from any source of data or alternate model output. Auxiliary profiles for this study were produced from mesoscale model output (Southwest Research Institute's Mars Regional Atmospheric Modeling System (MRAMS) model and Oregon State University's Mars mesoscale model (MMM5)model) and a global Thermal Emission Spectrometer(TES) database. The global TES database has been specifically generated for purposes of making Mars-GRAM auxiliary profiles. This data base contains averages and standard deviations of temperature, density, and thermal wind components,averaged over 5-by-5 degree latitude-longitude bins and 15 degree L(s) bins, for each of three Mars years of TES nadir data. Results are presented using auxiliary profiles produced from the mesoscale model output and TES observed data for candidate Mars Science Laboratory (MSL) landing sites. Input parameters rpscale (for density perturbations) and rwscale (for wind perturbations) can be used to "recalibrate" Mars-GRAM perturbation magnitudes to better replicate observed or mesoscale model variability.

  13. The 'Critical Power' Concept: Applications to Sports Performance with a Focus on Intermittent High-Intensity Exercise.

    PubMed

    Jones, Andrew M; Vanhatalo, Anni

    2017-03-01

    The curvilinear relationship between power output and the time for which it can be sustained is a fundamental and well-known feature of high-intensity exercise performance. This relationship 'levels off' at a 'critical power' (CP) that separates power outputs that can be sustained with stable values of, for example, muscle phosphocreatine, blood lactate, and pulmonary oxygen uptake ([Formula: see text]), from power outputs where these variables change continuously with time until their respective minimum and maximum values are reached and exercise intolerance occurs. The amount of work that can be done during exercise above CP (the so-called W') is constant but may be utilized at different rates depending on the proximity of the exercise power output to CP. Traditionally, this two-parameter CP model has been employed to provide insights into physiological responses, fatigue mechanisms, and performance capacity during continuous constant power output exercise in discrete exercise intensity domains. However, many team sports (e.g., basketball, football, hockey, rugby) involve frequent changes in exercise intensity and, even in endurance sports (e.g., cycling, running), intensity may vary considerably with environmental/course conditions and pacing strategy. In recent years, the appeal of the CP concept has been broadened through its application to intermittent high-intensity exercise. With the assumptions that W' is utilized during work intervals above CP and reconstituted during recovery intervals below CP, it can be shown that performance during intermittent exercise is related to four factors: the intensity and duration of the work intervals and the intensity and duration of the recovery intervals. However, while the utilization of W' may be assumed to be linear, studies indicate that the reconstitution of W' may be curvilinear with kinetics that are highly variable between individuals. This has led to the development of a new CP model for intermittent exercise in which the balance of W' remaining ([Formula: see text]) may be calculated with greater accuracy. Field trials of athletes performing stochastic exercise indicate that this [Formula: see text] model can accurately predict the time at which W' tends to zero and exhaustion is imminent. The [Formula: see text] model potentially has important applications in the real-time monitoring of athlete fatigue progression in endurance and team sports, which may inform tactics and influence pacing strategy.

  14. Evaluating the effectiveness of intercultural teachers.

    PubMed

    Cox, Kathleen

    2011-01-01

    With globalization and major immigration flows, intercultural teaching encounters are likely to increase, along with the need to assure intercultural teaching effectiveness.Thus, the purpose of this article is to present a conceptual framework for nurse educators to consider when anticipating an intercultural teaching experience. Kirkpatrick's and Bushnell's models provide a basis for the conceptual framework. Major concepts of the model include input, process, output, and outcome.The model may possibly be used to guide future research to determine which variables are most influential in explaining intercultural teaching effectiveness.

  15. Multi-model blending

    DOEpatents

    Hamann, Hendrik F.; Hwang, Youngdeok; van Kessel, Theodore G.; Khabibrakhmanov, Ildar K.; Muralidhar, Ramachandran

    2016-10-18

    A method and a system to perform multi-model blending are described. The method includes obtaining one or more sets of predictions of historical conditions, the historical conditions corresponding with a time T that is historical in reference to current time, and the one or more sets of predictions of the historical conditions being output by one or more models. The method also includes obtaining actual historical conditions, the actual historical conditions being measured conditions at the time T, assembling a training data set including designating the two or more set of predictions of historical conditions as predictor variables and the actual historical conditions as response variables, and training a machine learning algorithm based on the training data set. The method further includes obtaining a blended model based on the machine learning algorithm.

  16. Dynamic network data envelopment analysis for university hospitals evaluation

    PubMed Central

    Lobo, Maria Stella de Castro; Rodrigues, Henrique de Castro; André, Edgard Caires Gazzola; de Azeredo, Jônatas Almeida; Lins, Marcos Pereira Estellita

    2016-01-01

    ABSTRACT OBJECTIVE To develop an assessment tool to evaluate the efficiency of federal university general hospitals. METHODS Data envelopment analysis, a linear programming technique, creates a best practice frontier by comparing observed production given the amount of resources used. The model is output-oriented and considers variable returns to scale. Network data envelopment analysis considers link variables belonging to more than one dimension (in the model, medical residents, adjusted admissions, and research projects). Dynamic network data envelopment analysis uses carry-over variables (in the model, financing budget) to analyze frontier shift in subsequent years. Data were gathered from the information system of the Brazilian Ministry of Education (MEC), 2010-2013. RESULTS The mean scores for health care, teaching and research over the period were 58.0%, 86.0%, and 61.0%, respectively. In 2012, the best performance year, for all units to reach the frontier it would be necessary to have a mean increase of 65.0% in outpatient visits; 34.0% in admissions; 12.0% in undergraduate students; 13.0% in multi-professional residents; 48.0% in graduate students; 7.0% in research projects; besides a decrease of 9.0% in medical residents. In the same year, an increase of 0.9% in financing budget would be necessary to improve the care output frontier. In the dynamic evaluation, there was progress in teaching efficiency, oscillation in medical care and no variation in research. CONCLUSIONS The proposed model generates public health planning and programming parameters by estimating efficiency scores and making projections to reach the best practice frontier. PMID:27191158

  17. Improved Accuracy of Automated Estimation of Cardiac Output Using Circulation Time in Patients with Heart Failure.

    PubMed

    Dajani, Hilmi R; Hosokawa, Kazuya; Ando, Shin-Ichi

    2016-11-01

    Lung-to-finger circulation time of oxygenated blood during nocturnal periodic breathing in heart failure patients measured using polysomnography correlates negatively with cardiac function but possesses limited accuracy for cardiac output (CO) estimation. CO was recalculated from lung-to-finger circulation time using a multivariable linear model with information on age and average overnight heart rate in 25 patients who underwent evaluation of heart failure. The multivariable model decreased the percentage error to 22.3% relative to invasive CO measured during cardiac catheterization. This improved automated noninvasive CO estimation using multiple variables meets a recently proposed performance criterion for clinical acceptability of noninvasive CO estimation, and compares very favorably with other available methods. Copyright © 2016 Elsevier Inc. All rights reserved.

  18. iTOUGH2 Universal Optimization Using the PEST Protocol

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Finsterle, S.A.

    2010-07-01

    iTOUGH2 (http://www-esd.lbl.gov/iTOUGH2) is a computer program for parameter estimation, sensitivity analysis, and uncertainty propagation analysis [Finsterle, 2007a, b, c]. iTOUGH2 contains a number of local and global minimization algorithms for automatic calibration of a model against measured data, or for the solution of other, more general optimization problems (see, for example, Finsterle [2005]). A detailed residual and estimation uncertainty analysis is conducted to assess the inversion results. Moreover, iTOUGH2 can be used to perform a formal sensitivity analysis, or to conduct Monte Carlo simulations for the examination for prediction uncertainties. iTOUGH2's capabilities are continually enhanced. As the name implies, iTOUGH2more » is developed for use in conjunction with the TOUGH2 forward simulator for nonisothermal multiphase flow in porous and fractured media [Pruess, 1991]. However, iTOUGH2 provides FORTRAN interfaces for the estimation of user-specified parameters (see subroutine USERPAR) based on user-specified observations (see subroutine USEROBS). These user interfaces can be invoked to add new parameter or observation types to the standard set provided in iTOUGH2. They can also be linked to non-TOUGH2 models, i.e., iTOUGH2 can be used as a universal optimization code, similar to other model-independent, nonlinear parameter estimation packages such as PEST [Doherty, 2008] or UCODE [Poeter and Hill, 1998]. However, to make iTOUGH2's optimization capabilities available for use with an external code, the user is required to write some FORTRAN code that provides the link between the iTOUGH2 parameter vector and the input parameters of the external code, and between the output variables of the external code and the iTOUGH2 observation vector. While allowing for maximum flexibility, the coding requirement of this approach limits its applicability to those users with FORTRAN coding knowledge. To make iTOUGH2 capabilities accessible to many application models, the PEST protocol [Doherty, 2007] has been implemented into iTOUGH2. This protocol enables communication between the application (which can be a single 'black-box' executable or a script or batch file that calls multiple codes) and iTOUGH2. The concept requires that for the application model: (1) Input is provided on one or more ASCII text input files; (2) Output is returned to one or more ASCII text output files; (3) The model is run using a system command (executable or script/batch file); and (4) The model runs to completion without any user intervention. For each forward run invoked by iTOUGH2, select parameters cited within the application model input files are then overwritten with values provided by iTOUGH2, and select variables cited within the output files are extracted and returned to iTOUGH2. It should be noted that the core of iTOUGH2, i.e., its optimization routines and related analysis tools, remains unchanged; it is only the communication format between input parameters, the application model, and output variables that are borrowed from PEST. The interface routines have been provided by Doherty [2007]. The iTOUGH2-PEST architecture is shown in Figure 1. This manual contains installation instructions for the iTOUGH2-PEST module, and describes the PEST protocol as well as the input formats needed in iTOUGH2. Examples are provided that demonstrate the use of model-independent optimization and analysis using iTOUGH2.« less

  19. Interactive Correlation Analysis and Visualization of Climate Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ma, Kwan-Liu

    The relationship between our ability to analyze and extract insights from visualization of climate model output and the capability of the available resources to make those visualizations has reached a crisis point. The large volume of data currently produced by climate models is overwhelming the current, decades-old visualization workflow. The traditional methods for visualizing climate output also have not kept pace with changes in the types of grids used, the number of variables involved, and the number of different simulations performed with a climate model or the feature-richness of high-resolution simulations. This project has developed new and faster methods formore » visualization in order to get the most knowledge out of the new generation of high-resolution climate models. While traditional climate images will continue to be useful, there is need for new approaches to visualization and analysis of climate data if we are to gain all the insights available in ultra-large data sets produced by high-resolution model output and ensemble integrations of climate models such as those produced for the Coupled Model Intercomparison Project. Towards that end, we have developed new visualization techniques for performing correlation analysis. We have also introduced highly scalable, parallel rendering methods for visualizing large-scale 3D data. This project was done jointly with climate scientists and visualization researchers at Argonne National Laboratory and NCAR.« less

  20. A novel model incorporating two variability sources for describing motor evoked potentials

    PubMed Central

    Goetz, Stefan M.; Luber, Bruce; Lisanby, Sarah H.; Peterchev, Angel V.

    2014-01-01

    Objective Motor evoked potentials (MEPs) play a pivotal role in transcranial magnetic stimulation (TMS), e.g., for determining the motor threshold and probing cortical excitability. Sampled across the range of stimulation strengths, MEPs outline an input–output (IO) curve, which is often used to characterize the corticospinal tract. More detailed understanding of the signal generation and variability of MEPs would provide insight into the underlying physiology and aid correct statistical treatment of MEP data. Methods A novel regression model is tested using measured IO data of twelve subjects. The model splits MEP variability into two independent contributions, acting on both sides of a strong sigmoidal nonlinearity that represents neural recruitment. Traditional sigmoidal regression with a single variability source after the nonlinearity is used for comparison. Results The distribution of MEP amplitudes varied across different stimulation strengths, violating statistical assumptions in traditional regression models. In contrast to the conventional regression model, the dual variability source model better described the IO characteristics including phenomena such as changing distribution spread and skewness along the IO curve. Conclusions MEP variability is best described by two sources that most likely separate variability in the initial excitation process from effects occurring later on. The new model enables more accurate and sensitive estimation of the IO curve characteristics, enhancing its power as a detection tool, and may apply to other brain stimulation modalities. Furthermore, it extracts new information from the IO data concerning the neural variability—information that has previously been treated as noise. PMID:24794287

  1. Matching experimental and three dimensional numerical models for structural vibration problems with uncertainties

    NASA Astrophysics Data System (ADS)

    Langer, P.; Sepahvand, K.; Guist, C.; Bär, J.; Peplow, A.; Marburg, S.

    2018-03-01

    The simulation model which examines the dynamic behavior of real structures needs to address the impact of uncertainty in both geometry and material parameters. This article investigates three-dimensional finite element models for structural dynamics problems with respect to both model and parameter uncertainties. The parameter uncertainties are determined via laboratory measurements on several beam-like samples. The parameters are then considered as random variables to the finite element model for exploring the uncertainty effects on the quality of the model outputs, i.e. natural frequencies. The accuracy of the output predictions from the model is compared with the experimental results. To this end, the non-contact experimental modal analysis is conducted to identify the natural frequency of the samples. The results show a good agreement compared with experimental data. Furthermore, it is demonstrated that geometrical uncertainties have more influence on the natural frequencies compared to material parameters and material uncertainties are about two times higher than geometrical uncertainties. This gives valuable insights for improving the finite element model due to various parameter ranges required in a modeling process involving uncertainty.

  2. Comparison of the theoretical and real-world evolutionary potential of a genetic circuit

    NASA Astrophysics Data System (ADS)

    Razo-Mejia, M.; Boedicker, J. Q.; Jones, D.; DeLuna, A.; Kinney, J. B.; Phillips, R.

    2014-04-01

    With the development of next-generation sequencing technologies, many large scale experimental efforts aim to map genotypic variability among individuals. This natural variability in populations fuels many fundamental biological processes, ranging from evolutionary adaptation and speciation to the spread of genetic diseases and drug resistance. An interesting and important component of this variability is present within the regulatory regions of genes. As these regions evolve, accumulated mutations lead to modulation of gene expression, which may have consequences for the phenotype. A simple model system where the link between genetic variability, gene regulation and function can be studied in detail is missing. In this article we develop a model to explore how the sequence of the wild-type lac promoter dictates the fold-change in gene expression. The model combines single-base pair resolution maps of transcription factor and RNA polymerase binding energies with a comprehensive thermodynamic model of gene regulation. The model was validated by predicting and then measuring the variability of lac operon regulation in a collection of natural isolates. We then implement the model to analyze the sensitivity of the promoter sequence to the regulatory output, and predict the potential for regulation to evolve due to point mutations in the promoter region.

  3. An exact algebraic solution of the infimum in H-infinity optimization with output feedback

    NASA Technical Reports Server (NTRS)

    Chen, Ben M.; Saberi, Ali; Ly, Uy-Loi

    1991-01-01

    This paper presents a simple and noniterative procedure for the computation of the exact value of the infimum in the standard H-infinity-optimal control with output feedback. The problem formulation is general and does not place any restrictions on the direct feedthrough terms between the control input and the controlled output variables, and between the disturbance input and the measurement output variables. The method is applicable to systems that satisfy (1) the transfer function from the control input to the controlled output is right-invertible and has no invariant zeros on the j(w) axis and, (2) the transfer function from the disturbance to the measurement output is left-invertible and has no invariant zeros on the j(w) axis. A set of necessary and sufficient conditions for the solvability of H-infinity-almost disturbance decoupling problem via measurement feedback with internal stability is also given.

  4. Selective visual scaling of time-scale processes facilitates broadband learning of isometric force frequency tracking.

    PubMed

    King, Adam C; Newell, Karl M

    2015-10-01

    The experiment investigated the effect of selectively augmenting faster time scales of visual feedback information on the learning and transfer of continuous isometric force tracking tasks to test the generality of the self-organization of 1/f properties of force output. Three experimental groups tracked an irregular target pattern either under a standard fixed gain condition or with selectively enhancement in the visual feedback display of intermediate (4-8 Hz) or high (8-12 Hz) frequency components of the force output. All groups reduced tracking error over practice, with the error lowest in the intermediate scaling condition followed by the high scaling and fixed gain conditions, respectively. Selective visual scaling induced persistent changes across the frequency spectrum, with the strongest effect in the intermediate scaling condition and positive transfer to novel feedback displays. The findings reveal an interdependence of the timescales in the learning and transfer of isometric force output frequency structures consistent with 1/f process models of the time scales of motor output variability.

  5. Cardiac surgery productivity and throughput improvements.

    PubMed

    Lehtonen, Juha-Matti; Kujala, Jaakko; Kouri, Juhani; Hippeläinen, Mikko

    2007-01-01

    The high variability in cardiac surgery length--is one of the main challenges for staff managing productivity. This study aims to evaluate the impact of six interventions on open-heart surgery operating theatre productivity. A discrete operating theatre event simulation model with empirical operation time input data from 2603 patients is used to evaluate the effect that these process interventions have on the surgery output and overtime work. A linear regression model was used to get operation time forecasts for surgery scheduling while it also could be used to explain operation time. A forecasting model based on the linear regression of variables available before the surgery explains 46 per cent operating time variance. The main factors influencing operation length were type of operation, redoing the operation and the head surgeon. Reduction of changeover time between surgeries by inducing anaesthesia outside an operating theatre and by reducing slack time at the end of day after a second surgery have the strongest effects on surgery output and productivity. A more accurate operation time forecast did not have any effect on output, although improved operation time forecast did decrease overtime work. A reduction in the operation time itself is not studied in this article. However, the forecasting model can also be applied to discover which factors are most significant in explaining variation in the length of open-heart surgery. The challenge in scheduling two open-heart surgeries in one day can be partly resolved by increasing the length of the day, decreasing the time between two surgeries or by improving patient scheduling procedures so that two short surgeries can be paired. A linear regression model is created in the paper to increase the accuracy of operation time forecasting and to identify factors that have the most influence on operation time. A simulation model is used to analyse the impact of improved surgical length forecasting and five selected process interventions on productivity in cardiac surgery.

  6. Comparison of Malaria Simulations Driven by Meteorological Observations and Reanalysis Products in Senegal.

    PubMed

    Diouf, Ibrahima; Rodriguez-Fonseca, Belen; Deme, Abdoulaye; Caminade, Cyril; Morse, Andrew P; Cisse, Moustapha; Sy, Ibrahima; Dia, Ibrahima; Ermert, Volker; Ndione, Jacques-André; Gaye, Amadou Thierno

    2017-09-25

    The analysis of the spatial and temporal variability of climate parameters is crucial to study the impact of climate-sensitive vector-borne diseases such as malaria. The use of malaria models is an alternative way of producing potential malaria historical data for Senegal due to the lack of reliable observations for malaria outbreaks over a long time period. Consequently, here we use the Liverpool Malaria Model (LMM), driven by different climatic datasets, in order to study and validate simulated malaria parameters over Senegal. The findings confirm that the risk of malaria transmission is mainly linked to climate variables such as rainfall and temperature as well as specific landscape characteristics. For the whole of Senegal, a lag of two months is generally observed between the peak of rainfall in August and the maximum number of reported malaria cases in October. The malaria transmission season usually takes place from September to November, corresponding to the second peak of temperature occurring in October. Observed malaria data from the Programme National de Lutte contre le Paludisme (PNLP, National Malaria control Programme in Senegal) and outputs from the meteorological data used in this study were compared. The malaria model outputs present some consistencies with observed malaria dynamics over Senegal, and further allow the exploration of simulations performed with reanalysis data sets over a longer time period. The simulated malaria risk significantly decreased during the 1970s and 1980s over Senegal. This result is consistent with the observed decrease of malaria vectors and malaria cases reported by field entomologists and clinicians in the literature. The main differences between model outputs and observations regard amplitude, but can be related not only to reanalysis deficiencies but also to other environmental and socio-economic factors that are not included in this mechanistic malaria model framework. The present study can be considered as a validation of the reliability of reanalysis to be used as inputs for the calculation of malaria parameters in the Sahel using dynamical malaria models.

  7. Proposing integrated Shannon's entropy-inverse data envelopment analysis methods for resource allocation problem under a fuzzy environment

    NASA Astrophysics Data System (ADS)

    Çakır, Süleyman

    2017-10-01

    In this study, a two-phase methodology for resource allocation problems under a fuzzy environment is proposed. In the first phase, the imprecise Shannon's entropy method and the acceptability index are suggested, for the first time in the literature, to select input and output variables to be used in the data envelopment analysis (DEA) application. In the second step, an interval inverse DEA model is executed for resource allocation in a short run. In an effort to exemplify the practicality of the proposed fuzzy model, a real case application has been conducted involving 16 cement firms listed in Borsa Istanbul. The results of the case application indicated that the proposed hybrid model is a viable procedure to handle input-output selection and resource allocation problems under fuzzy conditions. The presented methodology can also lend itself to different applications such as multi-criteria decision-making problems.

  8. Spectral Generation from the Ames Mars GCM for the Study of Martian Clouds

    NASA Astrophysics Data System (ADS)

    Klassen, David R.; Kahre, Melinda A.; Wolff, Michael J.; Haberle, Robert; Hollingsworth, Jeffery L.

    2017-10-01

    Studies of martian clouds come from two distinct groups of researchers: those modeling the martian system from first principles and those observing Mars from ground-based and orbital platforms. The model-view begins with global circulation models (GCMs) or mesoscale models to track a multitude of state variables over a prescribed set of spatial and temporal resolutions. The state variables can then be processed into distinct maps of derived product variables, such as integrated optical depth of aerosol (e.g., water ice cloud, dust) or column integrated water vapor for comparison to observational results. The observer view begins, typically, with spectral images or imaging spectra, calibrated to some form of absolute units then run through some form of radiative transfer model to also produce distinct maps of derived product variables. Both groups of researchers work to adjust model parameters and assumptions until some level of agreement in derived product variables is achieved. While this system appears to work well, it is in some sense only an implicit confirmation of the model assumptions that attribute to the work from both sides. We have begun a project of testing the NASA Ames Mars GCM and key aerosol model assumptions more directly by taking the model output and creating synthetic TES-spectra from them for comparison to actual raw-reduced TES spectra. We will present some preliminary generated GCM spectra and TES comparisons.

  9. Operating Comfort Prediction Model of Human-Machine Interface Layout for Cabin Based on GEP.

    PubMed

    Deng, Li; Wang, Guohua; Chen, Bo

    2015-01-01

    In view of the evaluation and decision-making problem of human-machine interface layout design for cabin, the operating comfort prediction model is proposed based on GEP (Gene Expression Programming), using operating comfort to evaluate layout scheme. Through joint angles to describe operating posture of upper limb, the joint angles are taken as independent variables to establish the comfort model of operating posture. Factor analysis is adopted to decrease the variable dimension; the model's input variables are reduced from 16 joint angles to 4 comfort impact factors, and the output variable is operating comfort score. The Chinese virtual human body model is built by CATIA software, which will be used to simulate and evaluate the operators' operating comfort. With 22 groups of evaluation data as training sample and validation sample, GEP algorithm is used to obtain the best fitting function between the joint angles and the operating comfort; then, operating comfort can be predicted quantitatively. The operating comfort prediction result of human-machine interface layout of driller control room shows that operating comfort prediction model based on GEP is fast and efficient, it has good prediction effect, and it can improve the design efficiency.

  10. Operating Comfort Prediction Model of Human-Machine Interface Layout for Cabin Based on GEP

    PubMed Central

    Wang, Guohua; Chen, Bo

    2015-01-01

    In view of the evaluation and decision-making problem of human-machine interface layout design for cabin, the operating comfort prediction model is proposed based on GEP (Gene Expression Programming), using operating comfort to evaluate layout scheme. Through joint angles to describe operating posture of upper limb, the joint angles are taken as independent variables to establish the comfort model of operating posture. Factor analysis is adopted to decrease the variable dimension; the model's input variables are reduced from 16 joint angles to 4 comfort impact factors, and the output variable is operating comfort score. The Chinese virtual human body model is built by CATIA software, which will be used to simulate and evaluate the operators' operating comfort. With 22 groups of evaluation data as training sample and validation sample, GEP algorithm is used to obtain the best fitting function between the joint angles and the operating comfort; then, operating comfort can be predicted quantitatively. The operating comfort prediction result of human-machine interface layout of driller control room shows that operating comfort prediction model based on GEP is fast and efficient, it has good prediction effect, and it can improve the design efficiency. PMID:26448740

  11. Adaptive control of a jet turboshaft engine driving a variable pitch propeller using multiple models

    NASA Astrophysics Data System (ADS)

    Ahmadian, Narjes; Khosravi, Alireza; Sarhadi, Pouria

    2017-08-01

    In this paper, a multiple model adaptive control (MMAC) method is proposed for a gas turbine engine. The model of a twin spool turbo-shaft engine driving a variable pitch propeller includes various operating points. Variations in fuel flow and propeller pitch inputs produce different operating conditions which force the controller to be adopted rapidly. Important operating points are three idle, cruise and full thrust cases for the entire flight envelope. A multi-input multi-output (MIMO) version of second level adaptation using multiple models is developed. Also, stability analysis using Lyapunov method is presented. The proposed method is compared with two conventional first level adaptation and model reference adaptive control techniques. Simulation results for JetCat SPT5 turbo-shaft engine demonstrate the performance and fidelity of the proposed method.

  12. Extending existing structural identifiability analysis methods to mixed-effects models.

    PubMed

    Janzén, David L I; Jirstrand, Mats; Chappell, Michael J; Evans, Neil D

    2018-01-01

    The concept of structural identifiability for state-space models is expanded to cover mixed-effects state-space models. Two methods applicable for the analytical study of the structural identifiability of mixed-effects models are presented. The two methods are based on previously established techniques for non-mixed-effects models; namely the Taylor series expansion and the input-output form approach. By generating an exhaustive summary, and by assuming an infinite number of subjects, functions of random variables can be derived which in turn determine the distribution of the system's observation function(s). By considering the uniqueness of the analytical statistical moments of the derived functions of the random variables, the structural identifiability of the corresponding mixed-effects model can be determined. The two methods are applied to a set of examples of mixed-effects models to illustrate how they work in practice. Copyright © 2017 Elsevier Inc. All rights reserved.

  13. Modeling a multivariable reactor and on-line model predictive control.

    PubMed

    Yu, D W; Yu, D L

    2005-10-01

    A nonlinear first principle model is developed for a laboratory-scaled multivariable chemical reactor rig in this paper and the on-line model predictive control (MPC) is implemented to the rig. The reactor has three variables-temperature, pH, and dissolved oxygen with nonlinear dynamics-and is therefore used as a pilot system for the biochemical industry. A nonlinear discrete-time model is derived for each of the three output variables and their model parameters are estimated from the real data using an adaptive optimization method. The developed model is used in a nonlinear MPC scheme. An accurate multistep-ahead prediction is obtained for MPC, where the extended Kalman filter is used to estimate system unknown states. The on-line control is implemented and a satisfactory tracking performance is achieved. The MPC is compared with three decentralized PID controllers and the advantage of the nonlinear MPC over the PID is clearly shown.

  14. The Etiology of Top-Tier Publications in Management: A Status Attainment Perspective on Academic Career Success

    ERIC Educational Resources Information Center

    Valle, Matthew; Schultz, Kaitlyn

    2011-01-01

    Purpose: The purpose of this paper is to develop and test a comprehensive model of personal and institutional input variables, composed of elements describing status-based antecedents, job/organizational context antecedents, and individual level antecedents, which may contribute to the production of significant (top-tier) research outputs in the…

  15. Effect of plasma arc welding variables on fusion zone grain size and hardness of AISI 321 austenitic stainless steel

    NASA Astrophysics Data System (ADS)

    Kondapalli, S. P.

    2017-12-01

    In the present work, pulsed current microplasma arc welding is carried out on AISI 321 austenitic stainless steel of 0.3 mm thickness. Peak current, Base current, Pulse rate and Pulse width are chosen as the input variables, whereas grain size and hardness are considered as output responses. Response surface method is adopted by using Box-Behnken Design, and in total 27 experiments are performed. Empirical relation between input and output response is developed using statistical software and analysis of variance (ANOVA) at 95% confidence level to check the adequacy. The main effect and interaction effect of input variables on output response are also studied.

  16. A neural circuit mechanism for regulating vocal variability during song learning in zebra finches.

    PubMed

    Garst-Orozco, Jonathan; Babadi, Baktash; Ölveczky, Bence P

    2014-12-15

    Motor skill learning is characterized by improved performance and reduced motor variability. The neural mechanisms that couple skill level and variability, however, are not known. The zebra finch, a songbird, presents a unique opportunity to address this question because production of learned song and induction of vocal variability are instantiated in distinct circuits that converge on a motor cortex analogue controlling vocal output. To probe the interplay between learning and variability, we made intracellular recordings from neurons in this area, characterizing how their inputs from the functionally distinct pathways change throughout song development. We found that inputs that drive stereotyped song-patterns are strengthened and pruned, while inputs that induce variability remain unchanged. A simple network model showed that strengthening and pruning of action-specific connections reduces the sensitivity of motor control circuits to variable input and neural 'noise'. This identifies a simple and general mechanism for learning-related regulation of motor variability.

  17. Statistical downscaling of mean temperature, maximum temperature, and minimum temperature on the Loess Plateau, China

    NASA Astrophysics Data System (ADS)

    Lin, Jiang; Miao, Chiyuan

    2017-04-01

    Climate change is considered to be one of the greatest environmental threats. This has urged scientific communities to focus on the hot topic. Global climate models (GCMs) are the primary tool used for studying climate change. However, GCMs are limited because of their coarse spatial resolution and inability to resolve important sub-grid scale features such as terrain and clouds. Statistical downscaling methods can be used to downscale large-scale variables to local-scale. In this study, we assess the applicability of the widely used Statistical Downscaling Model (SDSM) for the Loess Plateau, China. The observed variables included daily mean temperature (TMEAN), maximum temperature (TMAX) and minimum temperature (TMIN) from 1961 to 2005. The and the daily atmospheric data were taken from reanalysis data from 1961 to 2005, and global climate model outputs from Beijing Normal University Earth System Model (BNU-ESM) from 1961 to 2099 and from observations . The results show that SDSM performs well for these three climatic variables on the Loess Plateau. After downscaling, the root mean square errors for TMEAN, TMAX, TMIN for BNU-ESM were reduced by 70.9%, 75.1%, and 67.2%, respectively. All the rates of change in TMEAN, TMAX and TMIN during the 21st century decreased after SDSM downscaling. We also show that SDSM can effectively reduce uncertainty, compared with the raw model outputs. TMEAN uncertainty was reduced by 27.1%, 26.8%, and 16.3% for the future scenarios of RCP 2.6, RCP 4.5 and RCP 8.5, respectively. The corresponding reductions in uncertainty were 23.6%, 30.7%, and 18.7% for TMAX, ; and 37.6%, 31.8%, and 23.2% for TMIN.

  18. Investigation on the possibility of extracting wave energy from the Texas coast

    NASA Astrophysics Data System (ADS)

    Haces-Fernandez, Francisco

    Due to the great and growing demand of energy consumption in the Texas Coast area, the generation of electricity from ocean waves is considered very important. The combination of the wave energy with offshore wind power is explored as a way to increase power output, obtain synergies, maximize the utilization of assigned marine zones and reduce variability. Previously literature has assessed the wave energy generation, combined with wind in different geographic locations such as California, Ireland and the Azores Island. In this research project, the electric power generation from ocean waves on the Texas Coast was investigated, assessing its potential from the meteorological data provided by five buoys from National Data Buoy Center of the National Oceanic and Atmospheric Administration, considering the Pelamis 750 kW Wave Energy Converter (WEC) and the Vesta V90 3 MW Wind Turbine. The power output from wave energy was calculated for the year 2006 using Matlab, and the results in several locations were considered acceptable in terms of total power output, but with a high temporal variability. To reduce its variability, wave energy was combined with wind energy, obtaining a significant reduction on the coefficient of variation on the power output. A Matlab based interface was created to calculate power output and its variability considering data from longer periods of time.

  19. Probabilistic Predictions of PM2.5 Using a Novel Ensemble Design for the NAQFC

    NASA Astrophysics Data System (ADS)

    Kumar, R.; Lee, J. A.; Delle Monache, L.; Alessandrini, S.; Lee, P.

    2017-12-01

    Poor air quality (AQ) in the U.S. is estimated to cause about 60,000 premature deaths with costs of 100B-150B annually. To reduce such losses, the National AQ Forecasting Capability (NAQFC) at the National Oceanic and Atmospheric Administration (NOAA) produces forecasts of ozone, particulate matter less than 2.5 mm in diameter (PM2.5), and other pollutants so that advance notice and warning can be issued to help individuals and communities limit the exposure and reduce air pollution-caused health problems. The current NAQFC, based on the U.S. Environmental Protection Agency Community Multi-scale AQ (CMAQ) modeling system, provides only deterministic AQ forecasts and does not quantify the uncertainty associated with the predictions, which could be large due to the chaotic nature of atmosphere and nonlinearity in atmospheric chemistry. This project aims to take NAQFC a step further in the direction of probabilistic AQ prediction by exploring and quantifying the potential value of ensemble predictions of PM2.5, and perturbing three key aspects of PM2.5 modeling: the meteorology, emissions, and CMAQ secondary organic aerosol formulation. This presentation focuses on the impact of meteorological variability, which is represented by three members of NOAA's Short-Range Ensemble Forecast (SREF) system that were down-selected by hierarchical cluster analysis. These three SREF members provide the physics configurations and initial/boundary conditions for the Weather Research and Forecasting (WRF) model runs that generate required output variables for driving CMAQ that are missing in operational SREF output. We conducted WRF runs for Jan, Apr, Jul, and Oct 2016 to capture seasonal changes in meteorology. Estimated emissions of trace gases and aerosols via the Sparse Matrix Operator Kernel (SMOKE) system were developed using the WRF output. WRF and SMOKE output drive a 3-member CMAQ mini-ensemble of once-daily, 48-h PM2.5 forecasts for the same four months. The CMAQ mini-ensemble is evaluated against both observations and the current operational deterministic NAQFC products, and analyzed to assess the impact of meteorological biases on PM2.5 variability. Quantification of the PM2.5 prediction uncertainty will prove a key factor to support cost-effective decision-making while protecting public health.

  20. Computer modeling and simulation of human movement. Applications in sport and rehabilitation.

    PubMed

    Neptune, R R

    2000-05-01

    Computer modeling and simulation of human movement plays an increasingly important role in sport and rehabilitation, with applications ranging from sport equipment design to understanding pathologic gait. The complex dynamic interactions within the musculoskeletal and neuromuscular systems make analyzing human movement with existing experimental techniques difficult but computer modeling and simulation allows for the identification of these complex interactions and causal relationships between input and output variables. This article provides an overview of computer modeling and simulation and presents an example application in the field of rehabilitation.

  1. Quantifying Uncertainty in Flood Inundation Mapping Using Streamflow Ensembles and Multiple Hydraulic Modeling Techniques

    NASA Astrophysics Data System (ADS)

    Hosseiny, S. M. H.; Zarzar, C.; Gomez, M.; Siddique, R.; Smith, V.; Mejia, A.; Demir, I.

    2016-12-01

    The National Water Model (NWM) provides a platform for operationalize nationwide flood inundation forecasting and mapping. The ability to model flood inundation on a national scale will provide invaluable information to decision makers and local emergency officials. Often, forecast products use deterministic model output to provide a visual representation of a single inundation scenario, which is subject to uncertainty from various sources. While this provides a straightforward representation of the potential inundation, the inherent uncertainty associated with the model output should be considered to optimize this tool for decision making support. The goal of this study is to produce ensembles of future flood inundation conditions (i.e. extent, depth, and velocity) to spatially quantify and visually assess uncertainties associated with the predicted flood inundation maps. The setting for this study is located in a highly urbanized watershed along the Darby Creek in Pennsylvania. A forecasting framework coupling the NWM with multiple hydraulic models was developed to produce a suite ensembles of future flood inundation predictions. Time lagged ensembles from the NWM short range forecasts were used to account for uncertainty associated with the hydrologic forecasts. The forecasts from the NWM were input to iRIC and HEC-RAS two-dimensional software packages, from which water extent, depth, and flow velocity were output. Quantifying the agreement between output ensembles for each forecast grid provided the uncertainty metrics for predicted flood water inundation extent, depth, and flow velocity. For visualization, a series of flood maps that display flood extent, water depth, and flow velocity along with the underlying uncertainty associated with each of the forecasted variables were produced. The results from this study demonstrate the potential to incorporate and visualize model uncertainties in flood inundation maps in order to identify the high flood risk zones.

  2. Cascaded resonant bridge converters

    NASA Technical Reports Server (NTRS)

    Stuart, Thomas A. (Inventor)

    1989-01-01

    A converter for converting a low voltage direct current power source to a higher voltage, high frequency alternating current output for use in an electrical system where it is desired to use low weight cables and other circuit elements. The converter has a first stage series resonant (Schwarz) converter which converts the direct current power source to an alternating current by means of switching elements that are operated by a variable frequency voltage regulator, a transformer to step up the voltage of the alternating current, and a rectifier bridge to convert the alternating current to a direct current first stage output. The converter further has a second stage series resonant (Schwarz) converter which is connected in series to the first stage converter to receive its direct current output and convert it to a second stage high frequency alternating current output by means of switching elements that are operated by a fixed frequency oscillator. The voltage of the second stage output is controlled at a relatively constant value by controlling the first stage output voltage, which is accomplished by controlling the frequency of the first stage variable frequency voltage controller in response to second stage voltage. Fault tolerance in the event of a load short circuit is provided by making the operation of the first stage variable frequency voltage controller responsive to first and second stage current limiting devices. The second stage output is connected to a rectifier bridge whose output is connected to the input of the second stage to provide good regulation of output voltage wave form at low system loads.

  3. Effect of workload setting on propulsion technique in handrim wheelchair propulsion.

    PubMed

    van Drongelen, Stefan; Arnet, Ursina; Veeger, Dirkjan H E J; van der Woude, Lucas H V

    2013-03-01

    To investigate the influence of workload setting (speed at constant power, method to impose power) on the propulsion technique (i.e. force and timing characteristics) in handrim wheelchair propulsion. Twelve able-bodied men participated in this study. External forces were measured during handrim wheelchair propulsion on a motor driven treadmill at different velocities and constant power output (to test the forced effect of speed) and at power outputs imposed by incline vs. pulley system (to test the effect of method to impose power). Outcome measures were the force and timing variables of the propulsion technique. FEF and timing variables showed significant differences between the speed conditions when propelling at the same power output (p < 0.01). Push time was reduced while push angle increased. The method to impose power only showed slight differences in the timing variables, however not in the force variables. Researchers and clinicians must be aware of testing and evaluation conditions that may differently affect propulsion technique parameters despite an overall constant power output. Copyright © 2012 IPEM. Published by Elsevier Ltd. All rights reserved.

  4. Device for adapting continuously variable transmissions to infinitely variable transmissions with forward-neutral-reverse capabilities

    DOEpatents

    Wilkes, Donald F.; Purvis, James W.; Miller, A. Keith

    1997-01-01

    An infinitely variable transmission is capable of operating between a maximum speed in one direction and a minimum speed in an opposite direction, including a zero output angular velocity, while being supplied with energy at a constant angular velocity. Input energy is divided between a first power path carrying an orbital set of elements and a second path that includes a variable speed adjustment mechanism. The second power path also connects with the orbital set of elements in such a way as to vary the rate of angular rotation thereof. The combined effects of power from the first and second power paths are combined and delivered to an output element by the orbital element set. The transmission can be designed to operate over a preselected ratio of forward to reverse output speeds.

  5. Stimulation of Respiratory Motor Output and Ventilation in a Murine Model of Pompe Disease by Ampakines.

    PubMed

    ElMallah, Mai K; Pagliardini, Silvia; Turner, Sara M; Cerreta, Anthony J; Falk, Darin J; Byrne, Barry J; Greer, John J; Fuller, David D

    2015-09-01

    Pompe disease results from a mutation in the acid α-glucosidase gene leading to lysosomal glycogen accumulation. Respiratory insufficiency is common, and the current U.S. Food and Drug Administration-approved treatment, enzyme replacement, has limited effectiveness. Ampakines are drugs that enhance α-amino-3-hydroxy-5-methyl-4-isoxazolepropionic acid receptor responses and can increase respiratory motor drive. Recent work indicates that respiratory motor drive can be blunted in Pompe disease, and thus pharmacologic stimulation of breathing may be beneficial. Using a murine Pompe model with the most severe clinical genotype (the Gaa(-/-) mouse), our primary objective was to test the hypothesis that ampakines can stimulate respiratory motor output and increase ventilation. Our second objective was to confirm that neuropathology was present in Pompe mouse medullary respiratory control neurons. The impact of ampakine CX717 on breathing was determined via phrenic and hypoglossal nerve recordings in anesthetized mice and whole-body plethysmography in unanesthetized mice. The medulla was examined using standard histological methods coupled with immunochemical markers of respiratory control neurons. Ampakine CX717 robustly increased phrenic and hypoglossal inspiratory bursting and reduced respiratory cycle variability in anesthetized Pompe mice, and it increased inspiratory tidal volume in unanesthetized Pompe mice. CX717 did not significantly alter these variables in wild-type mice. Medullary respiratory neurons showed extensive histopathology in Pompe mice. Ampakines stimulate respiratory neuromotor output and ventilation in Pompe mice, and therefore they have potential as an adjunctive therapy in Pompe disease.

  6. A Dasymetric-Based Monte Carlo Simulation Approach to the Probabilistic Analysis of Spatial Variables

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morton, April M; Piburn, Jesse O; McManamay, Ryan A

    2017-01-01

    Monte Carlo simulation is a popular numerical experimentation technique used in a range of scientific fields to obtain the statistics of unknown random output variables. Despite its widespread applicability, it can be difficult to infer required input probability distributions when they are related to population counts unknown at desired spatial resolutions. To overcome this challenge, we propose a framework that uses a dasymetric model to infer the probability distributions needed for a specific class of Monte Carlo simulations which depend on population counts.

  7. Camera Traps Can Be Heard and Seen by Animals

    PubMed Central

    Meek, Paul D.; Ballard, Guy-Anthony; Fleming, Peter J. S.; Schaefer, Michael; Williams, Warwick; Falzon, Greg

    2014-01-01

    Camera traps are electrical instruments that emit sounds and light. In recent decades they have become a tool of choice in wildlife research and monitoring. The variability between camera trap models and the methods used are considerable, and little is known about how animals respond to camera trap emissions. It has been reported that some animals show a response to camera traps, and in research this is often undesirable so it is important to understand why the animals are disturbed. We conducted laboratory based investigations to test the audio and infrared optical outputs of 12 camera trap models. Camera traps were measured for audio outputs in an anechoic chamber; we also measured ultrasonic (n = 5) and infrared illumination outputs (n = 7) of a subset of the camera trap models. We then compared the perceptive hearing range (n = 21) and assessed the vision ranges (n = 3) of mammals species (where data existed) to determine if animals can see and hear camera traps. We report that camera traps produce sounds that are well within the perceptive range of most mammals’ hearing and produce illumination that can be seen by many species. PMID:25354356

  8. Evaluating the performance of a fault detection and diagnostic system for vapor compression equipment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Breuker, M.S.; Braun, J.E.

    This paper presents a detailed evaluation of the performance of a statistical, rule-based fault detection and diagnostic (FDD) technique presented by Rossi and Braun (1997). Steady-state and transient tests were performed on a simple rooftop air conditioner over a range of conditions and fault levels. The steady-state data without faults were used to train models that predict outputs for normal operation. The transient data with faults were used to evaluate FDD performance. The effect of a number of design variables on FDD sensitivity for different faults was evaluated and two prototype systems were specified for more complete evaluation. Good performancemore » was achieved in detecting and diagnosing five faults using only six temperatures (2 input and 4 output) and linear models. The performance improved by about a factor of two when ten measurements (three input and seven output) and higher order models were used. This approach for evaluating and optimizing the performance of the statistical, rule-based FDD technique could be used as a design and evaluation tool when applying this FDD method to other packaged air-conditioning systems. Furthermore, the approach could also be modified to evaluate the performance of other FDD methods.« less

  9. Application of uncertainty and sensitivity analysis to the air quality SHERPA modelling tool

    NASA Astrophysics Data System (ADS)

    Pisoni, E.; Albrecht, D.; Mara, T. A.; Rosati, R.; Tarantola, S.; Thunis, P.

    2018-06-01

    Air quality has significantly improved in Europe over the past few decades. Nonetheless we still find high concentrations in measurements mainly in specific regions or cities. This dimensional shift, from EU-wide to hot-spot exceedances, calls for a novel approach to regional air quality management (to complement EU-wide existing policies). The SHERPA (Screening for High Emission Reduction Potentials on Air quality) modelling tool was developed in this context. It provides an additional tool to be used in support to regional/local decision makers responsible for the design of air quality plans. It is therefore important to evaluate the quality of the SHERPA model, and its behavior in the face of various kinds of uncertainty. Uncertainty and sensitivity analysis techniques can be used for this purpose. They both reveal the links between assumptions and forecasts, help in-model simplification and may highlight unexpected relationships between inputs and outputs. Thus, a policy steered SHERPA module - predicting air quality improvement linked to emission reduction scenarios - was evaluated by means of (1) uncertainty analysis (UA) to quantify uncertainty in the model output, and (2) by sensitivity analysis (SA) to identify the most influential input sources of this uncertainty. The results of this study provide relevant information about the key variables driving the SHERPA output uncertainty, and advise policy-makers and modellers where to place their efforts for an improved decision-making process.

  10. Assessment of Effectiveness of Geologic Isolation Systems. Variable thickness transient ground-water flow model. Volume 2. Users' manual

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reisenauer, A.E.

    1979-12-01

    A system of computer codes to aid in the preparation and evaluation of ground-water model input, as well as in the computer codes and auxillary programs developed and adapted for use in modeling major ground-water aquifers is described. The ground-water model is interactive, rather than a batch-type model. Interactive models have been demonstrated to be superior to batch in the ground-water field. For example, looking through reams of numerical lists can be avoided with the much superior graphical output forms or summary type numerical output. The system of computer codes permits the flexibility to develop rapidly the model-required data filesmore » from engineering data and geologic maps, as well as efficiently manipulating the voluminous data generated. Central to these codes is the Ground-water Model, which given the boundary value problem, produces either the steady-state or transient time plane solutions. A sizeable part of the codes available provide rapid evaluation of the results. Besides contouring the new water potentials, the model allows graphical review of streamlines of flow, travel times, and detailed comparisons of surfaces or points at designated wells. Use of the graphics scopes provide immediate, but temporary displays which can be used for evaluation of input and output and which can be reproduced easily on hard copy devices, such as a line printer, Calcomp plotter and image photographs.« less

  11. Identifying human influences on atmospheric temperature

    PubMed Central

    Santer, Benjamin D.; Painter, Jeffrey F.; Mears, Carl A.; Doutriaux, Charles; Caldwell, Peter; Arblaster, Julie M.; Cameron-Smith, Philip J.; Gillett, Nathan P.; Gleckler, Peter J.; Lanzante, John; Perlwitz, Judith; Solomon, Susan; Stott, Peter A.; Taylor, Karl E.; Terray, Laurent; Thorne, Peter W.; Wehner, Michael F.; Wentz, Frank J.; Wigley, Tom M. L.; Wilcox, Laura J.; Zou, Cheng-Zhi

    2013-01-01

    We perform a multimodel detection and attribution study with climate model simulation output and satellite-based measurements of tropospheric and stratospheric temperature change. We use simulation output from 20 climate models participating in phase 5 of the Coupled Model Intercomparison Project. This multimodel archive provides estimates of the signal pattern in response to combined anthropogenic and natural external forcing (the fingerprint) and the noise of internally generated variability. Using these estimates, we calculate signal-to-noise (S/N) ratios to quantify the strength of the fingerprint in the observations relative to fingerprint strength in natural climate noise. For changes in lower stratospheric temperature between 1979 and 2011, S/N ratios vary from 26 to 36, depending on the choice of observational dataset. In the lower troposphere, the fingerprint strength in observations is smaller, but S/N ratios are still significant at the 1% level or better, and range from three to eight. We find no evidence that these ratios are spuriously inflated by model variability errors. After removing all global mean signals, model fingerprints remain identifiable in 70% of the tests involving tropospheric temperature changes. Despite such agreement in the large-scale features of model and observed geographical patterns of atmospheric temperature change, most models do not replicate the size of the observed changes. On average, the models analyzed underestimate the observed cooling of the lower stratosphere and overestimate the warming of the troposphere. Although the precise causes of such differences are unclear, model biases in lower stratospheric temperature trends are likely to be reduced by more realistic treatment of stratospheric ozone depletion and volcanic aerosol forcing. PMID:23197824

  12. Impact of temporal resolution of inputs on hydrological model performance: An analysis based on 2400 flood events

    NASA Astrophysics Data System (ADS)

    Ficchì, Andrea; Perrin, Charles; Andréassian, Vazken

    2016-07-01

    Hydro-climatic data at short time steps are considered essential to model the rainfall-runoff relationship, especially for short-duration hydrological events, typically flash floods. Also, using fine time step information may be beneficial when using or analysing model outputs at larger aggregated time scales. However, the actual gain in prediction efficiency using short time-step data is not well understood or quantified. In this paper, we investigate the extent to which the performance of hydrological modelling is improved by short time-step data, using a large set of 240 French catchments, for which 2400 flood events were selected. Six-minute rain gauge data were available and the GR4 rainfall-runoff model was run with precipitation inputs at eight different time steps ranging from 6 min to 1 day. Then model outputs were aggregated at seven different reference time scales ranging from sub-hourly to daily for a comparative evaluation of simulations at different target time steps. Three classes of model performance behaviour were found for the 240 test catchments: (i) significant improvement of performance with shorter time steps; (ii) performance insensitivity to the modelling time step; (iii) performance degradation as the time step becomes shorter. The differences between these groups were analysed based on a number of catchment and event characteristics. A statistical test highlighted the most influential explanatory variables for model performance evolution at different time steps, including flow auto-correlation, flood and storm duration, flood hydrograph peakedness, rainfall-runoff lag time and precipitation temporal variability.

  13. Modeling and predicting intertidal variations of the salinity field in the Bay/Delta

    USGS Publications Warehouse

    Knowles, Noah; Uncles, Reginald J.

    1995-01-01

    One approach to simulating daily to monthly variability in the bay is the development of intertidal model using tidally-averaged equations and a time step on the order of the day.  An intertidal numerical model of the bay's physics, capable of portraying seasonal and inter-annual variability, would have several uses.  Observations are limited in time and space, so simulation could help fill the gaps.  Also, the ability to simulate multi-year episodes (eg, an extended drought) could provide insight into the response of the ecosystem to such events.  Finally, such a model could be used in a forecast mode wherein predicted delta flow is used as model input, and predicted salinity distribution is output with estimates days and months in advance.  This note briefly introduces such a tidally-averaged model (Uncles and Peterson, in press) and a corresponding predictive scheme for baywide forecasting.

  14. Finding the Root Causes of Statistical Inconsistency in Community Earth System Model Output

    NASA Astrophysics Data System (ADS)

    Milroy, D.; Hammerling, D.; Baker, A. H.

    2017-12-01

    Baker et al (2015) developed the Community Earth System Model Ensemble Consistency Test (CESM-ECT) to provide a metric for software quality assurance by determining statistical consistency between an ensemble of CESM outputs and new test runs. The test has proved useful for detecting statistical difference caused by compiler bugs and errors in physical modules. However, detection is only the necessary first step in finding the causes of statistical difference. The CESM is a vastly complex model comprised of millions of lines of code which is developed and maintained by a large community of software engineers and scientists. Any root cause analysis is correspondingly challenging. We propose a new capability for CESM-ECT: identifying the sections of code that cause statistical distinguishability. The first step is to discover CESM variables that cause CESM-ECT to classify new runs as statistically distinct, which we achieve via Randomized Logistic Regression. Next we use a tool developed to identify CESM components that define or compute the variables found in the first step. Finally, we employ the application Kernel GENerator (KGEN) created in Kim et al (2016) to detect fine-grained floating point differences. We demonstrate an example of the procedure and advance a plan to automate this process in our future work.

  15. Motor output variability, deafferentation, and putative deficits in kinesthetic reafference in Parkinson's disease.

    PubMed

    Torres, Elizabeth B; Cole, Jonathan; Poizner, Howard

    2014-01-01

    Parkinson's disease (PD) is a neurodegenerative disorder defined by motor impairments that include rigidity, systemic slowdown of movement (bradykinesia), postural problems, and tremor. While the progressive decline in motor output functions is well documented, less understood are impairments linked to the continuous kinesthetic sensation emerging from the flow of motions. There is growing evidence in recent years that kinesthetic problems are also part of the symptoms of PD, but objective methods to readily quantify continuously unfolding motions across different contexts have been lacking. Here we present evidence from a deafferented subject (IW) and a new statistical platform that enables new analyses of motor output variability measured as a continuous flow of kinesthetic reafferent input. Systematic increasing similarities between the patterns of motor output variability in IW and the participants with increasing degrees of PD severity suggest potential deficits in kinesthetic sensing in PD. We propose that these deficits may result from persistent, noisy, and random motor patterns as the disorder progresses. The stochastic signatures from the unfolding motions revealed levels of noise in the motor output fluctuations of these patients bound to decrease the kinesthetic signal's bandwidth. The results are interpreted in light of the concept of kinesthetic reafference ( Von Holst and Mittelstaedt, 1950). In this context, noisy motor output variability from voluntary movements in PD leads to a returning stream of noisy afference caused, in turn, by those faulty movements themselves. Faulty efferent output re-enters the CNS as corrupted sensory motor input. We find here that severity level in PD leads to the persistence of such patterns, thus bringing the statistical signatures of the subjects with PD systematically closer to those of the subject without proprioception.

  16. Motor output variability, deafferentation, and putative deficits in kinesthetic reafference in Parkinson’s disease

    PubMed Central

    Torres, Elizabeth B.; Cole, Jonathan; Poizner, Howard

    2014-01-01

    Parkinson’s disease (PD) is a neurodegenerative disorder defined by motor impairments that include rigidity, systemic slowdown of movement (bradykinesia), postural problems, and tremor. While the progressive decline in motor output functions is well documented, less understood are impairments linked to the continuous kinesthetic sensation emerging from the flow of motions. There is growing evidence in recent years that kinesthetic problems are also part of the symptoms of PD, but objective methods to readily quantify continuously unfolding motions across different contexts have been lacking. Here we present evidence from a deafferented subject (IW) and a new statistical platform that enables new analyses of motor output variability measured as a continuous flow of kinesthetic reafferent input. Systematic increasing similarities between the patterns of motor output variability in IW and the participants with increasing degrees of PD severity suggest potential deficits in kinesthetic sensing in PD. We propose that these deficits may result from persistent, noisy, and random motor patterns as the disorder progresses. The stochastic signatures from the unfolding motions revealed levels of noise in the motor output fluctuations of these patients bound to decrease the kinesthetic signal’s bandwidth. The results are interpreted in light of the concept of kinesthetic reafference ( Von Holst and Mittelstaedt, 1950). In this context, noisy motor output variability from voluntary movements in PD leads to a returning stream of noisy afference caused, in turn, by those faulty movements themselves. Faulty efferent output re-enters the CNS as corrupted sensory motor input. We find here that severity level in PD leads to the persistence of such patterns, thus bringing the statistical signatures of the subjects with PD systematically closer to those of the subject without proprioception. PMID:25374524

  17. Multilayer perceptron neural network-based approach for modeling phycocyanin pigment concentrations: case study from lower Charles River buoy, USA.

    PubMed

    Heddam, Salim

    2016-09-01

    This paper proposes multilayer perceptron neural network (MLPNN) to predict phycocyanin (PC) pigment using water quality variables as predictor. In the proposed model, four water quality variables that are water temperature, dissolved oxygen, pH, and specific conductance were selected as the inputs for the MLPNN model, and the PC as the output. To demonstrate the capability and the usefulness of the MLPNN model, a total of 15,849 data measured at 15-min (15 min) intervals of time are used for the development of the model. The data are collected at the lower Charles River buoy, and available from the US Environmental Protection Agency (USEPA). For comparison purposes, a multiple linear regression (MLR) model that was frequently used for predicting water quality variables in previous studies is also built. The performances of the models are evaluated using a set of widely used statistical indices. The performance of the MLPNN and MLR models is compared with the measured data. The obtained results show that (i) the all proposed MLPNN models are more accurate than the MLR models and (ii) the results obtained are very promising and encouraging for the development of phycocyanin-predictive models.

  18. Application of Interval Predictor Models to Space Radiation Shielding

    NASA Technical Reports Server (NTRS)

    Crespo, Luis G.; Kenny, Sean P.; Giesy,Daniel P.; Norman, Ryan B.; Blattnig, Steve R.

    2016-01-01

    This paper develops techniques for predicting the uncertainty range of an output variable given input-output data. These models are called Interval Predictor Models (IPM) because they yield an interval valued function of the input. This paper develops IPMs having a radial basis structure. This structure enables the formal description of (i) the uncertainty in the models parameters, (ii) the predicted output interval, and (iii) the probability that a future observation would fall in such an interval. In contrast to other metamodeling techniques, this probabilistic certi cate of correctness does not require making any assumptions on the structure of the mechanism from which data are drawn. Optimization-based strategies for calculating IPMs having minimal spread while containing all the data are developed. Constraints for bounding the minimum interval spread over the continuum of inputs, regulating the IPMs variation/oscillation, and centering its spread about a target point, are used to prevent data over tting. Furthermore, we develop an approach for using expert opinion during extrapolation. This metamodeling technique is illustrated using a radiation shielding application for space exploration. In this application, we use IPMs to describe the error incurred in predicting the ux of particles resulting from the interaction between a high-energy incident beam and a target.

  19. Terminal Area Simulation System User's Guide - Version 10.0

    NASA Technical Reports Server (NTRS)

    Switzer, George F.; Proctor, Fred H.

    2014-01-01

    The Terminal Area Simulation System (TASS) is a three-dimensional, time-dependent, large eddy simulation model that has been developed for studies of wake vortex and weather hazards to aviation, along with other atmospheric turbulence, and cloud-scale weather phenomenology. This document describes the source code for TASS version 10.0 and provides users with needed documentation to run the model. The source code is programed in Fortran language and is formulated to take advantage of vector and efficient multi-processor scaling for execution on massively-parallel supercomputer clusters. The code contains different initialization modules allowing the study of aircraft wake vortex interaction with the atmosphere and ground, atmospheric turbulence, atmospheric boundary layers, precipitating convective clouds, hail storms, gust fronts, microburst windshear, supercell and mesoscale convective systems, tornadic storms, and ring vortices. The model is able to operate in either two- or three-dimensions with equations numerically formulated on a Cartesian grid. The primary output from the TASS is time-dependent domain fields generated by the prognostic equations and diagnosed variables. This document will enable a user to understand the general logic of TASS, and will show how to configure and initialize the model domain. Also described are the formats of the input and output files, as well as the parameters that control the input and output.

  20. Growth and food consumption by tiger muskellunge: Effects of temperature and ration level on bioenergetic model predictions

    USGS Publications Warehouse

    Chipps, S.R.; Einfalt, L.M.; Wahl, David H.

    2000-01-01

    We measured growth of age-0 tiger muskellunge as a function of ration size (25, 50, 75, and 100% C(max))and water temperature (7.5-25??C) and compared experimental results with those predicted from a bioenergetic model. Discrepancies between actual and predicted values varied appreciably with water temperature and growth rate. On average, model output overestimated winter consumption rates at 10 and 7.5??C by 113 to 328%, respectively, whereas model predictions in summer and autumn (20-25??C) were in better agreement with actual values (4 to 58%). We postulate that variation in model performance was related to seasonal changes in esocid metabolic rate, which were not accounted for in the bioenergetic model. Moreover, accuracy of model output varied with feeding and growth rate of tiger muskellunge. The model performed poorly for fish fed low rations compared with estimates based on fish fed ad libitum rations and was attributed, in part, to the influence of growth rate on the accuracy of bioenergetic predictions. Based on modeling simulations, we found that errors associated with bioenergetic parameters had more influence on model output when growth rate was low, which is consistent with our observations. In addition, reduced conversion efficiency at high ration levels may contribute to variable model performance, thereby implying that waste losses should be modeled as a function of ration size for esocids. Our findings support earlier field tests of the esocid bioenergetic model and indicate that food consumption is generally overestimated by the model, particularly in winter months and for fish exhibiting low feeding and growth rates.

  1. Expanding the occupational health methodology: A concatenated artificial neural network approach to model the burnout process in Chinese nurses.

    PubMed

    Ladstätter, Felix; Garrosa, Eva; Moreno-Jiménez, Bernardo; Ponsoda, Vicente; Reales Aviles, José Manuel; Dai, Junming

    2016-01-01

    Artificial neural networks are sophisticated modelling and prediction tools capable of extracting complex, non-linear relationships between predictor (input) and predicted (output) variables. This study explores this capacity by modelling non-linearities in the hardiness-modulated burnout process with a neural network. Specifically, two multi-layer feed-forward artificial neural networks are concatenated in an attempt to model the composite non-linear burnout process. Sensitivity analysis, a Monte Carlo-based global simulation technique, is then utilised to examine the first-order effects of the predictor variables on the burnout sub-dimensions and consequences. Results show that (1) this concatenated artificial neural network approach is feasible to model the burnout process, (2) sensitivity analysis is a prolific method to study the relative importance of predictor variables and (3) the relationships among variables involved in the development of burnout and its consequences are to different degrees non-linear. Many relationships among variables (e.g., stressors and strains) are not linear, yet researchers use linear methods such as Pearson correlation or linear regression to analyse these relationships. Artificial neural network analysis is an innovative method to analyse non-linear relationships and in combination with sensitivity analysis superior to linear methods.

  2. Development and Implementation of an Empirical Ionosphere Variability Model

    NASA Technical Reports Server (NTRS)

    Minow, Joesph I.; Almond, Deborah (Technical Monitor)

    2002-01-01

    Spacecraft designers and operations support personnel involved in space environment analysis for low Earth orbit missions require ionospheric specification and forecast models that provide not only average ionospheric plasma parameters for a given set of geophysical conditions but the statistical variations about the mean as well. This presentation describes the development of a prototype empirical model intended for use with the International Reference Ionosphere (IRI) to provide ionospheric Ne and Te variability. We first describe the database of on-orbit observations from a variety of spacecraft and ground based radars over a wide range of latitudes and altitudes used to obtain estimates of the environment variability. Next, comparison of the observations with the IRI model provide estimates of the deviations from the average model as well as the range of possible values that may correspond to a given IRI output. Options for implementation of the statistical variations in software that can be run with the IRI model are described. Finally, we provide example applications including thrust estimates for tethered satellites and specification of sunrise Ne, Te conditions required to support spacecraft charging issues for satellites with high voltage solar arrays.

  3. Effect of processing parameters on FDM process

    NASA Astrophysics Data System (ADS)

    Chari, V. Srinivasa; Venkatesh, P. R.; Krupashankar, Dinesh, Veena

    2018-04-01

    This paper focused on the process parameters on fused deposition modeling (FDM). Infill, resolution, temperature are the process variables considered for experimental studies. Compression strength, Hardness test microstructure are the outcome parameters, this experimental study done based on the taguchi's L9 orthogonal array is used. Taguchi array used to build the 9 different models and also to get the effective output results on the under taken parameters. The material used for this experimental study is Polylactic Acid (PLA).

  4. A comparative analysis of errors in long-term econometric forecasts

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tepel, R.

    1986-04-01

    The growing body of literature that documents forecast accuracy falls generally into two parts. The first is prescriptive and is carried out by modelers who use simulation analysis as a tool for model improvement. These studies are ex post, that is, they make use of known values for exogenous variables and generate an error measure wholly attributable to the model. The second type of analysis is descriptive and seeks to measure errors, identify patterns among errors and variables and compare forecasts from different sources. Most descriptive studies use an ex ante approach, that is, they evaluate model outputs based onmore » estimated (or forecasted) exogenous variables. In this case, it is the forecasting process, rather than the model, that is under scrutiny. This paper uses an ex ante approach to measure errors in forecast series prepared by Data Resources Incorporated (DRI), Wharton Econometric Forecasting Associates (Wharton), and Chase Econometrics (Chase) and to determine if systematic patterns of errors can be discerned between services, types of variables (by degree of aggregation), length of forecast and time at which the forecast is made. Errors are measured as the percent difference between actual and forecasted values for the historical period of 1971 to 1983.« less

  5. Variability in soybean yield in Brazil stemming from the interaction of heterogeneous management and climate variability

    NASA Astrophysics Data System (ADS)

    Cohn, A.; Bragança, A.; Jeffries, G. R.

    2017-12-01

    An increasing share of global agricultural production can be found in the humid tropics. Therefore, an improved understanding of the mechanisms governing variability in the output of tropical agricultural systems is of increasing importance for food security including through climate change adaptation. Yet, the long window over which many tropical crops can be sown, the diversity of crop varieties and management practices combine to challenge inference into climate risk to cropping output in analyses of tropical crop-climate sensitivity employing administrative data. In this paper, we leverage a newly developed spatially explicit dataset of soybean yields in Brazil to combat this problem. The dataset was built by training a model of remotely-sensed vegetation index data and land cover classification data using a rich in situ dataset of soybean yield and management variables collected over the period 2006 to 2016. The dataset contains soybean yields by plant date, cropping frequency, and maturity group for each 5km grid cell in Brazil. We model variation in these yields using an approach enabling the estimation of the influence of management factors on the sensitivity of soybean yields to variability in: cumulative solar radiation, extreme degree days, growing degree days, flooding rain in the harvest period, and dry spells in the rainy season. We find strong variation in climate sensitivity by management class. Planting date and maturity group each explained a great deal more variation in yield sensitivity than did cropping frequency. Brazil collects comparatively fine spatial resolution yield data. But, our attempt to replicate our results using administrative soy yield data revealed substantially lesser crop-climate sensitivity; suggesting that previous analyses employing administrative data may have underestimated climate risk to tropical soy production.

  6. A Novel Degradation Identification Method for Wind Turbine Pitch System

    NASA Astrophysics Data System (ADS)

    Guo, Hui-Dong

    2018-04-01

    It’s difficult for traditional threshold value method to identify degradation of operating equipment accurately. An novel degradation evaluation method suitable for wind turbine condition maintenance strategy implementation was proposed in this paper. Based on the analysis of typical variable-speed pitch-to-feather control principle and monitoring parameters for pitch system, a multi input multi output (MIMO) regression model was applied to pitch system, where wind speed, power generation regarding as input parameters, wheel rotation speed, pitch angle and motor driving currency for three blades as output parameters. Then, the difference between the on-line measurement and the calculated value from the MIMO regression model applying least square support vector machines (LSSVM) method was defined as the Observed Vector of the system. The Gaussian mixture model (GMM) was applied to fitting the distribution of the multi dimension Observed Vectors. Applying the model established, the Degradation Index was calculated using the SCADA data of a wind turbine damaged its pitch bearing retainer and rolling body, which illustrated the feasibility of the provided method.

  7. Regenerative braking device with rotationally mounted energy storage means

    DOEpatents

    Hoppie, Lyle O.

    1982-03-16

    A regenerative braking device for an automotive vehicle includes an energy storage assembly (12) having a plurality of rubber rollers (26, 28) mounted for rotation between an input shaft (30) and an output shaft (32), clutches (50, 56) and brakes (52, 58) associated with each shaft, and a continuously variable transmission (22) connectable to a vehicle drivetrain and to the input and output shafts by the respective clutches. In a second embodiment the clutches and brakes are dispensed with and the variable ratio transmission is connected directly across the input and output shafts. In both embodiments the rubber rollers are torsionally stressed to accumulate energy from the vehicle when the input shaft rotates faster or relative to the output shaft and are torsionally relaxed to deliver energy to the vehicle when the output shaft rotates faster or relative to the input shaft.

  8. Variability of pulsed energy outputs from three dermatology lasers during multiple simulated treatments.

    PubMed

    Britton, Jason

    2018-01-20

    Dermatology laser treatments are undertaken at regional departments using lasers of different powers and wavelengths. In order to achieve good outcomes, there needs to be good consistency of laser output across different weeks as it is custom and practice to break down the treatments into individual fractions. Departments will also collect information from test patches to help decide on the most appropriate treatment parameters for individual patients. The objective of these experiments is to assess the variability of the energy outputs from a small number of lasers across multiple weeks at realistic parameters. The energy outputs from 3 lasers were measured at realistic treatment parameters using a thermopile detector across a period of 6 weeks. All lasers fired in single-pulse mode demonstrated good repeatability of energy output. In spite of one of the lasers being scheduled for a dye canister change in the next 2 weeks, there was good energy matching between the two devices with only a 4%-5% variation in measured energies. Based on the results presented, clinical outcomes should not be influenced by variability in the energy outputs of the dermatology lasers used as part of the treatment procedure. © 2018 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  9. Dynamic modal estimation using instrumental variables

    NASA Technical Reports Server (NTRS)

    Salzwedel, H.

    1980-01-01

    A method to determine the modes of dynamical systems is described. The inputs and outputs of a system are Fourier transformed and averaged to reduce the error level. An instrumental variable method that estimates modal parameters from multiple correlations between responses of single input, multiple output systems is applied to estimate aircraft, spacecraft, and off-shore platform modal parameters.

  10. Enhancing e-waste estimates: Improving data quality by multivariate Input–Output Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Feng, E-mail: fwang@unu.edu; Design for Sustainability Lab, Faculty of Industrial Design Engineering, Delft University of Technology, Landbergstraat 15, 2628CE Delft; Huisman, Jaco

    2013-11-15

    Highlights: • A multivariate Input–Output Analysis method for e-waste estimates is proposed. • Applying multivariate analysis to consolidate data can enhance e-waste estimates. • We examine the influence of model selection and data quality on e-waste estimates. • Datasets of all e-waste related variables in a Dutch case study have been provided. • Accurate modeling of time-variant lifespan distributions is critical for estimate. - Abstract: Waste electrical and electronic equipment (or e-waste) is one of the fastest growing waste streams, which encompasses a wide and increasing spectrum of products. Accurate estimation of e-waste generation is difficult, mainly due to lackmore » of high quality data referred to market and socio-economic dynamics. This paper addresses how to enhance e-waste estimates by providing techniques to increase data quality. An advanced, flexible and multivariate Input–Output Analysis (IOA) method is proposed. It links all three pillars in IOA (product sales, stock and lifespan profiles) to construct mathematical relationships between various data points. By applying this method, the data consolidation steps can generate more accurate time-series datasets from available data pool. This can consequently increase the reliability of e-waste estimates compared to the approach without data processing. A case study in the Netherlands is used to apply the advanced IOA model. As a result, for the first time ever, complete datasets of all three variables for estimating all types of e-waste have been obtained. The result of this study also demonstrates significant disparity between various estimation models, arising from the use of data under different conditions. It shows the importance of applying multivariate approach and multiple sources to improve data quality for modelling, specifically using appropriate time-varying lifespan parameters. Following the case study, a roadmap with a procedural guideline is provided to enhance e-waste estimation studies.« less

  11. Final Report for UW-Madison Portion of DE-SC0005301, "Collaborative Project: Pacific Decadal Variability and Central Pacific Warming El Niño in a Changing Climate"

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vimont, Daniel

    This project funded two efforts at understanding the interactions between Central Pacific ENSO events, the mid-latitude atmosphere, and decadal variability in the Pacific. The first was an investigation of conditions that lead to Central Pacific (CP) and East Pacific (EP) ENSO events through the use of linear inverse modeling with defined norms. The second effort was a modeling study that combined output from the National Center for Atmospheric Research (NCAR) Community Atmospheric Model (CAM4) with the Battisti (1988) intermediate coupled model. The intent of the second activity was to investigate the relationship between the atmospheric North Pacific Oscillation (NPO), themore » Pacific Meridional Mode (PMM), and ENSO. These two activities are described herein.« less

  12. The application of Global Sensitivity Analysis to quantify the dominant input factors for hydraulic model simulations

    NASA Astrophysics Data System (ADS)

    Savage, James; Pianosi, Francesca; Bates, Paul; Freer, Jim; Wagener, Thorsten

    2015-04-01

    Predicting flood inundation extents using hydraulic models is subject to a number of critical uncertainties. For a specific event, these uncertainties are known to have a large influence on model outputs and any subsequent analyses made by risk managers. Hydraulic modellers often approach such problems by applying uncertainty analysis techniques such as the Generalised Likelihood Uncertainty Estimation (GLUE) methodology. However, these methods do not allow one to attribute which source of uncertainty has the most influence on the various model outputs that inform flood risk decision making. Another issue facing modellers is the amount of computational resource that is available to spend on modelling flood inundations that are 'fit for purpose' to the modelling objectives. Therefore a balance needs to be struck between computation time, realism and spatial resolution, and effectively characterising the uncertainty spread of predictions (for example from boundary conditions and model parameterisations). However, it is not fully understood how much of an impact each factor has on model performance, for example how much influence changing the spatial resolution of a model has on inundation predictions in comparison to other uncertainties inherent in the modelling process. Furthermore, when resampling fine scale topographic data in the form of a Digital Elevation Model (DEM) to coarser resolutions, there are a number of possible coarser DEMs that can be produced. Deciding which DEM is then chosen to represent the surface elevations in the model could also influence model performance. In this study we model a flood event using the hydraulic model LISFLOOD-FP and apply Sobol' Sensitivity Analysis to estimate which input factor, among the uncertainty in model boundary conditions, uncertain model parameters, the spatial resolution of the DEM and the choice of resampled DEM, have the most influence on a range of model outputs. These outputs include whole domain maximum inundation indicators and flood wave travel time in addition to temporally and spatially variable indicators. This enables us to assess whether the sensitivity of the model to various input factors is stationary in both time and space. Furthermore, competing models are assessed against observations of water depths from a historical flood event. Consequently we are able to determine which of the input factors has the most influence on model performance. Initial findings suggest the sensitivity of the model to different input factors varies depending on the type of model output assessed and at what stage during the flood hydrograph the model output is assessed. We have also found that initial decisions regarding the characterisation of the input factors, for example defining the upper and lower bounds of the parameter sample space, can be significant in influencing the implied sensitivities.

  13. Homeostasis, singularities, and networks.

    PubMed

    Golubitsky, Martin; Stewart, Ian

    2017-01-01

    Homeostasis occurs in a biological or chemical system when some output variable remains approximately constant as an input parameter [Formula: see text] varies over some interval. We discuss two main aspects of homeostasis, both related to the effect of coordinate changes on the input-output map. The first is a reformulation of homeostasis in the context of singularity theory, achieved by replacing 'approximately constant over an interval' by 'zero derivative of the output with respect to the input at a point'. Unfolding theory then classifies all small perturbations of the input-output function. In particular, the 'chair' singularity, which is especially important in applications, is discussed in detail. Its normal form and universal unfolding [Formula: see text] is derived and the region of approximate homeostasis is deduced. The results are motivated by data on thermoregulation in two species of opossum and the spiny rat. We give a formula for finding chair points in mathematical models by implicit differentiation and apply it to a model of lateral inhibition. The second asks when homeostasis is invariant under appropriate coordinate changes. This is false in general, but for network dynamics there is a natural class of coordinate changes: those that preserve the network structure. We characterize those nodes of a given network for which homeostasis is invariant under such changes. This characterization is determined combinatorially by the network topology.

  14. Comparing Apples to Apples: Paleoclimate Model-Data comparison via Proxy System Modeling

    NASA Astrophysics Data System (ADS)

    Dee, Sylvia; Emile-Geay, Julien; Evans, Michael; Noone, David

    2014-05-01

    The wealth of paleodata spanning the last millennium (hereinafter LM) provides an invaluable testbed for CMIP5-class GCMs. However, comparing GCM output to paleodata is non-trivial. High-resolution paleoclimate proxies generally contain a multivariate and non-linear response to regional climate forcing. Disentangling the multivariate environmental influences on proxies like corals, speleothems, and trees can be complex due to spatiotemporal climate variability, non-stationarity, and threshold dependence. Given these and other complications, many paleodata-GCM comparisons take a leap of faith, relating climate fields (e.g. precipitation, temperature) to geochemical signals in proxy data (e.g. δ18O in coral aragonite or ice cores) (e.g. Braconnot et al., 2012). Isotope-enabled GCMs are a step in the right direction, with water isotopes providing a connector point between GCMs and paleodata. However, such studies are still rare, and isotope fields are not archived as part of LM PMIP3 simulations. More importantly, much of the complexity in how proxy systems record and transduce environmental signals remains unaccounted for. In this study we use proxy system models (PSMs, Evans et al., 2013) to bridge this conceptual gap. A PSM mathematically encodes the mechanistic understanding of the physical, geochemical and, sometimes biological influences on each proxy. To translate GCM output to proxy space, we have synthesized a comprehensive, consistently formatted package of published PSMs, including δ18O in corals, tree ring cellulose, speleothems, and ice cores. Each PSM is comprised of three sub-models: sensor, archive, and observation. For the first time, these different components are coupled together for four major proxy types, allowing uncertainties due to both dating and signal interpretation to be treated within a self-consistent framework. The output of this process is an ensemble of many (say N = 1,000) realizations of the proxy network, all equally plausible under assumed dating uncertainties. We demonstrate the utility of the PSM framework with an integrative multi-PSM simulation. An intermediate-complexity AGCM with isotope physics (SPEEDY-IER, (Molteni, 2003, Dee et al., in prep)) is used to simulate the isotope hydrology and atmospheric response to SSTs derived from the LM PMIP3 integration of the CCSM4 model (Landrum et al., 2012). Relevant dynamical and isotope variables are then used to drive PSMs, emulating a realistic multiproxy network (Emile-Geay et al., 2013). We then ask the following question: given our best knowledge of proxy systems, what aspects of GCM behavior may be validated, and with what uncertainties? We approach this question via a three-tiered 'perfect model' study. A random realization of the simulated proxy data (hereafter 'PaleoObs') is used as a benchmark in the following comparisons: (1) AGCM output (without isotopes) vs. PaleoObs; (2) AGCM output (with isotopes) vs. PaleoObs; (3) coupled AGCM-PSM-simulated proxy ensemble vs. PaleoObs. Enhancing model-data comparison using PSMs highlights uncertainties that may arise from ignoring non-linearities in proxy-climate relationships, or the presence of age uncertainties (as is most typically done is paleoclimate model-data intercomparison). Companion experiments leveraging the 3 sub-model compartmentalization of PSMs allows us to quantify the contribution of each sub-system to the observed model-data discrepancies. 
We discuss potential repercussions for model-data comparison and implications for validating predictive climate models using paleodata. References Braconnot, P., Harrison, S. P., Kageyama, M., Bartlein, P. J., Masson-Delmotte, V., Abe-Ouchi, A., Otto-Bliesner, B., Zhao, Y., 06 2012. Evaluation of climate models using palaeoclimatic data. Nature Clim. Change 2 (6), 417-424. URL http://dx.doi.org/10.1038/nclimate1456 Emile-Geay, J., Cobb, K. M., Mann, M. E., Wittenberg, A. T., Apr 01 2013. Estimating central equatorial pacific sst variability over the past millennium. part i: Methodology and validation. Journal of Climate 26 (7), 2302-2328. URL http://search.proquest.com/docview/1350277733?accountid=14749 Evans, M., Tolwinski-Ward, S. E., Thompson, D. M., Anchukaitis, K. J., 2013. Applications of proxy system modeling in high resolution paleoclimatology. Quaternary Science Reviews. URL http://adsabs.harvard.edu/abs/2012QuInt.279U.134E Landrum, L., Otto-Bliesner, B. L., Wahl, E. R., Capotondi, A., Lawrence, P. J., Teng, H., 2012. Last Millennium Climate and Its Variability in CCSM4. Journal of Climate (submitted) Molteni, F., 2003. Atmospheric simulations using a GCM with simplified physical parametrizations. I model climatology and variability in multi-decadal experiments. Climate Dynamics, 175-191

  15. What spatial scales are believable for climate model projections of sea surface temperature?

    NASA Astrophysics Data System (ADS)

    Kwiatkowski, Lester; Halloran, Paul R.; Mumby, Peter J.; Stephenson, David B.

    2014-09-01

    Earth system models (ESMs) provide high resolution simulations of variables such as sea surface temperature (SST) that are often used in off-line biological impact models. Coral reef modellers have used such model outputs extensively to project both regional and global changes to coral growth and bleaching frequency. We assess model skill at capturing sub-regional climatologies and patterns of historical warming. This study uses an established wavelet-based spatial comparison technique to assess the skill of the coupled model intercomparison project phase 5 models to capture spatial SST patterns in coral regions. We show that models typically have medium to high skill at capturing climatological spatial patterns of SSTs within key coral regions, with model skill typically improving at larger spatial scales (≥4°). However models have much lower skill at modelling historical warming patters and are shown to often perform no better than chance at regional scales (e.g. Southeast Asian) and worse than chance at finer scales (<8°). Our findings suggest that output from current generation ESMs is not yet suitable for making sub-regional projections of change in coral bleaching frequency and other marine processes linked to SST warming.

  16. Evaluation of a Mysis bioenergetics model

    USGS Publications Warehouse

    Chipps, S.R.; Bennett, D.H.

    2002-01-01

    Direct approaches for estimating the feeding rate of the opossum shrimp Mysis relicta can be hampered by variable gut residence time (evacuation rate models) and non-linear functional responses (clearance rate models). Bioenergetics modeling provides an alternative method, but the reliability of this approach needs to be evaluated using independent measures of growth and food consumption. In this study, we measured growth and food consumption for M. relicta and compared experimental results with those predicted from a Mysis bioenergetics model. For Mysis reared at 10??C, model predictions were not significantly different from observed values. Moreover, decomposition of mean square error indicated that 70% of the variation between model predictions and observed values was attributable to random error. On average, model predictions were within 12% of observed values. A sensitivity analysis revealed that Mysis respiration and prey energy density were the most sensitive parameters affecting model output. By accounting for uncertainty (95% CLs) in Mysis respiration, we observed a significant improvement in the accuracy of model output (within 5% of observed values), illustrating the importance of sensitive input parameters for model performance. These findings help corroborate the Mysis bioenergetics model and demonstrate the usefulness of this approach for estimating Mysis feeding rate.

  17. Reduced-Order Observer Model for Antiaircraft Artillery (AAA) Tracker Response

    DTIC Science & Technology

    1979-08-01

    a22 -ka1 2) z + (a22 - ka1 2) ky + (a21 - kall ) y + (b2 - kb) uc (10) Next, the actual output of this model is expressed as the sum of the output u...a22x2 + b2u + f2eT = (a2 2 - kal2) z + (a2 2 - ka12) ky + (a21 - kall ) y + (b2 - kb l ) uc u=u +vC [Ti Y2] [y] By introducing new variables: X3 = x2...x3 [(a22- ka2)k + (a2- kall ) - (b2- kb) (YI + kY2) Y + [a22 - ka12 - (b2 - kbl) Y2] X3 + (b2 -kbl) y2 e + (2 - kbI) v + (f2 - kfl) 0 T e = (a22- ka12

  18. Production Economics of Private Forestry: A Comparison of Industrial and Nonindustrial Forest Owners

    Treesearch

    David H. Newman; David N. Wear

    1993-01-01

    This paper compares the producrion behavior of industrial and nonindustrial private forestland owners in the southeastern U.S. using a restricted profit function. Profits are modeled as a function of two outputs, sawtimber and pulpwood. one variable input, regeneration effort. and two quasi-fixed inputs, land and growing stock. Although an identical profit function is...

  19. Utilizing Mars Global Reference Atmospheric Model (Mars-GRAM 2005) to Evaluate Entry Probe Mission Sites

    NASA Technical Reports Server (NTRS)

    Justh, Hilary L.; Justus, Carl G.

    2008-01-01

    The Mars Global Reference Atmospheric Model (Mars-GRAM 2005) is an engineering-level atmospheric model widely used for diverse mission applications. An overview is presented of Mars-GRAM 2005 and its new features. The "auxiliary profile" option is one new feature of Mars-GRAM 2005. This option uses an input file of temperature and density versus altitude to replace the mean atmospheric values from Mars-GRAM's conventional (General Circulation Model) climatology. Any source of data or alternate model output can be used to generate an auxiliary profile. Auxiliary profiles for this study were produced from mesoscale model output (Southwest Research Institute's Mars Regional Atmospheric Modeling System (MRAMS) model and Oregon State University's Mars mesoscale model (MMM5) model) and a global Thermal Emission Spectrometer (TES) database. The global TES database has been specifically generated for purposes of making Mars-GRAM auxiliary profiles. This data base contains averages and standard deviations of temperature, density, and thermal wind components, averaged over 5-by-5 degree latitude-longitude bins and 15 degree Ls bins, for each of three Mars years of TES nadir data. The Mars Science Laboratory (MSL) sites are used as a sample of how Mars-GRAM' could be a valuable tool for planning of future Mars entry probe missions. Results are presented using auxiliary profiles produced from the mesoscale model output and TES observed data for candidate MSL landing sites. Input parameters rpscale (for density perturbations) and rwscale (for wind perturbations) can be used to "recalibrate" Mars-GRAM perturbation magnitudes to better replicate observed or mesoscale model variability.

  20. Sensitivity of a juvenile subject-specific musculoskeletal model of the ankle joint to the variability of operator-dependent input

    PubMed Central

    Hannah, Iain; Montefiori, Erica; Modenese, Luca; Prinold, Joe; Viceconti, Marco; Mazzà, Claudia

    2017-01-01

    Subject-specific musculoskeletal modelling is especially useful in the study of juvenile and pathological subjects. However, such methodologies typically require a human operator to identify key landmarks from medical imaging data and are thus affected by unavoidable variability in the parameters defined and subsequent model predictions. The aim of this study was to thus quantify the inter- and intra-operator repeatability of a subject-specific modelling methodology developed for the analysis of subjects with juvenile idiopathic arthritis. Three operators each created subject-specific musculoskeletal foot and ankle models via palpation of bony landmarks, adjustment of geometrical muscle points and definition of joint coordinate systems. These models were then fused to a generic Arnold lower limb model for each of three modelled patients. The repeatability of each modelling operation was found to be comparable to those previously reported for the modelling of healthy, adult subjects. However, the inter-operator repeatability of muscle point definition was significantly greater than intra-operator repeatability (p < 0.05) and predicted ankle joint contact forces ranged by up to 24% and 10% of the peak force for the inter- and intra-operator analyses, respectively. Similarly, the maximum inter- and intra-operator variations in muscle force output were 64% and 23% of peak force, respectively. Our results suggest that subject-specific modelling is operator dependent at the foot and ankle, with the definition of muscle geometry the most significant source of output uncertainty. The development of automated procedures to prevent the misplacement of crucial muscle points should therefore be considered a particular priority for those developing subject-specific models. PMID:28427313

  1. Sensitivity of a juvenile subject-specific musculoskeletal model of the ankle joint to the variability of operator-dependent input.

    PubMed

    Hannah, Iain; Montefiori, Erica; Modenese, Luca; Prinold, Joe; Viceconti, Marco; Mazzà, Claudia

    2017-05-01

    Subject-specific musculoskeletal modelling is especially useful in the study of juvenile and pathological subjects. However, such methodologies typically require a human operator to identify key landmarks from medical imaging data and are thus affected by unavoidable variability in the parameters defined and subsequent model predictions. The aim of this study was to thus quantify the inter- and intra-operator repeatability of a subject-specific modelling methodology developed for the analysis of subjects with juvenile idiopathic arthritis. Three operators each created subject-specific musculoskeletal foot and ankle models via palpation of bony landmarks, adjustment of geometrical muscle points and definition of joint coordinate systems. These models were then fused to a generic Arnold lower limb model for each of three modelled patients. The repeatability of each modelling operation was found to be comparable to those previously reported for the modelling of healthy, adult subjects. However, the inter-operator repeatability of muscle point definition was significantly greater than intra-operator repeatability ( p < 0.05) and predicted ankle joint contact forces ranged by up to 24% and 10% of the peak force for the inter- and intra-operator analyses, respectively. Similarly, the maximum inter- and intra-operator variations in muscle force output were 64% and 23% of peak force, respectively. Our results suggest that subject-specific modelling is operator dependent at the foot and ankle, with the definition of muscle geometry the most significant source of output uncertainty. The development of automated procedures to prevent the misplacement of crucial muscle points should therefore be considered a particular priority for those developing subject-specific models.

  2. Quantifying measurement uncertainty and spatial variability in the context of model evaluation

    NASA Astrophysics Data System (ADS)

    Choukulkar, A.; Brewer, A.; Pichugina, Y. L.; Bonin, T.; Banta, R. M.; Sandberg, S.; Weickmann, A. M.; Djalalova, I.; McCaffrey, K.; Bianco, L.; Wilczak, J. M.; Newman, J. F.; Draxl, C.; Lundquist, J. K.; Wharton, S.; Olson, J.; Kenyon, J.; Marquis, M.

    2017-12-01

    In an effort to improve wind forecasts for the wind energy sector, the Department of Energy and the NOAA funded the second Wind Forecast Improvement Project (WFIP2). As part of the WFIP2 field campaign, a large suite of in-situ and remote sensing instrumentation was deployed to the Columbia River Gorge in Oregon and Washington from October 2015 - March 2017. The array of instrumentation deployed included 915-MHz wind profiling radars, sodars, wind- profiling lidars, and scanning lidars. The role of these instruments was to provide wind measurements at high spatial and temporal resolution for model evaluation and improvement of model physics. To properly determine model errors, the uncertainties in instrument-model comparisons need to be quantified accurately. These uncertainties arise from several factors such as measurement uncertainty, spatial variability, and interpolation of model output to instrument locations, to name a few. In this presentation, we will introduce a formalism to quantify measurement uncertainty and spatial variability. The accuracy of this formalism will be tested using existing datasets such as the eXperimental Planetary boundary layer Instrumentation Assessment (XPIA) campaign. Finally, the uncertainties in wind measurement and the spatial variability estimates from the WFIP2 field campaign will be discussed to understand the challenges involved in model evaluation.

  3. Using lumped modelling for providing simple metrics and associated uncertainties of catchment response to agricultural-derived nitrates pollutions

    NASA Astrophysics Data System (ADS)

    RUIZ, L.; Fovet, O.; Faucheux, M.; Molenat, J.; Sekhar, M.; Aquilina, L.; Gascuel-odoux, C.

    2013-12-01

    The development of simple and easily accessible metrics is required for characterizing and comparing catchment response to external forcings (climate or anthropogenic) and for managing water resources. The hydrological and geochemical signatures in the stream represent the integration of the various processes controlling this response. The complexity of these signatures over several time scales from sub-daily to several decades [Kirchner et al., 2001] makes their deconvolution very difficult. A large range of modeling approaches intent to represent this complexity by accounting for the spatial and/or temporal variability of the processes involved. However, simple metrics are not easily retrieved from these approaches, mostly because of over-parametrization issues. We hypothesize that to obtain relevant metrics, we need to use models that are able to simulate the observed variability of river signatures at different time scales, while being as parsimonious as possible. The lumped model ETNA (modified from[Ruiz et al., 2002]) is able to simulate adequately the seasonal and inter-annual patterns of stream NO3 concentration. Shallow groundwater is represented by two linear stores with double porosity and riparian processes are represented by a constant nitrogen removal function. Our objective was to identify simple metrics of catchment response by calibrating this lumped model on two paired agricultural catchments where both N inputs and outputs were monitored for a period of 20 years. These catchments, belonging to ORE AgrHys, although underlain by the same granitic bedrock are displaying contrasted chemical signatures. The model was able to simulate the two contrasted observed patterns in stream and groundwater, both on hydrology and chemistry, and at the seasonal and pluri-annual scales. It was also compatible with the expected trends of nitrate concentration since 1960. The output variables of the model were used to compute the nitrate residence time in both the catchments. We used the Global Likelihood Uncertainty Estimations (GLUE) approach [Beven and Binley, 1992] to assess the parameter uncertainties and the subsequent error in model outputs and residence times. Reasonably low parameter uncertainties were obtained by calibrating simultaneously the two paired catchments with two outlets time series of stream flow and nitrate concentrations. Finally, only one parameter controlled the contrast in nitrogen residence times between the catchments. Therefore, this approach provided a promising metric for classifying the variability of catchment response to agricultural nitrogen inputs. Beven, K., and A. Binley (1992), THE FUTURE OF DISTRIBUTED MODELS - MODEL CALIBRATION AND UNCERTAINTY PREDICTION, Hydrological Processes, 6(3), 279-298. Kirchner, J. W., X. Feng, and C. Neal (2001), Catchment-scale advection and dispersion as a mechanism for fractal scaling in stream tracer concentrations, Journal of Hydrology, 254(1-4), 82-101. Ruiz, L., S. Abiven, C. Martin, P. Durand, V. Beaujouan, and J. Molenat (2002), Effect on nitrate concentration in stream water of agricultural practices in small catchments in Brittany : II. Temporal variations and mixing processes, Hydrology and Earth System Sciences, 6(3), 507-513.

  4. Dimensional reduction in sensorimotor systems: A framework for understanding muscle coordination of posture

    PubMed Central

    Ting, Lena H.

    2014-01-01

    The simple act of standing up is an important and essential motor behavior that most humans and animals achieve with ease. Yet, maintaining standing balance involves complex sensorimotor transformations that must continually integrate a large array of sensory inputs and coordinate multiple motor outputs to muscles throughout the body. Multiple, redundant local sensory signals are integrated to form an estimate of a few global, task-level variables important to postural control, such as body center of mass position and body orientation with respect to Earth-vertical. Evidence suggests that a limited set of muscle synergies, reflecting preferential sets of muscle activation patterns, are used to move task variables such as center of mass position in a predictable direction following a postural perturbations. We propose a hierarchal feedback control system that allows the nervous system the simplicity of performing goal-directed computations in task-variable space, while maintaining the robustness afforded by redundant sensory and motor systems. We predict that modulation of postural actions occurs in task-variable space, and in the associated transformations between the low-dimensional task-space and high-dimensional sensor and muscle spaces. Development of neuromechanical models that reflect these neural transformations between low and high-dimensional representations will reveal the organizational principles and constraints underlying sensorimotor transformations for balance control, and perhaps motor tasks in general. This framework and accompanying computational models could be used to formulate specific hypotheses about how specific sensory inputs and motor outputs are generated and altered following neural injury, sensory loss, or rehabilitation. PMID:17925254

  5. Flight dynamics analysis and simulation of heavy lift airships. Volume 3: User's manual

    NASA Technical Reports Server (NTRS)

    Emmen, R. D.; Tischler, M. B.

    1982-01-01

    The User's Manual provides the basic information necessary to run the programs. This includes descriptions of the various data files necessary for the program, the various outputs from the program and the options available to the user when executing the program. Additional data file information is contained in the three appendices to the manual. These appendices list all input variables and their permissible values, an example listing of these variables, and all output variables available to the user.

  6. Tracking problem for electromechanical system under influence of external perturbations

    NASA Astrophysics Data System (ADS)

    Kochetkov, Sergey A.; Krasnova, Svetlana A.; Utkin, Victor A.

    2017-01-01

    For electromechanical objects the new control algorithms (vortex algprithms) are developed on the base of discontinuous functions. The distinctive feature of these algorithms is providing of asymptotical convergence of the output variables to zero under influence of unknown bounded disturbances of prescribed class. The advantages of proposed approach is demonstrated for direct current motor with permanent excitation. It is shown that inner variables of the system converge to unknown bounded disturbances and guarantee asymptotical convergence of output variables to zero.

  7. Ultra-fast consensus of discrete-time multi-agent systems with multi-step predictive output feedback

    NASA Astrophysics Data System (ADS)

    Zhang, Wenle; Liu, Jianchang

    2016-04-01

    This article addresses the ultra-fast consensus problem of high-order discrete-time multi-agent systems based on a unified consensus framework. A novel multi-step predictive output mechanism is proposed under a directed communication topology containing a spanning tree. By predicting the outputs of a network several steps ahead and adding this information into the consensus protocol, it is shown that the asymptotic convergence factor is improved by a power of q + 1 compared to the routine consensus. The difficult problem of selecting the optimal control gain is solved well by introducing a variable called convergence step. In addition, the ultra-fast formation achievement is studied on the basis of this new consensus protocol. Finally, the ultra-fast consensus with respect to a reference model and robust consensus is discussed. Some simulations are performed to illustrate the effectiveness of the theoretical results.

  8. A fuzzy model of superelastic shape memory alloys for vibration control in civil engineering applications

    NASA Astrophysics Data System (ADS)

    Ozbulut, O. E.; Mir, C.; Moroni, M. O.; Sarrazin, M.; Roschke, P. N.

    2007-06-01

    Two experimental test programs are conducted to collect data and simulate the dynamic behavior of CuAlBe shape memory alloy (SMA) wires. First, in order to evaluate the effect of temperature changes on superelastic SMA wires, a large number of cyclic, sinusoidal, tensile tests are performed at 1 Hz. These tests are conducted in a controlled environment at 0, 25 and 50 °C with three different strain amplitudes. Second, in order to assess the dynamic effects of the material, a series of laboratory experiments is conducted on a shake table with a scale model of a three-story structure that is stiffened with SMA wires. Data from these experiments are used to create fuzzy inference systems (FISs) that can predict hysteretic behavior of CuAlBe wire. Both fuzzy models employ a total of three input variables (strain, strain-rate, and temperature or pre-stress) and an output variable (predicted stress). Gaussian membership functions are used to fuzzify data for each of the input and output variables. Values of the initially assigned membership functions are adjusted using a neural-fuzzy procedure to more accurately predict the correct stress level in the wires. Results of the trained FISs are validated using test results from experimental records that had not been previously used in the training procedure. Finally, a set of numerical simulations is conducted to illustrate practical use of these wires in a civil engineering application. The results reveal the applicability for structural vibration control of pseudoelastic CuAlBe wire whose highly nonlinear behavior is modeled by a simple, accurate, and computationally efficient FIS.

  9. Identification and agreement of first turn point by mathematical analysis applied to heart rate, carbon dioxide output and electromyography

    PubMed Central

    Zamunér, Antonio R.; Catai, Aparecida M.; Martins, Luiz E. B.; Sakabe, Daniel I.; Silva, Ester Da

    2013-01-01

    Background The second heart rate (HR) turn point has been extensively studied, however there are few studies determining the first HR turn point. Also, the use of mathematical and statistical models for determining changes in dynamic characteristics of physiological variables during an incremental cardiopulmonary test has been suggested. Objectives To determine the first turn point by analysis of HR, surface electromyography (sEMG), and carbon dioxide output () using two mathematical models and to compare the results to those of the visual method. Method Ten sedentary middle-aged men (53.9±3.2 years old) were submitted to cardiopulmonary exercise testing on an electromagnetic cycle ergometer until exhaustion. Ventilatory variables, HR, and sEMG of the vastus lateralis were obtained in real time. Three methods were used to determine the first turn point: 1) visual analysis based on loss of parallelism between and oxygen uptake (); 2) the linear-linear model, based on fitting the curves to the set of data (Lin-Lin ); 3) a bi-segmental linear regression of Hinkley' s algorithm applied to HR (HMM-HR), (HMM- ), and sEMG data (HMM-RMS). Results There were no differences between workload, HR, and ventilatory variable values at the first ventilatory turn point as determined by the five studied parameters (p>0.05). The Bland-Altman plot showed an even distribution of the visual analysis method with Lin-Lin , HMM-HR, HMM-CO2, and HMM-RMS. Conclusion The proposed mathematical models were effective in determining the first turn point since they detected the linear pattern change and the deflection point of , HR responses, and sEMG. PMID:24346296

  10. Identification and agreement of first turn point by mathematical analysis applied to heart rate, carbon dioxide output and electromyography.

    PubMed

    Zamunér, Antonio R; Catai, Aparecida M; Martins, Luiz E B; Sakabe, Daniel I; Da Silva, Ester

    2013-01-01

    The second heart rate (HR) turn point has been extensively studied, however there are few studies determining the first HR turn point. Also, the use of mathematical and statistical models for determining changes in dynamic characteristics of physiological variables during an incremental cardiopulmonary test has been suggested. To determine the first turn point by analysis of HR, surface electromyography (sEMG), and carbon dioxide output (VCO2) using two mathematical models and to compare the results to those of the visual method. Ten sedentary middle-aged men (53.9 ± 3.2 years old) were submitted to cardiopulmonary exercise testing on an electromagnetic cycle ergometer until exhaustion. Ventilatory variables, HR, and sEMG of the vastus lateralis were obtained in real time. Three methods were used to determine the first turn point: 1) visual analysis based on loss of parallelism between VCO2 and oxygen uptake (VO2); 2) the linear-linear model, based on fitting the curves to the set of VCO2 data (Lin-LinVCO2); 3) a bi-segmental linear regression of Hinkley's algorithm applied to HR (HMM-HR), VCO2 (HMM-VCO2), and sEMG data (HMM-RMS). There were no differences between workload, HR, and ventilatory variable values at the first ventilatory turn point as determined by the five studied parameters (p>0.05). The Bland-Altman plot showed an even distribution of the visual analysis method with Lin-LinVCO2, HMM-HR, HMM-VCO2, and HMM-RMS. The proposed mathematical models were effective in determining the first turn point since they detected the linear pattern change and the deflection point of VCO2, HR responses, and sEMG.

  11. Combination of Alternative Models by Mutual Data Assimilation: Supermodeling With A Suite of Primitive Equation Models

    NASA Astrophysics Data System (ADS)

    Duane, G. S.; Selten, F.

    2016-12-01

    Different models of climate and weather commonly give projections/predictions that differ widely in their details. While averaging of model outputs almost always improves results, nonlinearity implies that further improvement can be obtained from model interaction in run time, as has already been demonstrated with toy systems of ODEs and idealized quasigeostrophic models. In the supermodeling scheme, models effectively assimilate data from one another and partially synchronize with one another. Spread among models is manifest as a spread in possible inter-model connection coefficients, so that the models effectively "agree to disagree". Here, we construct a supermodel formed from variants of the SPEEDO model, a primitive-equation atmospheric model (SPEEDY) coupled to ocean and land. A suite of atmospheric models, coupled to the same ocean and land, is chosen to represent typical differences among climate models by varying model parameters. Connections are introduced between all pairs of corresponding independent variables at synoptic-scale intervals. Strengths of the inter-atmospheric connections can be considered to represent inverse inter-model observation error. Connection strengths are adapted based on an established procedure that extends the dynamical equations of a pair of synchronizing systems to synchronize parameters as well. The procedure is applied to synchronize the suite of SPEEDO models with another SPEEDO model regarded as "truth", adapting the inter-model connections along the way. The supermodel with trained connections gives marginally lower error in all fields than any weighted combination of the separate model outputs when used in "weather-prediction mode", i.e. with constant nudging to truth. Stronger results are obtained if a supermodel is used to predict the formation of coherent structures or the frequency of such. Partially synchronized SPEEDO models give a better representation of the blocked-zonal index cycle than does a weighted average of the constituent model outputs. We have thus shown that supermodeling and the synchronization-based procedure to adapt inter-model connections give results superior to output averaging not only with highly nonlinear toy systems, but with smaller nonlinearities as occur in climate models.

  12. Parameter and state estimation in a Neisseria meningitidis model: A study case of Niger

    NASA Astrophysics Data System (ADS)

    Bowong, S.; Mountaga, L.; Bah, A.; Tewa, J. J.; Kurths, J.

    2016-12-01

    Neisseria meningitidis (Nm) is a major cause of bacterial meningitidis outbreaks in Africa and the Middle East. The availability of yearly reported meningitis cases in the African meningitis belt offers the opportunity to analyze the transmission dynamics and the impact of control strategies. In this paper, we propose a method for the estimation of state variables that are not accessible to measurements and an unknown parameter in a Nm model. We suppose that the yearly number of Nm induced mortality and the total population are known inputs, which can be obtained from data, and the yearly number of new Nm cases is the model output. We also suppose that the Nm transmission rate is an unknown parameter. We first show how the recruitment rate into the population can be estimated using real data of the total population and Nm induced mortality. Then, we use an auxiliary system called observer whose solutions converge exponentially to those of the original model. This observer does not use the unknown infection transmission rate but only uses the known inputs and the model output. This allows us to estimate unmeasured state variables such as the number of carriers that play an important role in the transmission of the infection and the total number of infected individuals within a human community. Finally, we also provide a simple method to estimate the unknown Nm transmission rate. In order to validate the estimation results, numerical simulations are conducted using real data of Niger.

  13. Machine learning for toxicity characterization of organic chemical emissions using USEtox database: Learning the structure of the input space.

    PubMed

    Marvuglia, Antonino; Kanevski, Mikhail; Benetto, Enrico

    2015-10-01

    Toxicity characterization of chemical emissions in Life Cycle Assessment (LCA) is a complex task which usually proceeds via multimedia (fate, exposure and effect) models attached to models of dose-response relationships to assess the effects on target. Different models and approaches do exist, but all require a vast amount of data on the properties of the chemical compounds being assessed, which are hard to collect or hardly publicly available (especially for thousands of less common or newly developed chemicals), therefore hampering in practice the assessment in LCA. An example is USEtox, a consensual model for the characterization of human toxicity and freshwater ecotoxicity. This paper places itself in a line of research aiming at providing a methodology to reduce the number of input parameters necessary to run multimedia fate models, focusing in particular to the application of the USEtox toxicity model. By focusing on USEtox, in this paper two main goals are pursued: 1) performing an extensive exploratory analysis (using dimensionality reduction techniques) of the input space constituted by the substance-specific properties at the aim of detecting particular patterns in the data manifold and estimating the dimension of the subspace in which the data manifold actually lies; and 2) exploring the application of a set of linear models, based on partial least squares (PLS) regression, as well as a nonlinear model (general regression neural network--GRNN) in the seek for an automatic selection strategy of the most informative variables according to the modelled output (USEtox factor). After extensive analysis, the intrinsic dimension of the input manifold has been identified between three and four. The variables selected as most informative may vary according to the output modelled and the model used, but for the toxicity factors modelled in this paper the input variables selected as most informative are coherent with prior expectations based on scientific knowledge of toxicity factors modelling. Thus the outcomes of the analysis are promising for the future application of the approach to other portions of the model, affected by important data gaps, e.g., to the calculation of human health effect factors. Copyright © 2015. Published by Elsevier Ltd.

  14. Ecological Momentary Assessments and Automated Time Series Analysis to Promote Tailored Health Care: A Proof-of-Principle Study.

    PubMed

    van der Krieke, Lian; Emerencia, Ando C; Bos, Elisabeth H; Rosmalen, Judith Gm; Riese, Harriëtte; Aiello, Marco; Sytema, Sjoerd; de Jonge, Peter

    2015-08-07

    Health promotion can be tailored by combining ecological momentary assessments (EMA) with time series analysis. This combined method allows for studying the temporal order of dynamic relationships among variables, which may provide concrete indications for intervention. However, application of this method in health care practice is hampered because analyses are conducted manually and advanced statistical expertise is required. This study aims to show how this limitation can be overcome by introducing automated vector autoregressive modeling (VAR) of EMA data and to evaluate its feasibility through comparisons with results of previously published manual analyses. We developed a Web-based open source application, called AutoVAR, which automates time series analyses of EMA data and provides output that is intended to be interpretable by nonexperts. The statistical technique we used was VAR. AutoVAR tests and evaluates all possible VAR models within a given combinatorial search space and summarizes their results, thereby replacing the researcher's tasks of conducting the analysis, making an informed selection of models, and choosing the best model. We compared the output of AutoVAR to the output of a previously published manual analysis (n=4). An illustrative example consisting of 4 analyses was provided. Compared to the manual output, the AutoVAR output presents similar model characteristics and statistical results in terms of the Akaike information criterion, the Bayesian information criterion, and the test statistic of the Granger causality test. Results suggest that automated analysis and interpretation of times series is feasible. Compared to a manual procedure, the automated procedure is more robust and can save days of time. These findings may pave the way for using time series analysis for health promotion on a larger scale. AutoVAR was evaluated using the results of a previously conducted manual analysis. Analysis of additional datasets is needed in order to validate and refine the application for general use.

  15. A computer program for simulating geohydrologic systems in three dimensions

    USGS Publications Warehouse

    Posson, D.R.; Hearne, G.A.; Tracy, J.V.; Frenzel, P.F.

    1980-01-01

    This document is directed toward individuals who wish to use a computer program to simulate ground-water flow in three dimensions. The strongly implicit procedure (SIP) numerical method is used to solve the set of simultaneous equations. New data processing techniques and program input and output options are emphasized. The quifer system to be modeled may be heterogeneous and anisotropic, and may include both artesian and water-table conditions. Systems which consist of well defined alternating layers of highly permeable and poorly permeable material may be represented by a sequence of equations for two dimensional flow in each of the highly permeable units. Boundaries where head or flux is user-specified may be irregularly shaped. The program also allows the user to represent streams as limited-source boundaries when the streamflow is small in relation to the hydraulic stress on the system. The data-processing techniques relating to ' cube ' input and output, to swapping of layers, to restarting of simulation, to free-format NAMELIST input, to the details of each sub-routine 's logic, and to the overlay program structure are discussed. The program is capable of processing large models that might overflow computer memories with conventional programs. Detailed instructions for selecting program options, for initializing the data arrays, for defining ' cube ' output lists and maps, and for plotting hydrographs of calculated and observed heads and/or drawdowns are provided. Output may be restricted to those nodes of particular interest, thereby reducing the volumes of printout for modelers, which may be critical when working at remote terminals. ' Cube ' input commands allow the modeler to set aquifer parameters and initialize the model with very few input records. Appendixes provide instructions to compile the program, definitions and cross-references for program variables, summary of the FLECS structured FORTRAN programming language, listings of the FLECS and FORTRAN source code, and samples of input and output for example simulations. (USGS)

  16. Ecological Momentary Assessments and Automated Time Series Analysis to Promote Tailored Health Care: A Proof-of-Principle Study

    PubMed Central

    Emerencia, Ando C; Bos, Elisabeth H; Rosmalen, Judith GM; Riese, Harriëtte; Aiello, Marco; Sytema, Sjoerd; de Jonge, Peter

    2015-01-01

    Background Health promotion can be tailored by combining ecological momentary assessments (EMA) with time series analysis. This combined method allows for studying the temporal order of dynamic relationships among variables, which may provide concrete indications for intervention. However, application of this method in health care practice is hampered because analyses are conducted manually and advanced statistical expertise is required. Objective This study aims to show how this limitation can be overcome by introducing automated vector autoregressive modeling (VAR) of EMA data and to evaluate its feasibility through comparisons with results of previously published manual analyses. Methods We developed a Web-based open source application, called AutoVAR, which automates time series analyses of EMA data and provides output that is intended to be interpretable by nonexperts. The statistical technique we used was VAR. AutoVAR tests and evaluates all possible VAR models within a given combinatorial search space and summarizes their results, thereby replacing the researcher’s tasks of conducting the analysis, making an informed selection of models, and choosing the best model. We compared the output of AutoVAR to the output of a previously published manual analysis (n=4). Results An illustrative example consisting of 4 analyses was provided. Compared to the manual output, the AutoVAR output presents similar model characteristics and statistical results in terms of the Akaike information criterion, the Bayesian information criterion, and the test statistic of the Granger causality test. Conclusions Results suggest that automated analysis and interpretation of times series is feasible. Compared to a manual procedure, the automated procedure is more robust and can save days of time. These findings may pave the way for using time series analysis for health promotion on a larger scale. AutoVAR was evaluated using the results of a previously conducted manual analysis. Analysis of additional datasets is needed in order to validate and refine the application for general use. PMID:26254160

  17. Forward Modeling of Oxygen Isotope Variability in Tropical Andean Ice Cores

    NASA Astrophysics Data System (ADS)

    Vuille, M. F.; Hurley, J. V.; Hardy, D. R.

    2016-12-01

    Ice core records from the tropical Andes serve as important archives of past tropical Pacific SST variability and changes in monsoon intensity upstream over the Amazon basin. Yet the interpretation of the oxygen isotopic signal in these ice cores remains controversial. Based on 10 years of continuous on-site glaciologic, meteorologic and isotopic measurements at the summit of the world's largest tropical ice cap, Quelccaya, in southern Peru, we developed a process-based physical forward model (proxy system model), capable of simulating intraseasonal, seasonal and interannual variability in delta-18O as observed in snow pits and short cores. Our results highlight the importance of taking into account post-depositional effects (sublimation and isotopic enrichment) to properly simulate the seasonal cycle. Intraseasonal variability is underestimated in our model unless the effects of cold air incursions, triggering significant monsoonal snowfall and more negative delta-18O values, are included. A number of sensitivity test highlight the influence of changing boundary conditions on the final snow isotopic profile. Such tests also show that our model provides much more realistic data than applying direct model output of precipitation delta-18O from isotope-enabled climate models (SWING ensemble). The forward model was calibrated with and run under present-day conditions, but it can also be driven with past climate forcings to reconstruct paleo-monsoon variability and investigate the influence of changes in radiative forcings (solar, volcanic) on delta-18O variability in Andean snow. The model is transferable and may be used to render a paleoclimatic context at other ice core locations.

  18. Variable self-powered light detection CMOS chip with real-time adaptive tracking digital output based on a novel on-chip sensor.

    PubMed

    Wang, HongYi; Fan, Youyou; Lu, Zhijian; Luo, Tao; Fu, Houqiang; Song, Hongjiang; Zhao, Yuji; Christen, Jennifer Blain

    2017-10-02

    This paper provides a solution for a self-powered light direction detection with digitized output. Light direction sensors, energy harvesting photodiodes, real-time adaptive tracking digital output unit and other necessary circuits are integrated on a single chip based on a standard 0.18 µm CMOS process. Light direction sensors proposed have an accuracy of 1.8 degree over a 120 degree range. In order to improve the accuracy, a compensation circuit is presented for photodiodes' forward currents. The actual measurement precision of output is approximately 7 ENOB. Besides that, an adaptive under voltage protection circuit is designed for variable supply power which may undulate with temperature and process.

  19. Progress with lossy compression of data from the Community Earth System Model

    NASA Astrophysics Data System (ADS)

    Xu, H.; Baker, A.; Hammerling, D.; Li, S.; Clyne, J.

    2017-12-01

    Climate models, such as the Community Earth System Model (CESM), generate massive quantities of data, particularly when run at high spatial and temporal resolutions. The burden of storage is further exacerbated by creating large ensembles, generating large numbers of variables, outputting at high frequencies, and duplicating data archives (to protect against disk failures). Applying lossy compression methods to CESM datasets is an attractive means of reducing data storage requirements, but ensuring that the loss of information does not negatively impact science objectives is critical. In particular, test methods are needed to evaluate whether critical features (e.g., extreme values and spatial and temporal gradients) have been preserved and to boost scientists' confidence in the lossy compression process. We will provide an overview on our progress in applying lossy compression to CESM output and describe our unique suite of metric tests that evaluate the impact of information loss. Further, we will describe our processes how to choose an appropriate compression algorithm (and its associated parameters) given the diversity of CESM data (e.g., variables may be constant, smooth, change abruptly, contain missing values, or have large ranges). Traditional compression algorithms, such as those used for images, are not necessarily ideally suited for floating-point climate simulation data, and different methods may have different strengths and be more effective for certain types of variables than others. We will discuss our progress towards our ultimate goal of developing an automated multi-method parallel approach for compression of climate data that both maximizes data reduction and minimizes the impact of data loss on science results.

  20. Simulating Pacific Northwest Forest Response to Climate Change: How We Made Model Results Useful for Vulnerability Assessments

    NASA Astrophysics Data System (ADS)

    Kim, J. B.; Kerns, B. K.; Halofsky, J.

    2014-12-01

    GCM-based climate projections and downscaled climate data proliferate, and there are many climate-aware vegetation models in use by researchers. Yet application of fine-scale DGVM based simulation output in national forest vulnerability assessments is not common, because there are technical, administrative and social barriers for their use by managers and policy makers. As part of a science-management climate change adaptation partnership, we performed simulations of vegetation response to climate change for four national forests in the Blue Mountains of Oregon using the MC2 dynamic global vegetation model (DGVM) for use in vulnerability assessments. Our simulation results under business-as-usual scenarios suggest a starkly different future forest conditions for three out of the four national forests in the study area, making their adoption by forest managers a potential challenge. However, using DGVM output to structure discussion of potential vegetation changes provides a suitable framework to discuss the dynamic nature of vegetation change compared to using more commonly available model output (e.g. species distribution models). From the onset, we planned and coordinated our work with national forest managers to maximize the utility and the consideration of the simulation results in planning. Key lessons from this collaboration were: (1) structured and strategic selection of a small number climate change scenarios that capture the range of variability in future conditions simplified results; (2) collecting and integrating data from managers for use in simulations increased support and interest in applying output; (3) a structured, regionally focused, and hierarchical calibration of the DGVM produced well-validated results; (4) simple approaches to quantifying uncertainty in simulation results facilitated communication; and (5) interpretation of model results in a holistic context in relation to multiple lines of evidence produced balanced guidance. This latest point demonstrates the importance of using model out as a forum for discussion along with other information, rather than using model output in an inappropriately predictive sense. These lessons are being applied currently to other national forests in the Pacific Northwest to contribute in vulnerability assessments.

  1. Evaluating wind extremes in CMIP5 climate models

    NASA Astrophysics Data System (ADS)

    Kumar, Devashish; Mishra, Vimal; Ganguly, Auroop R.

    2015-07-01

    Wind extremes have consequences for renewable energy sectors, critical infrastructures, coastal ecosystems, and insurance industry. Considerable debates remain regarding the impacts of climate change on wind extremes. While climate models have occasionally shown increases in regional wind extremes, a decline in the magnitude of mean and extreme near-surface wind speeds has been recently reported over most regions of the Northern Hemisphere using observed data. Previous studies of wind extremes under climate change have focused on selected regions and employed outputs from the regional climate models (RCMs). However, RCMs ultimately rely on the outputs of global circulation models (GCMs), and the value-addition from the former over the latter has been questioned. Regional model runs rarely employ the full suite of GCM ensembles, and hence may not be able to encapsulate the most likely projections or their variability. Here we evaluate the performance of the latest generation of GCMs, the Coupled Model Intercomparison Project phase 5 (CMIP5), in simulating extreme winds. We find that the multimodel ensemble (MME) mean captures the spatial variability of annual maximum wind speeds over most regions except over the mountainous terrains. However, the historical temporal trends in annual maximum wind speeds for the reanalysis data, ERA-Interim, are not well represented in the GCMs. The historical trends in extreme winds from GCMs are statistically not significant over most regions. The MME model simulates the spatial patterns of extreme winds for 25-100 year return periods. The projected extreme winds from GCMs exhibit statistically less significant trends compared to the historical reference period.

  2. Hydrologic climate change impacts in the Columbia River Basin and their sensitivity to methodological choices

    NASA Astrophysics Data System (ADS)

    Chegwidden, O.; Nijssen, B.; Mao, Y.; Rupp, D. E.

    2016-12-01

    The Columbia River Basin (CRB) in the United States' Pacific Northwest (PNW) is highly regulated for hydropower generation, flood control, fish survival, irrigation and navigation. Historically it has had a hydrologic regime characterized by winter precipitation in the form of snow, followed by a spring peak in streamflow from snowmelt. Anthropogenic climate change is expected to significantly alter this regime, causing changes to streamflow timing and volume. While numerous hydrologic studies have been conducted across the CRB, the impact of methodological choices in hydrologic modeling has not been as heavily investigated. To better understand their impact on the spread in modeled projections of hydrological change, we ran simulations involving permutations of a variety of methodological choices. We used outputs from ten global climate models (GCMs) and two representative concentration pathways from the Intergovernmental Panel on Climate Change's Fifth Assessment Report. After downscaling the GCM output using three different techniques we forced the Variable Infiltration Capacity (VIC) model and the Precipitation Runoff Modeling System (PRMS), both implemented at 1/16th degree ( 5 km) for the period 1950-2099. For the VIC model, we used three independently-derived parameter sets. We will show results from the range of simulations, both in the form of basin-wide spatial analyses of hydrologic variables and through analyses of changes in streamflow at selected sites throughout the CRB. We will then discuss the differences in sensitivities to climate change seen among the projections, paying particular attention to differences in projections from the hydrologic models and different parameter sets.

  3. Regional model simulations of New Zealand climate

    NASA Astrophysics Data System (ADS)

    Renwick, James A.; Katzfey, Jack J.; Nguyen, Kim C.; McGregor, John L.

    1998-03-01

    Simulation of New Zealand climate is examined through the use of a regional climate model nested within the output of the Commonwealth Scientific and Industrial Research Organisation nine-level general circulation model (GCM). R21 resolution GCM output is used to drive a regional model run at 125 km grid spacing over the Australasian region. The 125 km run is used in turn to drive a simulation at 50 km resolution over New Zealand. Simulations with a full seasonal cycle are performed for 10 model years. The focus is on the quality of the simulation of present-day climate, but results of a doubled-CO2 run are discussed briefly. Spatial patterns of mean simulated precipitation and surface temperatures improve markedly as horizontal resolution is increased, through the better resolution of the country's orography. However, increased horizontal resolution leads to a positive bias in precipitation. At 50 km resolution, simulated frequency distributions of daily maximum/minimum temperatures are statistically similar to those of observations at many stations, while frequency distributions of daily precipitation appear to be statistically different to those of observations at most stations. Modeled daily precipitation variability at 125 km resolution is considerably less than observed, but is comparable to, or exceeds, observed variability at 50 km resolution. The sensitivity of the simulated climate to changes in the specification of the land surface is discussed briefly. Spatial patterns of the frequency of extreme temperatures and precipitation are generally well modeled. Under a doubling of CO2, the frequency of precipitation extremes changes only slightly at most locations, while air frosts become virtually unknown except at high-elevation sites.

  4. Mars approach for global sensitivity analysis of differential equation models with applications to dynamics of influenza infection.

    PubMed

    Lee, Yeonok; Wu, Hulin

    2012-01-01

    Differential equation models are widely used for the study of natural phenomena in many fields. The study usually involves unknown factors such as initial conditions and/or parameters. It is important to investigate the impact of unknown factors (parameters and initial conditions) on model outputs in order to better understand the system the model represents. Apportioning the uncertainty (variation) of output variables of a model according to the input factors is referred to as sensitivity analysis. In this paper, we focus on the global sensitivity analysis of ordinary differential equation (ODE) models over a time period using the multivariate adaptive regression spline (MARS) as a meta model based on the concept of the variance of conditional expectation (VCE). We suggest to evaluate the VCE analytically using the MARS model structure of univariate tensor-product functions which is more computationally efficient. Our simulation studies show that the MARS model approach performs very well and helps to significantly reduce the computational cost. We present an application example of sensitivity analysis of ODE models for influenza infection to further illustrate the usefulness of the proposed method.

  5. Neural Network Machine Learning and Dimension Reduction for Data Visualization

    NASA Technical Reports Server (NTRS)

    Liles, Charles A.

    2014-01-01

    Neural network machine learning in computer science is a continuously developing field of study. Although neural network models have been developed which can accurately predict a numeric value or nominal classification, a general purpose method for constructing neural network architecture has yet to be developed. Computer scientists are often forced to rely on a trial-and-error process of developing and improving accurate neural network models. In many cases, models are constructed from a large number of input parameters. Understanding which input parameters have the greatest impact on the prediction of the model is often difficult to surmise, especially when the number of input variables is very high. This challenge is often labeled the "curse of dimensionality" in scientific fields. However, techniques exist for reducing the dimensionality of problems to just two dimensions. Once a problem's dimensions have been mapped to two dimensions, it can be easily plotted and understood by humans. The ability to visualize a multi-dimensional dataset can provide a means of identifying which input variables have the highest effect on determining a nominal or numeric output. Identifying these variables can provide a better means of training neural network models; models can be more easily and quickly trained using only input variables which appear to affect the outcome variable. The purpose of this project is to explore varying means of training neural networks and to utilize dimensional reduction for visualizing and understanding complex datasets.

  6. Surface laser marking optimization using an experimental design approach

    NASA Astrophysics Data System (ADS)

    Brihmat-Hamadi, F.; Amara, E. H.; Lavisse, L.; Jouvard, J. M.; Cicala, E.; Kellou, H.

    2017-04-01

    Laser surface marking is performed on a titanium substrate using a pulsed frequency doubled Nd:YAG laser ( λ= 532 nm, τ pulse=5 ns) to process the substrate surface under normal atmospheric conditions. The aim of the work is to investigate, following experimental and statistical approaches, the correlation between the process parameters and the response variables (output), using a Design of Experiment method (DOE): Taguchi methodology and a response surface methodology (RSM). A design is first created using MINTAB program, and then the laser marking process is performed according to the planned design. The response variables; surface roughness and surface reflectance were measured for each sample, and incorporated into the design matrix. The results are then analyzed and the RSM model is developed and verified for predicting the process output for the given set of process parameters values. The analysis shows that the laser beam scanning speed is the most influential operating factor followed by the laser pumping intensity during marking, while the other factors show complex influences on the objective functions.

  7. Balancing Europe's wind power output through spatial deployment informed by weather regimes.

    PubMed

    Grams, Christian M; Beerli, Remo; Pfenninger, Stefan; Staffell, Iain; Wernli, Heini

    2017-08-01

    As wind and solar power provide a growing share of Europe's electricity1, understanding and accommodating their variability on multiple timescales remains a critical problem. On weekly timescales, variability is related to long-lasting weather conditions, called weather regimes2-5, which can cause lulls with a loss of wind power across neighbouring countries6. Here we show that weather regimes provide a meteorological explanation for multi-day fluctuations in Europe's wind power and can help guide new deployment pathways which minimise this variability. Mean generation during different regimes currently ranges from 22 GW to 44 GW and is expected to triple by 2030 with current planning strategies. However, balancing future wind capacity across regions with contrasting inter-regime behaviour - specifically deploying in the Balkans instead of the North Sea - would almost eliminate these output variations, maintain mean generation, and increase fleet-wide minimum output. Solar photovoltaics could balance low-wind regimes locally, but only by expanding current capacity tenfold. New deployment strategies based on an understanding of continent-scale wind patterns and pan-European collaboration could enable a high share of wind energy whilst minimising the negative impacts of output variability.

  8. Associated and Mediating Variables Related to Job Satisfaction among Professionals from Mental Health Teams.

    PubMed

    Fleury, Marie-Josée; Grenier, Guy; Bamvita, Jean-Marie; Chiocchio, François

    2018-06-01

    Using a structural analysis, this study examines the relationship between job satisfaction among 315 mental health professionals from the province of Quebec (Canada) and a wide range of variables related to provider characteristics, team characteristics, processes, and emergent states, and organizational culture. We used the Job Satisfaction Survey to assess job satisfaction. Our conceptual framework integrated numerous independent variables adapted from the input-mediator-output-input (IMOI) model and the Integrated Team Effectiveness Model (ITEM). The structural equation model predicted 47% of the variance of job satisfaction. Job satisfaction was associated with eight variables: strong team support, participation in the decision-making process, closer collaboration, fewer conflicts among team members, modest knowledge production (team processes), firm affective commitment, multifocal identification (emergent states) and belonging to the nursing profession (provider characteristics). Team climate had an impact on six job satisfaction variables (team support, knowledge production, conflicts, affective commitment, collaboration, and multifocal identification). Results show that team processes and emergent states were mediators between job satisfaction and team climate. To increase job satisfaction among professionals, health managers need to pursue strategies that foster a positive climate within mental health teams.

  9. Effect of promoter architecture on the cell-to-cell variability in gene expression.

    PubMed

    Sanchez, Alvaro; Garcia, Hernan G; Jones, Daniel; Phillips, Rob; Kondev, Jané

    2011-03-01

    According to recent experimental evidence, promoter architecture, defined by the number, strength and regulatory role of the operators that control transcription, plays a major role in determining the level of cell-to-cell variability in gene expression. These quantitative experiments call for a corresponding modeling effort that addresses the question of how changes in promoter architecture affect variability in gene expression in a systematic rather than case-by-case fashion. In this article we make such a systematic investigation, based on a microscopic model of gene regulation that incorporates stochastic effects. In particular, we show how operator strength and operator multiplicity affect this variability. We examine different modes of transcription factor binding to complex promoters (cooperative, independent, simultaneous) and how each of these affects the level of variability in transcriptional output from cell-to-cell. We propose that direct comparison between in vivo single-cell experiments and theoretical predictions for the moments of the probability distribution of mRNA number per cell can be used to test kinetic models of gene regulation. The emphasis of the discussion is on prokaryotic gene regulation, but our analysis can be extended to eukaryotic cells as well.

  10. Effect of Promoter Architecture on the Cell-to-Cell Variability in Gene Expression

    PubMed Central

    Sanchez, Alvaro; Garcia, Hernan G.; Jones, Daniel; Phillips, Rob; Kondev, Jané

    2011-01-01

    According to recent experimental evidence, promoter architecture, defined by the number, strength and regulatory role of the operators that control transcription, plays a major role in determining the level of cell-to-cell variability in gene expression. These quantitative experiments call for a corresponding modeling effort that addresses the question of how changes in promoter architecture affect variability in gene expression in a systematic rather than case-by-case fashion. In this article we make such a systematic investigation, based on a microscopic model of gene regulation that incorporates stochastic effects. In particular, we show how operator strength and operator multiplicity affect this variability. We examine different modes of transcription factor binding to complex promoters (cooperative, independent, simultaneous) and how each of these affects the level of variability in transcriptional output from cell-to-cell. We propose that direct comparison between in vivo single-cell experiments and theoretical predictions for the moments of the probability distribution of mRNA number per cell can be used to test kinetic models of gene regulation. The emphasis of the discussion is on prokaryotic gene regulation, but our analysis can be extended to eukaryotic cells as well. PMID:21390269

  11. Improved Regression Analysis of Temperature-Dependent Strain-Gage Balance Calibration Data

    NASA Technical Reports Server (NTRS)

    Ulbrich, N.

    2015-01-01

    An improved approach is discussed that may be used to directly include first and second order temperature effects in the load prediction algorithm of a wind tunnel strain-gage balance. The improved approach was designed for the Iterative Method that fits strain-gage outputs as a function of calibration loads and uses a load iteration scheme during the wind tunnel test to predict loads from measured gage outputs. The improved approach assumes that the strain-gage balance is at a constant uniform temperature when it is calibrated and used. First, the method introduces a new independent variable for the regression analysis of the balance calibration data. The new variable is designed as the difference between the uniform temperature of the balance and a global reference temperature. This reference temperature should be the primary calibration temperature of the balance so that, if needed, a tare load iteration can be performed. Then, two temperature{dependent terms are included in the regression models of the gage outputs. They are the temperature difference itself and the square of the temperature difference. Simulated temperature{dependent data obtained from Triumph Aerospace's 2013 calibration of NASA's ARC-30K five component semi{span balance is used to illustrate the application of the improved approach.

  12. Optimisation of Ferrochrome Addition Using Multi-Objective Evolutionary and Genetic Algorithms for Stainless Steel Making via AOD Converter

    NASA Astrophysics Data System (ADS)

    Behera, Kishore Kumar; Pal, Snehanshu

    2018-03-01

    This paper describes a new approach towards optimum utilisation of ferrochrome added during stainless steel making in AOD converter. The objective of optimisation is to enhance end blow chromium content of steel and reduce the ferrochrome addition during refining. By developing a thermodynamic based mathematical model, a study has been conducted to compute the optimum trade-off between ferrochrome addition and end blow chromium content of stainless steel using a predator prey genetic algorithm through training of 100 dataset considering different input and output variables such as oxygen, argon, nitrogen blowing rate, duration of blowing, initial bath temperature, chromium and carbon content, weight of ferrochrome added during refining. Optimisation is performed within constrained imposed on the input parameters whose values fall within certain ranges. The analysis of pareto fronts is observed to generate a set of feasible optimal solution between the two conflicting objectives that provides an effective guideline for better ferrochrome utilisation. It is found out that after a certain critical range, further addition of ferrochrome does not affect the chromium percentage of steel. Single variable response analysis is performed to study the variation and interaction of all individual input parameters on output variables.

  13. Integrated Control Using the SOFFT Control Structure

    NASA Technical Reports Server (NTRS)

    Halyo, Nesim

    1996-01-01

    The need for integrated/constrained control systems has become clearer as advanced aircraft introduced new coupled subsystems such as new propulsion subsystems with thrust vectoring and new aerodynamic designs. In this study, we develop an integrated control design methodology which accomodates constraints among subsystem variables while using the Stochastic Optimal Feedforward/Feedback Control Technique (SOFFT) thus maintaining all the advantages of the SOFFT approach. The Integrated SOFFT Control methodology uses a centralized feedforward control and a constrained feedback control law. The control thus takes advantage of the known coupling among the subsystems while maintaining the identity of subsystems for validation purposes and the simplicity of the feedback law to understand the system response in complicated nonlinear scenarios. The Variable-Gain Output Feedback Control methodology (including constant gain output feedback) is extended to accommodate equality constraints. A gain computation algorithm is developed. The designer can set the cross-gains between two variables or subsystems to zero or another value and optimize the remaining gains subject to the constraint. An integrated control law is designed for a modified F-15 SMTD aircraft model with coupled airframe and propulsion subsystems using the Integrated SOFFT Control methodology to produce a set of desired flying qualities.

  14. Low rank approach to computing first and higher order derivatives using automatic differentiation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reed, J. A.; Abdel-Khalik, H. S.; Utke, J.

    2012-07-01

    This manuscript outlines a new approach for increasing the efficiency of applying automatic differentiation (AD) to large scale computational models. By using the principles of the Efficient Subspace Method (ESM), low rank approximations of the derivatives for first and higher orders can be calculated using minimized computational resources. The output obtained from nuclear reactor calculations typically has a much smaller numerical rank compared to the number of inputs and outputs. This rank deficiency can be exploited to reduce the number of derivatives that need to be calculated using AD. The effective rank can be determined according to ESM by computingmore » derivatives with AD at random inputs. Reduced or pseudo variables are then defined and new derivatives are calculated with respect to the pseudo variables. Two different AD packages are used: OpenAD and Rapsodia. OpenAD is used to determine the effective rank and the subspace that contains the derivatives. Rapsodia is then used to calculate derivatives with respect to the pseudo variables for the desired order. The overall approach is applied to two simple problems and to MATWS, a safety code for sodium cooled reactors. (authors)« less

  15. A physics-based probabilistic forecasting model for rainfall-induced shallow landslides at regional scale

    NASA Astrophysics Data System (ADS)

    Zhang, Shaojie; Zhao, Luqiang; Delgado-Tellez, Ricardo; Bao, Hongjun

    2018-03-01

    Conventional outputs of physics-based landslide forecasting models are presented as deterministic warnings by calculating the safety factor (Fs) of potentially dangerous slopes. However, these models are highly dependent on variables such as cohesion force and internal friction angle which are affected by a high degree of uncertainty especially at a regional scale, resulting in unacceptable uncertainties of Fs. Under such circumstances, the outputs of physical models are more suitable if presented in the form of landslide probability values. In order to develop such models, a method to link the uncertainty of soil parameter values with landslide probability is devised. This paper proposes the use of Monte Carlo methods to quantitatively express uncertainty by assigning random values to physical variables inside a defined interval. The inequality Fs < 1 is tested for each pixel in n simulations which are integrated in a unique parameter. This parameter links the landslide probability to the uncertainties of soil mechanical parameters and is used to create a physics-based probabilistic forecasting model for rainfall-induced shallow landslides. The prediction ability of this model was tested in a case study, in which simulated forecasting of landslide disasters associated with heavy rainfalls on 9 July 2013 in the Wenchuan earthquake region of Sichuan province, China, was performed. The proposed model successfully forecasted landslides in 159 of the 176 disaster points registered by the geo-environmental monitoring station of Sichuan province. Such testing results indicate that the new model can be operated in a highly efficient way and show more reliable results, attributable to its high prediction accuracy. Accordingly, the new model can be potentially packaged into a forecasting system for shallow landslides providing technological support for the mitigation of these disasters at regional scale.

  16. Neural Network and Regression Soft Model Extended for PAX-300 Aircraft Engine

    NASA Technical Reports Server (NTRS)

    Patnaik, Surya N.; Hopkins, Dale A.

    2002-01-01

    In fiscal year 2001, the neural network and regression capabilities of NASA Glenn Research Center's COMETBOARDS design optimization testbed were extended to generate approximate models for the PAX-300 aircraft engine. The analytical model of the engine is defined through nine variables: the fan efficiency factor, the low pressure of the compressor, the high pressure of the compressor, the high pressure of the turbine, the low pressure of the turbine, the operating pressure, and three critical temperatures (T(sub 4), T(sub vane), and T(sub metal)). Numerical Propulsion System Simulation (NPSS) calculations of the specific fuel consumption (TSFC), as a function of the variables can become time consuming, and numerical instabilities can occur during these design calculations. "Soft" models can alleviate both deficiencies. These approximate models are generated from a set of high-fidelity input-output pairs obtained from the NPSS code and a design of the experiment strategy. A neural network and a regression model with 45 weight factors were trained for the input/output pairs. Then, the trained models were validated through a comparison with the original NPSS code. Comparisons of TSFC versus the operating pressure and of TSFC versus the three temperatures (T(sub 4), T(sub vane), and T(sub metal)) are depicted in the figures. The overall performance was satisfactory for both the regression and the neural network model. The regression model required fewer calculations than the neural network model, and it produced marginally superior results. Training the approximate methods is time consuming. Once trained, the approximate methods generated the solution with only a trivial computational effort, reducing the solution time from hours to less than a minute.

  17. Global sensitivity analysis of a local water balance model predicting evaporation, water yield and drought

    NASA Astrophysics Data System (ADS)

    Speich, Matthias; Zappa, Massimiliano; Lischke, Heike

    2017-04-01

    Evaporation and transpiration affect both catchment water yield and the growing conditions for vegetation. They are driven by climate, but also depend on vegetation, soil and land surface properties. In hydrological and land surface models, these properties may be included as constant parameters, or as state variables. Often, little is known about the effect of these variables on model outputs. In the present study, the effect of surface properties on evaporation was assessed in a global sensitivity analysis. To this effect, we developed a simple local water balance model combining state-of-the-art process formulations for evaporation, transpiration and soil water balance. The model is vertically one-dimensional, and the relative simplicity of its process formulations makes it suitable for integration in a spatially distributed model at regional scale. The main model outputs are annual total evaporation (TE, i.e. the sum of transpiration, soil evaporation and interception), and a drought index (DI), which is based on the ratio of actual and potential transpiration. This index represents the growing conditions for forest trees. The sensitivity analysis was conducted in two steps. First, a screening analysis was applied to identify unimportant parameters out of an initial set of 19 parameters. In a second step, a statistical meta-model was applied to a sample of 800 model runs, in which the values of the important parameters were varied. Parameter effect and interactions were analyzed with effects plots. The model was driven with forcing data from ten meteorological stations in Switzerland, representing a wide range of precipitation regimes across a strong temperature gradient. Of the 19 original parameters, eight were identified as important in the screening analysis. Both steps highlighted the importance of Plant Available Water Capacity (AWC) and Leaf Area Index (LAI). However, their effect varies greatly across stations. For example, while a transition from a sparse to a closed forest canopy has almost no effect on annual TE at warm and dry sites, it increases TE by up to 100 mm/year at cold-humid and warm-humid sites. Further parameters of importance describe infiltration, as well as canopy resistance and its response to environmental variables. This study offers insights for future development of hydrological and ecohydrological models. First, it shows that although local water balance is primarily controlled by climate, the vegetation and soil parameters may have a large impact on the outputs. Second, it indicates that modeling studies should prioritize a realistic parameterization of LAI and AWC, while other parameters may be set to fixed values. Third, it illustrates to which extent parameter effect and interactions depend on local climate.

  18. Measuring the efficiency of the Greek rural primary health care using a restricted DEA model; the case of southern and western Greece.

    PubMed

    Oikonomou, Nikolaos; Tountas, Yannis; Mariolis, Argiris; Souliotis, Kyriakos; Athanasakis, Kostas; Kyriopoulos, John

    2016-12-01

    This is a study to measure the efficiency of the rural Health Centres (HCs) and their Regional Surgeries (RSs) of the 6th Health Prefecture (HP) of Greece, which covers Southern and Western Greece. Data Envelopment Analysis (DEA) was applied under Constant and Variable Returns to Scale, using a weight-restricted, output-oriented model, to calculate pure technical efficiency (PΤΕ), scale efficiency (SE) and total technical efficiency (TE). The selection of inputs, outputs and their relative weights in the model was based on two consecutive consensus panels of experts on Primary Health Care (PHC). Medical personnel, nursing personnel and technological equipment were chosen as inputs and were attributed appropriate weight restrictions. Acute, chronic and preventive consultations where chosen as outputs; each output was constructed by smaller subcategories of different relative importance. Data were collected through a questionnaire sent to all HCs of the covered area. From the 42 HCs which provided complete data, the study identified 9 as technical efficient, 5 as scale efficient and 2 as total efficient. The mean TE, PTE and SE scores of the HCs of the 6th Health Prefecture were 0.57, 0.67 and 0.87, respectively. The results demonstrate noteworthy variation in efficiency in the productive process of the HCs of Southern and Western Greece. The dominant form of inefficiency was technical inefficiency. The HCs of the 6th HP can theoretically produce 33 % more output on average, using their current production factors. These results indicated potential for considerable efficiency improvement in most rural health care units. Emphasis on prevention and chronic disease management, as well as wider structural and organisational reforms, are discussed from the viewpoint of how to increase efficiency.

  19. A national study of efficiency for dialysis centers: an examination of market competition and facility characteristics for production of multiple dialysis outputs.

    PubMed

    Ozgen, Hacer; Ozcan, Yasar A

    2002-06-01

    To examine market competition and facility characteristics that can be related to technical efficiency in the production of multiple dialysis outputs from the perspective of the industrial organization model. Freestanding dialysis facilities that operated in 1997 submitted cost report fonns to the Health Care Financing Administration (HCFA), and offered all three outputs--outpatient dialysis, dialysis training, and home program dialysis. The Independent Renal Facility Cost Report Data file (IRFCRD) from HCFA was utilized to obtain information on output and input variables and market and facility features for 791 multiple-output facilities. Information regarding population characteristics was obtained from the Area Resources File. Cross-sectional data for the year 1997 were utilized to obtain facility-specific technical efficiency scores estimated through Data Envelopment Analysis (DEA). A binary variable of efficiency status was then regressed against its market and facility characteristics and control factors in a multivariate logistic regression analysis. The majority of the facilities in the sample are functioning technically inefficiently. Neither the intensity of market competition nor a policy of dialyzer reuse has a significant effect on the facilities' efficiency. Technical efficiency is significantly associated, however, with type of ownership, with the interaction between the market concentration of for-profits and ownership type, and with affiliations with chains of different sizes. Nonprofit and government-owned Facilities are more likely than their for-profit counterparts to become inefficient producers of renal dialysis outputs. On the other hand, that relationship between ownership form and efficiency is reversed as the market concentration of for-profits in a given market increases. Facilities that are members of large chains are more likely to be technically inefficient. Facilities do not appear to benefit from joint production of a variety of dialysis outputs, which may explain the ongoing tendency toward single-output production. Ownership form does make a positive difference in production efficiency, but only in local markets where competition exists between nonprofit and for-profit facilities. The increasing inefficiency associated with membership in large chains suggests that the growing consolidation in the dialysis industry may not, in fact, be the strategy for attaining more technical efficiency in the production of multiple dialysis outputs.

  20. Modelling Freshwater Resources at the Global Scale: Challenges and Prospects

    NASA Technical Reports Server (NTRS)

    Doll, Petra; Douville, Herve; Guntner, Andreas; Schmied, Hannes Muller; Wada, Yoshihide

    2015-01-01

    Quantification of spatially and temporally resolved water flows and water storage variations for all land areas of the globe is required to assess water resources, water scarcity and flood hazards, and to understand the Earth system. This quantification is done with the help of global hydrological models (GHMs). What are the challenges and prospects in the development and application of GHMs? Seven important challenges are presented. (1) Data scarcity makes quantification of human water use difficult even though significant progress has been achieved in the last decade. (2) Uncertainty of meteorological input data strongly affects model outputs. (3) The reaction of vegetation to changing climate and CO2 concentrations is uncertain and not taken into account in most GHMs that serve to estimate climate change impacts. (4) Reasons for discrepant responses of GHMs to changing climate have yet to be identified. (5) More accurate estimates of monthly time series of water availability and use are needed to provide good indicators of water scarcity. (6) Integration of gradient-based groundwater modelling into GHMs is necessary for a better simulation of groundwater-surface water interactions and capillary rise. (7) Detection and attribution of human interference with freshwater systems by using GHMs are constrained by data of insufficient quality but also GHM uncertainty itself. Regarding prospects for progress, we propose to decrease the uncertainty of GHM output by making better use of in situ and remotely sensed observations of output variables such as river discharge or total water storage variations by multi-criteria validation, calibration or data assimilation. Finally, we present an initiative that works towards the vision of hyper resolution global hydrological modelling where GHM outputs would be provided at a 1-km resolution with reasonable accuracy.

  1. Experimental studies of a continuous-wave HF(DF) confocal unstable resonator. Interim report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chodzko, R.A.; Cross, E.F.; Durran, D.A.

    1976-05-03

    A series of experiments were performed on a continuous-wave HF(DF) multiline edge-coupled confocal unstable resonator at The Aerospace Corporation MESA facility. Experimental techniques were developed to measure remotely (from a blockhouse) the output power, the near-field intensity distribution, the spatially resolved spectral content of the near field, and the far-field power distribution. A new technique in which a variable aperture calorimeter absorbing scraper (VACAS) was used for measuring the continuous-wave output power from an unstable resonator with variable-mode geometry and without the use of an output coupling mirror was developed. (GRA)

  2. Multivariate statistical analysis of a high rate biofilm process treating kraft mill bleach plant effluent.

    PubMed

    Goode, C; LeRoy, J; Allen, D G

    2007-01-01

    This study reports on a multivariate analysis of the moving bed biofilm reactor (MBBR) wastewater treatment system at a Canadian pulp mill. The modelling approach involved a data overview by principal component analysis (PCA) followed by partial least squares (PLS) modelling with the objective of explaining and predicting changes in the BOD output of the reactor. Over two years of data with 87 process measurements were used to build the models. Variables were collected from the MBBR control scheme as well as upstream in the bleach plant and in digestion. To account for process dynamics, a variable lagging approach was used for variables with significant temporal correlations. It was found that wood type pulped at the mill was a significant variable governing reactor performance. Other important variables included flow parameters, faults in the temperature or pH control of the reactor, and some potential indirect indicators of biomass activity (residual nitrogen and pH out). The most predictive model was found to have an RMSEP value of 606 kgBOD/d, representing a 14.5% average error. This was a good fit, given the measurement error of the BOD test. Overall, the statistical approach was effective in describing and predicting MBBR treatment performance.

  3. Technical note: Bayesian calibration of dynamic ruminant nutrition models.

    PubMed

    Reed, K F; Arhonditsis, G B; France, J; Kebreab, E

    2016-08-01

    Mechanistic models of ruminant digestion and metabolism have advanced our understanding of the processes underlying ruminant animal physiology. Deterministic modeling practices ignore the inherent variation within and among individual animals and thus have no way to assess how sources of error influence model outputs. We introduce Bayesian calibration of mathematical models to address the need for robust mechanistic modeling tools that can accommodate error analysis by remaining within the bounds of data-based parameter estimation. For the purpose of prediction, the Bayesian approach generates a posterior predictive distribution that represents the current estimate of the value of the response variable, taking into account both the uncertainty about the parameters and model residual variability. Predictions are expressed as probability distributions, thereby conveying significantly more information than point estimates in regard to uncertainty. Our study illustrates some of the technical advantages of Bayesian calibration and discusses the future perspectives in the context of animal nutrition modeling. Copyright © 2016 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.

  4. Adventures in holistic ecosystem modelling: the cumberland basin ecosystem model

    NASA Astrophysics Data System (ADS)

    Gordon, D. C.; Keizer, P. D.; Daborn, G. R.; Schwinghamer, P.; Silvert, W. L.

    A holistic ecosystem model has been developed for the Cumberland Basin, a turbid macrotidal estuary at the head of Canada's Bay of Fundy. The model was constructed as a group exercise involving several dozen scientists. Philosophy of approach and methods were patterned after the BOEDE Ems-Dollard modelling project. The model is one-dimensional, has 3 compartments and 3 boundaries, and is composed of 3 separate submodels (physical, pelagic and benthic). The 28 biological state variables cover the complete estuarine ecosystem and represent broad functional groups of organisms based on trophic relationships. Although still under development and not yet validated, the model has been verified and has reached the stage where most state variables provide reasonable output. The modelling process has stimulated interdisciplinary discussion, identified important data gaps and produced a quantitative tool which can be used to examine ecological hypotheses and determine critical environmental processes. As a result, Canadian scientists have a much better understanding of the Cumberland Basin ecosystem and are better able to provide competent advice on environmental management.

  5. Water quality modelling of an impacted semi-arid catchment using flow data from the WEAP model

    NASA Astrophysics Data System (ADS)

    Slaughter, Andrew R.; Mantel, Sukhmani K.

    2018-04-01

    The continuous decline in water quality in many regions is forcing a shift from quantity-based water resources management to a greater emphasis on water quality management. Water quality models can act as invaluable tools as they facilitate a conceptual understanding of processes affecting water quality and can be used to investigate the water quality consequences of management scenarios. In South Africa, the Water Quality Systems Assessment Model (WQSAM) was developed as a management-focussed water quality model that is relatively simple to be able to utilise the small amount of available observed data. Importantly, WQSAM explicitly links to systems (yield) models routinely used in water resources management in South Africa by using their flow output to drive water quality simulations. Although WQSAM has been shown to be able to represent the variability of water quality in South African rivers, its focus on management from a South African perspective limits its use to within southern African regions for which specific systems model setups exist. Facilitating the use of WQSAM within catchments outside of southern Africa and within catchments for which these systems model setups to not exist would require WQSAM to be able to link to a simple-to-use and internationally-applied systems model. One such systems model is the Water Evaluation and Planning (WEAP) model, which incorporates a rainfall-runoff component (natural hydrology), and reservoir storage, return flows and abstractions (systems modelling), but within which water quality modelling facilities are rudimentary. The aims of the current study were therefore to: (1) adapt the WQSAM model to be able to use as input the flow outputs of the WEAP model and; (2) provide an initial assessment of how successful this linkage was by application of the WEAP and WQSAM models to the Buffalo River for historical conditions; a small, semi-arid and impacted catchment in the Eastern Cape of South Africa. The simulations of the two models were compared to the available observed data, with the initial focus within WQSAM on a simulation of instream total dissolved solids (TDS) and nutrient concentrations. The WEAP model was able to adequately simulate flow in the Buffalo River catchment, with consideration of human inputs and outputs. WQSAM was adapted to successfully take as input the flow output of the WEAP model, and the simulations of nutrients by WQSAM provided a good representation of the variability of observed nutrient concentrations in the catchment. This study showed that the WQSAM model is able to accept flow inputs from the WEAP model, and that this approach is able to provide satisfactory estimates of both flow and water quality for a small, semi-arid and impacted catchment. It is hoped that this research will encourage the application of WQSAM to an increased number of catchments within southern Africa and beyond.

  6. Optimal Solar PV Arrays Integration for Distributed Generation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Omitaomu, Olufemi A; Li, Xueping

    2012-01-01

    Solar photovoltaic (PV) systems hold great potential for distributed energy generation by installing PV panels on rooftops of residential and commercial buildings. Yet challenges arise along with the variability and non-dispatchability of the PV systems that affect the stability of the grid and the economics of the PV system. This paper investigates the integration of PV arrays for distributed generation applications by identifying a combination of buildings that will maximize solar energy output and minimize system variability. Particularly, we propose mean-variance optimization models to choose suitable rooftops for PV integration based on Markowitz mean-variance portfolio selection model. We further introducemore » quantity and cardinality constraints to result in a mixed integer quadratic programming problem. Case studies based on real data are presented. An efficient frontier is obtained for sample data that allows decision makers to choose a desired solar energy generation level with a comfortable variability tolerance level. Sensitivity analysis is conducted to show the tradeoffs between solar PV energy generation potential and variability.« less

  7. Selection of relevant input variables in storm water quality modeling by multiobjective evolutionary polynomial regression paradigm

    NASA Astrophysics Data System (ADS)

    Creaco, E.; Berardi, L.; Sun, Siao; Giustolisi, O.; Savic, D.

    2016-04-01

    The growing availability of field data, from information and communication technologies (ICTs) in "smart" urban infrastructures, allows data modeling to understand complex phenomena and to support management decisions. Among the analyzed phenomena, those related to storm water quality modeling have recently been gaining interest in the scientific literature. Nonetheless, the large amount of available data poses the problem of selecting relevant variables to describe a phenomenon and enable robust data modeling. This paper presents a procedure for the selection of relevant input variables using the multiobjective evolutionary polynomial regression (EPR-MOGA) paradigm. The procedure is based on scrutinizing the explanatory variables that appear inside the set of EPR-MOGA symbolic model expressions of increasing complexity and goodness of fit to target output. The strategy also enables the selection to be validated by engineering judgement. In such context, the multiple case study extension of EPR-MOGA, called MCS-EPR-MOGA, is adopted. The application of the proposed procedure to modeling storm water quality parameters in two French catchments shows that it was able to significantly reduce the number of explanatory variables for successive analyses. Finally, the EPR-MOGA models obtained after the input selection are compared with those obtained by using the same technique without benefitting from input selection and with those obtained in previous works where other data-modeling techniques were used on the same data. The comparison highlights the effectiveness of both EPR-MOGA and the input selection procedure.

  8. Analysis of Neuronal Spike Trains, Deconstructed

    PubMed Central

    Aljadeff, Johnatan; Lansdell, Benjamin J.; Fairhall, Adrienne L.; Kleinfeld, David

    2016-01-01

    As information flows through the brain, neuronal firing progresses from encoding the world as sensed by the animal to driving the motor output of subsequent behavior. One of the more tractable goals of quantitative neuroscience is to develop predictive models that relate the sensory or motor streams with neuronal firing. Here we review and contrast analytical tools used to accomplish this task. We focus on classes of models in which the external variable is compared with one or more feature vectors to extract a low-dimensional representation, the history of spiking and other variables are potentially incorporated, and these factors are nonlinearly transformed to predict the occurrences of spikes. We illustrate these techniques in application to datasets of different degrees of complexity. In particular, we address the fitting of models in the presence of strong correlations in the external variable, as occurs in natural sensory stimuli and in movement. Spectral correlation between predicted and measured spike trains is introduced to contrast the relative success of different methods. PMID:27477016

  9. Control Augmented Structural Synthesis

    NASA Technical Reports Server (NTRS)

    Lust, Robert V.; Schmit, Lucien A.

    1988-01-01

    A methodology for control augmented structural synthesis is proposed for a class of structures which can be modeled as an assemblage of frame and/or truss elements. It is assumed that both the plant (structure) and the active control system dynamics can be adequately represented with a linear model. The structural sizing variables, active control system feedback gains and nonstructural lumped masses are treated simultaneously as independent design variables. Design constraints are imposed on static and dynamic displacements, static stresses, actuator forces and natural frequencies to ensure acceptable system behavior. Multiple static and dynamic loading conditions are considered. Side constraints imposed on the design variables protect against the generation of unrealizable designs. While the proposed approach is fundamentally more general, here the methodology is developed and demonstrated for the case where: (1) the dynamic loading is harmonic and thus the steady state response is of primary interest; (2) direct output feedback is used for the control system model; and (3) the actuators and sensors are collocated.

  10. Relating Neuronal to Behavioral Performance: Variability of Optomotor Responses in the Blowfly

    PubMed Central

    Rosner, Ronny; Warzecha, Anne-Kathrin

    2011-01-01

    Behavioral responses of an animal vary even when they are elicited by the same stimulus. This variability is due to stochastic processes within the nervous system and to the changing internal states of the animal. To what extent does the variability of neuronal responses account for the overall variability at the behavioral level? To address this question we evaluate the neuronal variability at the output stage of the blowfly's (Calliphora vicina) visual system by recording from motion-sensitive interneurons mediating head optomotor responses. By means of a simple modelling approach representing the sensory-motor transformation, we predict head movements on the basis of the recorded responses of motion-sensitive neurons and compare the variability of the predicted head movements with that of the observed ones. Large gain changes of optomotor head movements have previously been shown to go along with changes in the animals' activity state. Our modelling approach substantiates that these gain changes are imposed downstream of the motion-sensitive neurons of the visual system. Moreover, since predicted head movements are clearly more reliable than those actually observed, we conclude that substantial variability is introduced downstream of the visual system. PMID:22066014

  11. Unleashing spatially distributed ecohydrology modeling using Big Data tools

    NASA Astrophysics Data System (ADS)

    Miles, B.; Idaszak, R.

    2015-12-01

    Physically based spatially distributed ecohydrology models are useful for answering science and management questions related to the hydrology and biogeochemistry of prairie, savanna, forested, as well as urbanized ecosystems. However, these models can produce hundreds of gigabytes of spatial output for a single model run over decadal time scales when run at regional spatial scales and moderate spatial resolutions (~100-km2+ at 30-m spatial resolution) or when run for small watersheds at high spatial resolutions (~1-km2 at 3-m spatial resolution). Numerical data formats such as HDF5 can store arbitrarily large datasets. However even in HPC environments, there are practical limits on the size of single files that can be stored and reliably backed up. Even when such large datasets can be stored, querying and analyzing these data can suffer from poor performance due to memory limitations and I/O bottlenecks, for example on single workstations where memory and bandwidth are limited, or in HPC environments where data are stored separately from computational nodes. The difficulty of storing and analyzing spatial data from ecohydrology models limits our ability to harness these powerful tools. Big Data tools such as distributed databases have the potential to surmount the data storage and analysis challenges inherent to large spatial datasets. Distributed databases solve these problems by storing data close to computational nodes while enabling horizontal scalability and fault tolerance. Here we present the architecture of and preliminary results from PatchDB, a distributed datastore for managing spatial output from the Regional Hydro-Ecological Simulation System (RHESSys). The initial version of PatchDB uses message queueing to asynchronously write RHESSys model output to an Apache Cassandra cluster. Once stored in the cluster, these data can be efficiently queried to quickly produce both spatial visualizations for a particular variable (e.g. maps and animations), as well as point time series of arbitrary variables at arbitrary points in space within a watershed or river basin. By treating ecohydrology modeling as a Big Data problem, we hope to provide a platform for answering transformative science and management questions related to water quantity and quality in a world of non-stationary climate.

  12. Multi-decadal Variability of the Wind Power Output

    NASA Astrophysics Data System (ADS)

    Kirchner Bossi, Nicolas; García-Herrera, Ricardo; Prieto, Luis; Trigo, Ricardo M.

    2014-05-01

    The knowledge of the long-term wind power variability is essential to provide a realistic outlook on the power output during the lifetime of a planned wind power project. In this work, the Power Output (Po) of a market wind turbine is simulated with a daily resolution for the period 1871-2009 at two different locations in Spain, one at the Central Iberian Plateau and another at the Gibraltar Strait Area. This is attained through a statistical downscaling of the daily wind conditions. It implements a Greedy Algorithm as classificator of a geostrophic-based wind predictor, which is derived by considering the SLP daily field from the 56 ensemble members of the longest homogeneous reanalysis available (20CR, 1871-2009). For calibration and validation purposes we use 10 years of wind observations (the predictand) at both sites. As a result, a series of 139 annual wind speed Probability Density Functions (PDF) are obtained, with a good performance in terms of wind speed uncertainty reduction (average daily wind speed MAE=1.48 m/s). The obtained centennial series allow to investigate the multi-decadal variability of wind power from different points of view. Significant periodicities around the 25-yr frequency band, as well as long-term linear trends are detected at both locations. In addition, a negative correlation is found between annual Po at both locations, evidencing the differences in the dynamical mechanisms ruling them (and possible complementary behavior). Furthermore, the impact that the three leading large-scale circulation patterns over Iberia (NAO, EA and SCAND) exert over wind power output is evaluated. Results show distinct (and non-stationary) couplings to these forcings depending on the geographical position and season or month. Moreover, significant non-stationary correlations are observed with the slow varying Atlantic Multidecadal Oscillation (AMO) index for both case studies. Finally, an empirical relationship is explored between the annual Po and the parameters of the Weibull PDF. This allowed us to derive a linear model to estimate the annual power output from those parameters, which results especially useful when no wind power data is available.

  13. Models of Acoustic Deception and ASW Support in a Task Group Operating Area

    DTIC Science & Technology

    1974-12-01

    submarine case is: PK+FK + PK+E0 exp (- t(PK +SV - PK + E0 PK + Eo ep -T + h-p(1 -K) + h (Il-Eo) where K = I 6o = I - (1-6o)ao Ko = 1 -ao The equation...Table B-I (Concluded) Mode l Program Model Program Parameter Variable Parameter Variable ORI: Output: T TO T T0 p R1O A All 0 SO A AL 0 S K SKO 0 5DO K...O = (1.0--0 )’BB 01110 SK = RHO#(1.0-S),’BB 01120 EX = 1.0 - EXP(-T/TT: 01130 PS(I) = 1.0 - AH*EX 01140 PA(I) = AL.EX/PS(I) 01150 PKO(1) = . KO *EX/PS(I

  14. A comparison of river discharge calculated by using a regional climate model output with different reanalysis datasets in 1980s and 1990s

    NASA Astrophysics Data System (ADS)

    Ma, X.; Yoshikane, T.; Hara, M.; Adachi, S. A.; Wakazuki, Y.; Kawase, H.; Kimura, F.

    2014-12-01

    To check the influence of boundary input data on a modeling result, we had a numerical investigation of river discharge by using runoff data derived by a regional climate model with a 4.5-km resolution as input data to a hydrological model. A hindcast experiment, which to reproduce the current climate was carried out for the two decades, 1980s and 1990s. We used the Advanced Research WRF (ARW) (ver. 3.2.1) with a two-way nesting technique and the WRF single-moment 6-class microphysics scheme. Noah-LSM is adopted to simulate the land surface process. The NCEP/NCAR and ERA-Interim 6-hourly reanalysis datasets were used as the lateral boundary condition for the runs, respectively. The output variables used for river discharge simulation from the WRF model were underground runoff and surface runoff. Four rivers (Mogami, Agano, Jinzu and Tone) were selected in this study. The results showed that the characteristic of river discharge in seasonal variation could be represented and there were overestimated compared with measured one.

  15. Uncertainty in Ecohydrological Modeling in an Arid Region Determined with Bayesian Methods

    PubMed Central

    Yang, Junjun; He, Zhibin; Du, Jun; Chen, Longfei; Zhu, Xi

    2016-01-01

    In arid regions, water resources are a key forcing factor in ecosystem circulation, and soil moisture is the critical link that constrains plant and animal life on the soil surface and underground. Simulation of soil moisture in arid ecosystems is inherently difficult due to high variability. We assessed the applicability of the process-oriented CoupModel for forecasting of soil water relations in arid regions. We used vertical soil moisture profiling for model calibration. We determined that model-structural uncertainty constituted the largest error; the model did not capture the extremes of low soil moisture in the desert-oasis ecotone (DOE), particularly below 40 cm soil depth. Our results showed that total uncertainty in soil moisture prediction was improved when input and output data, parameter value array, and structure errors were characterized explicitly. Bayesian analysis was applied with prior information to reduce uncertainty. The need to provide independent descriptions of uncertainty analysis (UA) in the input and output data was demonstrated. Application of soil moisture simulation in arid regions will be useful for dune-stabilization and revegetation efforts in the DOE. PMID:26963523

  16. Detection of "noisy" chaos in a time series

    NASA Technical Reports Server (NTRS)

    Chon, K. H.; Kanters, J. K.; Cohen, R. J.; Holstein-Rathlou, N. H.

    1997-01-01

    Time series from biological system often displays fluctuations in the measured variables. Much effort has been directed at determining whether this variability reflects deterministic chaos, or whether it is merely "noise". The output from most biological systems is probably the result of both the internal dynamics of the systems, and the input to the system from the surroundings. This implies that the system should be viewed as a mixed system with both stochastic and deterministic components. We present a method that appears to be useful in deciding whether determinism is present in a time series, and if this determinism has chaotic attributes. The method relies on fitting a nonlinear autoregressive model to the time series followed by an estimation of the characteristic exponents of the model over the observed probability distribution of states for the system. The method is tested by computer simulations, and applied to heart rate variability data.

  17. Prosthetic Leg Control in the Nullspace of Human Interaction.

    PubMed

    Gregg, Robert D; Martin, Anne E

    2016-07-01

    Recent work has extended the control method of virtual constraints, originally developed for autonomous walking robots, to powered prosthetic legs for lower-limb amputees. Virtual constraints define desired joint patterns as functions of a mechanical phasing variable, which are typically enforced by torque control laws that linearize the output dynamics associated with the virtual constraints. However, the output dynamics of a powered prosthetic leg generally depend on the human interaction forces, which must be measured and canceled by the feedback linearizing control law. This feedback requires expensive multi-axis load cells, and actively canceling the interaction forces may minimize the human's influence over the prosthesis. To address these limitations, this paper proposes a method for projecting virtual constraints into the nullspace of the human interaction terms in the output dynamics. The projected virtual constraints naturally render the output dynamics invariant with respect to the human interaction forces, which instead enter into the internal dynamics of the partially linearized prosthetic system. This method is illustrated with simulations of a transfemoral amputee model walking with a powered knee-ankle prosthesis that is controlled via virtual constraints with and without the proposed projection.

  18. Modeling seasonal variability of carbonate system parameters at the sediment -water interface in the Baltic Sea (Gdansk Deep)

    NASA Astrophysics Data System (ADS)

    Protsenko, Elizaveta; Yakubov, Shamil; Lessin, Gennady; Yakushev, Evgeniy; Sokołowski, Adam

    2017-04-01

    A one-dimensional fully-coupled benthic pelagic biogeochemical model BROM (Bottom RedOx Model) was used for simulations of seasonal variability of biogeochemical parameters in the upper sediment, Bottom Boundary Layer and the water column in the Gdansk Deep of the Baltic Sea. This model represents key biogeochemical processes of transformation of C, N, P, Si, O, S, Mn, Fe and the processes of vertical transport in the water column and the sediments. The hydrophysical block of BROM was forced by the output calculated with model GETM (General Estuarine Transport Model). In this study we focused on parameters of carbonate system at Baltic Sea, and mainly on their distributions near the sea-water interface. For validating of BROM we used field data (concentrations of main nutrients at water column and porewater of upper sediment) from the Gulf of Gdansk. The model allowed us to simulate the baseline ranges of seasonal variability of pH, Alkalinity, TIC and calcite/aragonite saturation as well as vertical fluxes of carbon in a region potentially selected for the CCS storage. This work was supported by project EEA CO2MARINE and STEMM-CCS.

  19. Learning Physics-based Models in Hydrology under the Framework of Generative Adversarial Networks

    NASA Astrophysics Data System (ADS)

    Karpatne, A.; Kumar, V.

    2017-12-01

    Generative adversarial networks (GANs), that have been highly successful in a number of applications involving large volumes of labeled and unlabeled data such as computer vision, offer huge potential for modeling the dynamics of physical processes that have been traditionally studied using simulations of physics-based models. While conventional physics-based models use labeled samples of input/output variables for model calibration (estimating the right parametric forms of relationships between variables) or data assimilation (identifying the most likely sequence of system states in dynamical systems), there is a greater opportunity to explore the full power of machine learning (ML) methods (e.g, GANs) for studying physical processes currently suffering from large knowledge gaps, e.g. ground-water flow. However, success in this endeavor requires a principled way of combining the strengths of ML methods with physics-based numerical models that are founded on a wealth of scientific knowledge. This is especially important in scientific domains like hydrology where the number of data samples is small (relative to Internet-scale applications such as image recognition where machine learning methods has found great success), and the physical relationships are complex (high-dimensional) and non-stationary. We will present a series of methods for guiding the learning of GANs using physics-based models, e.g., by using the outputs of physics-based models as input data to the generator-learner framework, and by using physics-based models as generators trained using validation data in the adversarial learning framework. These methods are being developed under the broad paradigm of theory-guided data science that we are developing to integrate scientific knowledge with data science methods for accelerating scientific discovery.

  20. Application of the Artificial Neural Network model for prediction of monthly Standardized Precipitation and Evapotranspiration Index using hydrometeorological parameters and climate indices in eastern Australia

    NASA Astrophysics Data System (ADS)

    Deo, Ravinesh C.; Şahin, Mehmet

    2015-07-01

    The forecasting of drought based on cumulative influence of rainfall, temperature and evaporation is greatly beneficial for mitigating adverse consequences on water-sensitive sectors such as agriculture, ecosystems, wildlife, tourism, recreation, crop health and hydrologic engineering. Predictive models of drought indices help in assessing water scarcity situations, drought identification and severity characterization. In this paper, we tested the feasibility of the Artificial Neural Network (ANN) as a data-driven model for predicting the monthly Standardized Precipitation and Evapotranspiration Index (SPEI) for eight candidate stations in eastern Australia using predictive variable data from 1915 to 2005 (training) and simulated data for the period 2006-2012. The predictive variables were: monthly rainfall totals, mean temperature, minimum temperature, maximum temperature and evapotranspiration, which were supplemented by large-scale climate indices (Southern Oscillation Index, Pacific Decadal Oscillation, Southern Annular Mode and Indian Ocean Dipole) and the Sea Surface Temperatures (Nino 3.0, 3.4 and 4.0). A total of 30 ANN models were developed with 3-layer ANN networks. To determine the best combination of learning algorithms, hidden transfer and output functions of the optimum model, the Levenberg-Marquardt and Broyden-Fletcher-Goldfarb-Shanno (BFGS) quasi-Newton backpropagation algorithms were utilized to train the network, tangent and logarithmic sigmoid equations used as the activation functions and the linear, logarithmic and tangent sigmoid equations used as the output function. The best ANN architecture had 18 input neurons, 43 hidden neurons and 1 output neuron, trained using the Levenberg-Marquardt learning algorithm using tangent sigmoid equation as the activation and output functions. An evaluation of the model performance based on statistical rules yielded time-averaged Coefficient of Determination, Root Mean Squared Error and the Mean Absolute Error ranging from 0.9945-0.9990, 0.0466-0.1117, and 0.0013-0.0130, respectively for individual stations. Also, the Willmott's Index of Agreement and the Nash-Sutcliffe Coefficient of Efficiency were between 0.932-0.959 and 0.977-0.998, respectively. When checked for the severity (S), duration (D) and peak intensity (I) of drought events determined from the simulated and observed SPEI, differences in drought parameters ranged from - 1.41-0.64%, - 2.17-1.92% and - 3.21-1.21%, respectively. Based on performance evaluation measures, we aver that the Artificial Neural Network model is a useful data-driven tool for forecasting monthly SPEI and its drought-related properties in the region of study.

Top