Sample records for input variable selection

  1. Input variable selection for data-driven models of Coriolis flowmeters for two-phase flow measurement

    NASA Astrophysics Data System (ADS)

    Wang, Lijuan; Yan, Yong; Wang, Xue; Wang, Tao

    2017-03-01

    Input variable selection is an essential step in the development of data-driven models for environmental, biological and industrial applications. By eliminating irrelevant or redundant variables, input variable selection identifies a suitable subset of variables as the input of a model; it also simplifies the model structure and improves computational efficiency. This paper describes the procedures of input variable selection for data-driven models for the measurement of liquid mass flowrate and gas volume fraction under two-phase flow conditions using Coriolis flowmeters. Three advanced input variable selection methods, including partial mutual information (PMI), genetic algorithm-artificial neural network (GA-ANN) and tree-based iterative input selection (IIS), are applied in this study. Typical data-driven models incorporating support vector machine (SVM) are established individually based on the input candidates resulting from the selection methods. The validity of the selection outcomes is assessed through an output performance comparison of the SVM based data-driven models and sensitivity analysis. The validation and analysis results suggest that the input variables selected from the PMI algorithm provide more effective information for the models to measure liquid mass flowrate, while the IIS algorithm provides fewer but more effective variables for the models to predict gas volume fraction.
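
    To make the selection loop concrete, here is a minimal sketch of the greedy, residual-based selection that PMI-style methods perform: at each step the candidate sharing the most information with the as-yet-unexplained part of the target is added. sklearn's mutual information estimator stands in for the paper's PMI statistic, and the data and variable counts are synthetic.

    ```python
    # Greedy input selection in the spirit of partial mutual information (PMI).
    # sklearn's MI estimator on the model residual stands in for the PMI
    # statistic, which conditions on the already-selected inputs.
    import numpy as np
    from sklearn.feature_selection import mutual_info_regression
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 8))                  # candidate sensor features (synthetic)
    y = X[:, 0] + np.sin(X[:, 2]) + 0.1 * rng.normal(size=500)

    selected, remaining = [], list(range(X.shape[1]))
    residual = y.copy()
    for _ in range(3):                             # keep three inputs
        mi = mutual_info_regression(X[:, remaining], residual, random_state=0)
        best = remaining[int(np.argmax(mi))]
        selected.append(best)
        remaining.remove(best)
        # regress y on the chosen inputs; the residual drives the next round
        model = LinearRegression().fit(X[:, selected], y)
        residual = y - model.predict(X[:, selected])
    print("selected inputs:", selected)
    ```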

  2. Input variable selection and calibration data selection for storm water quality regression models.

    PubMed

    Sun, Siao; Bertrand-Krajewski, Jean-Luc

    2013-01-01

    Storm water quality models are useful tools in storm water management. Interest has been growing in analyzing existing data for developing models for urban storm water quality evaluations. It is important to select appropriate model inputs when many candidate explanatory variables are available. Model calibration and verification are essential steps in any storm water quality modeling. This study investigates input variable selection and calibration data selection in storm water quality regression models. The two selection problems interact with each other, and a procedure is developed to fulfil the two selection tasks in sequence. The procedure first selects model input variables using a cross validation method. An appropriate number of variables is identified as model inputs to ensure that a model is neither overfitted nor underfitted. Based on the model input selection results, calibration data selection is studied. Uncertainty of model performance due to calibration data selection is investigated with a random selection method. A cluster-based approach is applied to enhance model calibration practice, based on the principle of selecting representative data for calibration. The comparison between results from the cluster selection method and random selection shows that the former can significantly improve the performance of calibrated models. It is found that the information content of calibration data is important in addition to its size.
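
    A hedged sketch of the first step, forward input selection scored by cross-validation, is shown below: inputs are added while the CV score keeps improving, which is one simple way to land between over- and underfitting. The storm-water-like data and the linear model are invented for illustration.

    ```python
    # Forward input selection scored by 5-fold cross-validation. Selection
    # stops when adding another input no longer improves the CV score.
    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(1)
    X = rng.normal(size=(200, 10))                 # candidate explanatory variables
    y = 2 * X[:, 1] - X[:, 4] + 0.2 * rng.normal(size=200)

    selected, remaining, best_score = [], list(range(X.shape[1])), -np.inf
    while remaining:
        scores = [(np.mean(cross_val_score(LinearRegression(),
                                           X[:, selected + [j]], y, cv=5)), j)
                  for j in remaining]
        score, j = max(scores)
        if score <= best_score:                    # more inputs stop helping
            break
        best_score = score
        selected.append(j)
        remaining.remove(j)
    print("inputs:", selected, "CV R^2: %.3f" % best_score)
    ```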

  3. Selection of relevant input variables in storm water quality modeling by multiobjective evolutionary polynomial regression paradigm

    NASA Astrophysics Data System (ADS)

    Creaco, E.; Berardi, L.; Sun, Siao; Giustolisi, O.; Savic, D.

    2016-04-01

    The growing availability of field data, from information and communication technologies (ICTs) in "smart" urban infrastructures, allows data modeling to understand complex phenomena and to support management decisions. Among the analyzed phenomena, those related to storm water quality modeling have recently been gaining interest in the scientific literature. Nonetheless, the large amount of available data poses the problem of selecting relevant variables to describe a phenomenon and enable robust data modeling. This paper presents a procedure for the selection of relevant input variables using the multiobjective evolutionary polynomial regression (EPR-MOGA) paradigm. The procedure is based on scrutinizing the explanatory variables that appear inside the set of EPR-MOGA symbolic model expressions of increasing complexity and goodness of fit to target output. The strategy also enables the selection to be validated by engineering judgement. In such context, the multiple case study extension of EPR-MOGA, called MCS-EPR-MOGA, is adopted. The application of the proposed procedure to modeling storm water quality parameters in two French catchments shows that it was able to significantly reduce the number of explanatory variables for successive analyses. Finally, the EPR-MOGA models obtained after the input selection are compared with those obtained by using the same technique without benefitting from input selection and with those obtained in previous works where other data-modeling techniques were used on the same data. The comparison highlights the effectiveness of both EPR-MOGA and the input selection procedure.

  4. Applications of information theory, genetic algorithms, and neural models to predict oil flow

    NASA Astrophysics Data System (ADS)

    Ludwig, Oswaldo; Nunes, Urbano; Araújo, Rui; Schnitman, Leizer; Lepikson, Herman Augusto

    2009-07-01

    This work introduces a new information-theoretic methodology for choosing variables and their time lags in a prediction setting, particularly when neural networks are used in non-linear modeling. The first contribution of this work is the Cross Entropy Function (XEF), proposed to select input variables and their lags in order to compose the input vector of black-box prediction models. The proposed XEF method is more appropriate than the usually applied Cross Correlation Function (XCF) when the relationship between the input and output signals comes from a non-linear dynamic system. The second contribution is a method that minimizes the Joint Conditional Entropy (JCE) between the input and output variables by means of a Genetic Algorithm (GA). The aim is to take into account the dependence among the input variables when selecting the most appropriate set of inputs for a prediction problem. In short, these methods can be used to assist the selection of input training data that have the necessary information to predict the target data. The proposed methods are applied to a petroleum engineering problem: predicting oil production. Experimental results obtained with a real-world dataset are presented, demonstrating the feasibility and effectiveness of the methods.
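
    The lag-selection idea can be sketched as follows: score each candidate input lag by an information measure against the output rather than by linear cross-correlation. sklearn's mutual information estimator is used here as a stand-in for the paper's cross-entropy function, and the input-output series is synthetic with a known lag of 3.

    ```python
    # Score candidate input lags by mutual information (a stand-in for the
    # XEF); a linear cross-correlation would miss the tanh nonlinearity.
    import numpy as np
    from sklearn.feature_selection import mutual_info_regression

    rng = np.random.default_rng(2)
    n = 1000
    u = rng.normal(size=n)                         # exogenous input signal
    y = np.zeros(n)
    for t in range(3, n):                          # output depends on lag 3 of u
        y[t] = 0.5 * y[t - 1] + np.tanh(u[t - 3]) + 0.05 * rng.normal()

    max_lag = 10
    lags = np.arange(1, max_lag + 1)
    X = np.column_stack([u[max_lag - k:n - k] for k in lags])
    mi = mutual_info_regression(X, y[max_lag:], random_state=0)
    print("most informative input lag:", lags[int(np.argmax(mi))])  # expect 3
    ```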

  5. Attributing uncertainty in streamflow simulations due to variable inputs via the Quantile Flow Deviation metric

    NASA Astrophysics Data System (ADS)

    Shoaib, Syed Abu; Marshall, Lucy; Sharma, Ashish

    2018-06-01

    Every model used to characterise a real-world process is affected by uncertainty. Selecting a suitable model is a vital aspect of engineering planning and design. Observation or input errors make the prediction of modelled responses more uncertain. By way of a recently developed attribution metric, this study aims to develop a method for analysing variability in model inputs together with model structure variability to quantify their relative contributions in typical hydrological modelling applications. The Quantile Flow Deviation (QFD) metric is used to assess these alternate sources of uncertainty. The Australian Water Availability Project (AWAP) precipitation data for four different Australian catchments are used to analyse the impact of spatial rainfall variability on simulated streamflow variability via the QFD. The QFD metric attributes the variability in flow ensembles to uncertainty associated with the selection of a model structure and input time series. For the case study catchments, the relative contribution of input uncertainty due to rainfall is higher than that due to potential evapotranspiration, and overall input uncertainty is significant compared to model structure and parameter uncertainty. Overall, this study investigates the propagation of input uncertainty in a daily streamflow modelling scenario and demonstrates how input errors manifest across different streamflow magnitudes.

  6. Harmonize input selection for sediment transport prediction

    NASA Astrophysics Data System (ADS)

    Afan, Haitham Abdulmohsin; Keshtegar, Behrooz; Mohtar, Wan Hanna Melini Wan; El-Shafie, Ahmed

    2017-09-01

    In this paper, three modeling approaches using a Neural Network (NN), the Response Surface Method (RSM) and a response surface method based on Global Harmony Search (GHS) are applied to predict the daily time series suspended sediment load. In the NN and RSM approaches, the input variables for forecasting the suspended sediment load are generally selected manually, based on the maximum correlations of the input variables. Here, the RSM is improved by selecting the input variables using the error terms of the training data based on GHS, yielding the response surface method with global harmony search (RSM-GHS) modeling method. A second-order polynomial function with cross terms is applied to calibrate the time series suspended sediment load with three, four and five input variables in the proposed RSM-GHS. The linear, square and cross terms of twenty input variables of antecedent values of suspended sediment load and water discharge are investigated to achieve the best predictions of the RSM based on the GHS method. The performances of the NN, RSM and proposed RSM-GHS, including both accuracy and simplicity, are compared through several comparative prediction and error statistics. The results illustrate that the proposed RSM-GHS is as uncomplicated as the RSM but performs better, with fewer errors and better correlation (R = 0.95, MAE = 18.09 ton/day, RMSE = 25.16 ton/day) compared to the ANN (R = 0.91, MAE = 20.17 ton/day, RMSE = 33.09 ton/day) and RSM (R = 0.91, MAE = 20.06 ton/day, RMSE = 31.92 ton/day) for all types of input variables.
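
    The surrogate at the core of the approach, a second-order polynomial with linear, square and cross terms, is easy to sketch. The harmony-search step that picks the error-minimising input subset is replaced here by an exhaustive scan over three-variable subsets, and the data are synthetic.

    ```python
    # Second-order polynomial response surface (linear + square + cross terms)
    # fitted to every 3-variable input subset; the best subset is reported.
    import itertools
    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.preprocessing import PolynomialFeatures

    rng = np.random.default_rng(3)
    X = rng.normal(size=(300, 6))                  # candidate lagged inputs (synthetic)
    y = X[:, 0] ** 2 + X[:, 1] * X[:, 3] + 0.1 * rng.normal(size=300)

    poly = PolynomialFeatures(degree=2, include_bias=False)
    best_r2, best_subset = -np.inf, None
    for subset in itertools.combinations(range(X.shape[1]), 3):
        Z = poly.fit_transform(X[:, list(subset)])
        r2 = LinearRegression().fit(Z, y).score(Z, y)
        if r2 > best_r2:
            best_r2, best_subset = r2, subset
    print("best 3-input subset:", best_subset, "R^2 = %.3f" % best_r2)
    ```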

  7. Multiple-input multiple-output causal strategies for gene selection.

    PubMed

    Bontempi, Gianluca; Haibe-Kains, Benjamin; Desmedt, Christine; Sotiriou, Christos; Quackenbush, John

    2011-11-25

    Traditional strategies for selecting variables in high dimensional classification problems aim to find sets of maximally relevant variables able to explain the target variations. While these techniques may be effective in terms of generalization accuracy, they often do not reveal direct causes. The latter is essentially related to the fact that high correlation (or relevance) does not imply causation. In this study, we show how to efficiently incorporate causal information into gene selection by moving from a single-input single-output to a multiple-input multiple-output setting. We show in a synthetic case study that a better prioritization of causal variables can be obtained by considering a relevance score which incorporates a causal term. In addition we show, in a meta-analysis study of six publicly available breast cancer microarray datasets, that the improvement also occurs in terms of accuracy. The biological interpretation of the results confirms the potential of a causal approach to gene selection. Integrating causal information into gene selection algorithms is effective both in terms of prediction accuracy and biological interpretation.

  8. A soft computing based approach using modified selection strategy for feature reduction of medical systems.

    PubMed

    Zuhtuogullari, Kursat; Allahverdi, Novruz; Arikan, Nihat

    2013-01-01

    Systems with high-dimensional input spaces require long processing times and large memory usage. Most attribute selection algorithms suffer from limits on input dimensionality and from information storage problems. These problems are eliminated by the feature reduction software developed here, which uses a new modified selection mechanism that adds solution candidates from the middle region. The hybrid system software is constructed for reducing the input attributes of systems with a large number of input variables. The designed software also supports the roulette wheel selection mechanism. Linear order crossover is used as the recombination operator. In genetic algorithm based soft computing methods, locking into local solutions is also a problem, which the developed software eliminates. Faster and more effective results are obtained in the test procedures. Twelve input variables of the urological system have been reduced to reducts (reduced input attributes) with seven, six, and five elements. The results show that, on the urological test data, the developed software with modified selection has advantages in memory allocation, execution time, classification accuracy, sensitivity, and specificity when compared with other reduction algorithms.
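
    Of the GA operators the record names, roulette wheel selection is the simplest to show in isolation: parents are drawn with probability proportional to fitness. The sketch below uses 12-bit feature masks to echo the twelve urological inputs; the fitness function and encoding are invented, not the paper's.

    ```python
    # Roulette wheel (fitness-proportionate) selection over feature masks.
    import numpy as np

    def roulette_wheel(population, fitness, rng):
        """Draw len(population) parents, proportional to fitness."""
        p = np.asarray(fitness, dtype=float)
        p = p / p.sum()
        idx = rng.choice(len(population), size=len(population), p=p)
        return [population[i] for i in idx]

    rng = np.random.default_rng(4)
    pop = [rng.integers(0, 2, size=12) for _ in range(6)]   # 12-bit feature masks
    fit = [mask.sum() + 1 for mask in pop]                  # toy fitness only
    parents = roulette_wheel(pop, fit, rng)
    ```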

  9. A Soft Computing Based Approach Using Modified Selection Strategy for Feature Reduction of Medical Systems

    PubMed Central

    Zuhtuogullari, Kursat; Allahverdi, Novruz; Arikan, Nihat

    2013-01-01

    Systems with high-dimensional input spaces require long processing times and large memory usage. Most attribute selection algorithms suffer from limits on input dimensionality and from information storage problems. These problems are eliminated by the feature reduction software developed here, which uses a new modified selection mechanism that adds solution candidates from the middle region. The hybrid system software is constructed for reducing the input attributes of systems with a large number of input variables. The designed software also supports the roulette wheel selection mechanism. Linear order crossover is used as the recombination operator. In genetic algorithm based soft computing methods, locking into local solutions is also a problem, which the developed software eliminates. Faster and more effective results are obtained in the test procedures. Twelve input variables of the urological system have been reduced to reducts (reduced input attributes) with seven, six, and five elements. The results show that, on the urological test data, the developed software with modified selection has advantages in memory allocation, execution time, classification accuracy, sensitivity, and specificity when compared with other reduction algorithms. PMID:23573172

  10. MULTIPLIER CIRCUIT

    DOEpatents

    Thomas, R.E.

    1959-01-20

    An electronic circuit is presented for automatically computing the product of two selected variables by multiplying the voltage pulses proportional to the variables. The multiplier circuit has a plurality of parallel resistors of predetermined values connected through separate gate circuits between a first input and the output terminal. One voltage pulse is applied to the first input while the second voltage pulse is applied to control circuitry for the respective gate circuits. The magnitude of the second voltage pulse selects the resistors upon which the first voltage pulse is impressed, whereby the resultant output voltage is proportional to the product of the input voltage pulses.

  11. Artificial neural network model for ozone concentration estimation and Monte Carlo analysis

    NASA Astrophysics Data System (ADS)

    Gao, Meng; Yin, Liting; Ning, Jicai

    2018-07-01

    Air pollution in the urban atmosphere directly affects public health; it is therefore essential to predict air pollutant concentrations. Air quality is a complex function of emissions, meteorology and topography, and artificial neural networks (ANNs) provide a sound framework for relating these variables. In this study, we investigated the feasibility of using an ANN model with meteorological parameters as input variables to predict the ozone concentration in the urban area of Jinan, a metropolis in Northern China. We first found that the architecture of the network of neurons had little effect on the predicting capability of the ANN model. A parsimonious ANN model with 6 routinely monitored meteorological parameters and one temporal covariate (the category of day, i.e. working day, legal holiday or regular weekend) as input variables was identified, where the 7 input variables were selected following a forward selection procedure. Compared with the benchmarking ANN model with 9 meteorological and photochemical parameters as input variables, the predicting capability of the parsimonious ANN model was acceptable. Its predicting capability was also verified in terms of the warning success ratio during pollution episodes. Finally, uncertainty and sensitivity analyses were performed based on Monte Carlo simulations (MCS). It was concluded that the ANN could properly predict the ambient ozone level. Maximum temperature, atmospheric pressure, sunshine duration and maximum wind speed were identified as the predominant input variables significantly influencing the prediction of ambient ozone concentrations.
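
    The Monte Carlo part of the study can be sketched generically: perturb the selected meteorological inputs within assumed error distributions and inspect the spread of the network's predictions. The model below is a small sklearn MLP on synthetic data, and the input error magnitudes are hypothetical.

    ```python
    # Monte Carlo propagation of input uncertainty through a trained network.
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(5)
    X = rng.normal(size=(400, 7))                    # 7 selected input variables
    y = X[:, 0] + 0.5 * X[:, 1] ** 2 + 0.1 * rng.normal(size=400)
    model = MLPRegressor(hidden_layer_sizes=(8,), max_iter=2000,
                         random_state=0).fit(X, y)

    x0 = X[0]                                        # one observed condition
    sigma = 0.1 * X.std(axis=0)                      # assumed input errors (hypothetical)
    draws = x0 + rng.normal(size=(1000, 7)) * sigma  # Monte Carlo input samples
    pred = model.predict(draws)
    print("prediction %.2f +/- %.2f" % (pred.mean(), pred.std()))
    ```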

  12. The extraction of simple relationships in growth factor-specific multiple-input and multiple-output systems in cell-fate decisions by backward elimination PLS regression.

    PubMed

    Akimoto, Yuki; Yugi, Katsuyuki; Uda, Shinsuke; Kudo, Takamasa; Komori, Yasunori; Kubota, Hiroyuki; Kuroda, Shinya

    2013-01-01

    Cells use common signaling molecules for the selective control of downstream gene expression and cell-fate decisions. The relationship between signaling molecules and downstream gene expression and cellular phenotypes is a multiple-input and multiple-output (MIMO) system and is difficult to understand due to its complexity. For example, it has been reported that, in PC12 cells, different types of growth factors activate MAP kinases (MAPKs), including ERK, JNK, and p38, and CREB, for selective protein expression of immediate early genes (IEGs) such as c-FOS, c-JUN, EGR1, JUNB, and FOSB, leading to cell differentiation, proliferation and cell death; however, how multiple inputs such as MAPKs and CREB regulate multiple outputs such as IEG expression and cellular phenotypes remains unclear. To address this issue, we employed a statistical method called partial least squares (PLS) regression, which involves a reduction of the dimensionality of the inputs and outputs into latent variables and a linear regression between these latent variables. We measured 1,200 data points for MAPKs and CREB as the inputs and 1,900 data points for IEGs and cellular phenotypes as the outputs, and we constructed the PLS model from these data. The PLS model highlighted the complexity of the MIMO system and the growth factor-specific input-output relationships of cell-fate decisions in PC12 cells. Furthermore, to reduce the complexity, we applied a backward elimination method to the PLS regression, in which 60 input variables were reduced to 5 variables, including the phosphorylation of ERK at 10 min, CREB at 5 min and 60 min, AKT at 5 min and JNK at 30 min. The simple PLS model with only 5 input variables demonstrated a predictive ability comparable to that of the full PLS model. The 5 input variables effectively extracted the growth factor-specific simple relationships within the MIMO system in cell-fate decisions in PC12 cells.
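
    Backward elimination around a PLS regression can be outlined in a few lines: repeatedly drop the input whose removal least degrades the cross-validated fit until a target count remains. The sketch below uses sklearn's PLSRegression on synthetic multi-output data and stops at five inputs to mirror the paper's final count; it is not the authors' exact elimination criterion.

    ```python
    # Backward elimination wrapped around PLS regression (cross-validated R^2).
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(6)
    X = rng.normal(size=(120, 12))                 # e.g. signalling inputs x time points
    Y = X[:, :2] @ rng.normal(size=(2, 3)) + 0.1 * rng.normal(size=(120, 3))

    keep = list(range(X.shape[1]))
    while len(keep) > 5:                           # stop at 5 inputs
        scores = [(np.mean(cross_val_score(PLSRegression(n_components=2),
                                           X[:, [k for k in keep if k != j]],
                                           Y, cv=5)), j)
                  for j in keep]
        best_score, drop = max(scores)             # removing `drop` hurts least
        keep.remove(drop)
    print("surviving inputs:", keep)
    ```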

  13. Applying an intelligent model and sensitivity analysis to inspect mass transfer kinetics, shrinkage and crust color changes of deep-fat fried ostrich meat cubes.

    PubMed

    Amiryousefi, Mohammad Reza; Mohebbi, Mohebbat; Khodaiyan, Faramarz

    2014-01-01

    The objectives of this study were to use image analysis and an artificial neural network (ANN) to predict mass transfer kinetics as well as color changes and shrinkage of deep-fat fried ostrich meat cubes. Two generalized feedforward networks were developed separately, using the operating conditions as inputs. High correlation coefficients between the experimental and predicted values indicated proper fitting. Sensitivity analysis of the selected ANNs showed that, among the input variables, frying temperature had the greatest influence on moisture content (MC) and fat content (FC). Similarly, for the second ANN architecture, microwave power density was the most influential variable, having the maximum influence on both shrinkage percentage and color changes. Copyright © 2013 Elsevier Ltd. All rights reserved.

  14. Hybrid robust model based on an improved functional link neural network integrating with partial least square (IFLNN-PLS) and its application to predicting key process variables.

    PubMed

    He, Yan-Lin; Xu, Yuan; Geng, Zhi-Qiang; Zhu, Qun-Xiong

    2016-03-01

    In this paper, a hybrid robust model based on an improved functional link neural network integrated with partial least squares (IFLNN-PLS) is proposed. Firstly, an improved functional link neural network with a small norm of expanded weights and high input-output correlation (SNEWHIOC-FLNN) is proposed to enhance the generalization performance of the FLNN. Unlike in the traditional FLNN, the expanded variables of the original inputs are not directly used as the inputs in the proposed SNEWHIOC-FLNN model; instead, the original inputs are attached to expanded weights of small norm. As a result, the correlation coefficient between some of the expanded variables and the outputs is enhanced. The larger the correlation coefficient is, the more relevant the expanded variable tends to be. In the end, the expanded variables with larger correlation coefficients are selected as the inputs to improve the performance of the traditional FLNN. To test the proposed SNEWHIOC-FLNN model, three UCI (University of California, Irvine) regression datasets named Housing, Concrete Compressive Strength (CCS), and Yacht Hydro Dynamics (YHD) were selected. Then a hybrid model based on the improved FLNN integrated with partial least squares (IFLNN-PLS) was built. In the IFLNN-PLS model, the connection weights are calculated using the partial least squares method rather than the error back propagation algorithm. Lastly, IFLNN-PLS was developed as an intelligent measurement model for accurately predicting the key variables in the Purified Terephthalic Acid (PTA) process and the High Density Polyethylene (HDPE) process. Simulation results illustrate that the IFLNN-PLS can significantly improve the prediction performance. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.

  15. Analyses of the most influential factors for vibration monitoring of planetary power transmissions in pellet mills by adaptive neuro-fuzzy technique

    NASA Astrophysics Data System (ADS)

    Milovančević, Miloš; Nikolić, Vlastimir; Anđelković, Boban

    2017-01-01

    Vibration-based structural health monitoring is widely recognized as an attractive strategy for early damage detection in civil structures. Vibration monitoring and prediction are important for any system, since they can avert many unpredictable behaviors of the system. If vibration monitoring is properly managed, it can ensure economic and safe operation. Potential for further improvement of vibration monitoring lies in the improvement of current control strategies. One option is the introduction of model predictive control. Multistep-ahead predictive models of vibration are a starting point for creating a successful model predictive strategy. For the purpose of this article, predictive models are created for vibration monitoring of planetary power transmissions in pellet mills. The models were developed using a novel method based on ANFIS (adaptive neuro fuzzy inference system). The aim of this study is to investigate the potential of ANFIS for selecting the most relevant variables for predictive models of vibration monitoring of pellet mill power transmissions. The vibration data are collected by PIC (Programmable Interface Controller) microcontrollers. The goal of the predictive vibration monitoring of planetary power transmissions in pellet mills is to indicate deterioration in the vibration of the power transmissions before the actual failure occurs. The ANFIS process for variable selection was implemented in order to detect the predominant variables affecting the prediction of vibration monitoring. It was also used to select the minimal input subset of variables from the initial set of input variables - current and lagged variables (up to 11 steps) of vibration. The obtained results could be used to simplify predictive methods so as to avoid multiple input variables; models with fewer inputs are preferable because they reduce overfitting between training and testing data. While the obtained results are promising, further work is required in order to obtain results that could be directly applied in practice.

  16. A users' manual for MCPRAM (Monte Carlo PReprocessor for AMEER) and for the fuze options in AMEER (Aero Mechanical Equation Evaluation Routines)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    LaFarge, R.A.

    1990-05-01

    MCPRAM (Monte Carlo PReprocessor for AMEER), a computer program that uses Monte Carlo techniques to create an input file for the AMEER trajectory code, has been developed for the Sandia National Laboratories VAX and Cray computers. Users can select the number of trajectories to compute, which AMEER variables to investigate, and the type of probability distribution for each variable. Any legal AMEER input variable can be investigated anywhere in the input run stream with either a normal, uniform, or Rayleigh distribution. Users also have the option to use covariance matrices for the investigation of certain correlated variables such as booster pre-reentry errors and wind, axial force, and atmospheric models. In conjunction with MCPRAM, AMEER was modified to include the variables introduced by the covariance matrices and to include provisions for six types of fuze models. The new fuze models and the new AMEER variables are described in this report.
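
    What such a Monte Carlo preprocessor does can be sketched generically: draw each selected input variable from its assigned distribution and emit one record per trajectory. The variable names and output format below are invented; only the three distribution types are taken from the record.

    ```python
    # Draw Monte Carlo input records from per-variable distributions
    # (normal, uniform or Rayleigh, as the record names them).
    import numpy as np

    rng = np.random.default_rng(7)
    spec = {                                   # hypothetical input variables
        "mass":  ("normal",   {"loc": 100.0, "scale": 2.0}),
        "alpha": ("uniform",  {"low": -1.0, "high": 1.0}),
        "wind":  ("rayleigh", {"scale": 5.0}),
    }
    samplers = {"normal": rng.normal, "uniform": rng.uniform,
                "rayleigh": rng.rayleigh}
    for i in range(4):                         # four trajectories
        record = {name: samplers[kind](**kw) for name, (kind, kw) in spec.items()}
        print(f"trajectory {i}: " +
              ", ".join(f"{k}={v:.3f}" for k, v in record.items()))
    ```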

  17. Creating a non-linear total sediment load formula using polynomial best subset regression model

    NASA Astrophysics Data System (ADS)

    Okcu, Davut; Pektas, Ali Osman; Uyumaz, Ali

    2016-08-01

    The aim of this study is to derive a new total sediment load formula which is more accurate and has fewer application constraints than the well-known formulae in the literature. The five best-known stream power concept sediment formulae approved by ASCE are used for benchmarking on a wide range of datasets that includes both field and flume (lab) observations. The dimensionless parameters of these widely used formulae are used as inputs in a new regression approach, called Polynomial Best Subset Regression (PBSR) analysis. The aim of the PBSR analysis is to fit and test all possible combinations of the input variables and select the best subset. All input variables, together with their second and third powers, are included in the regression to test the possible relations between the explanatory variables and the dependent variable. A multistep approach based on significance values and the degree of multicollinearity among inputs is used to select the best subset. The new formula is compared to the others on a holdout dataset, and detailed performance investigations are conducted for the field and lab datasets within this holdout data. Different goodness-of-fit statistics are used, as they represent different perspectives on model accuracy. After these detailed comparisons, the most accurate equation applicable to both flume and river data was identified. On the field dataset in particular, the proposed formula outperformed the benchmark formulations.

  18. Collective feature selection to identify crucial epistatic variants.

    PubMed

    Verma, Shefali S; Lucas, Anastasia; Zhang, Xinyuan; Veturi, Yogasudha; Dudek, Scott; Li, Binglan; Li, Ruowang; Urbanowicz, Ryan; Moore, Jason H; Kim, Dokyoon; Ritchie, Marylyn D

    2018-01-01

    Machine learning methods have gained popularity and practicality in identifying linear and non-linear effects of variants associated with complex diseases/traits. Detection of epistatic interactions still remains a challenge due to the large number of features and relatively small sample size as input, leading to the so-called "short fat data" problem. The efficiency of machine learning methods can be increased by limiting the number of input features. Thus, it is very important to perform variable selection before searching for epistasis. Many methods have been evaluated and proposed to perform feature selection, but no single method works best in all scenarios. We demonstrate this by conducting two separate simulation analyses to evaluate the proposed collective feature selection approach. Through our simulation study, we propose a collective feature selection approach that selects features in the "union" of the best performing methods. We explored various parametric, non-parametric, and data mining approaches to perform feature selection. We chose our top performing methods and selected the union of the resulting variables, based on a user-defined percentage of variants selected from each method, to take to downstream analysis. Our simulation analysis shows that non-parametric data mining approaches, such as MDR, may work best under one simulation criterion for the high effect size (penetrance) datasets, while non-parametric methods designed for feature selection, such as Ranger and Gradient boosting, work best under other simulation criteria. Thus, using a collective approach proves to be more beneficial for selecting variables with epistatic effects, including in low effect size datasets and across different genetic architectures. Following this, we applied our proposed collective feature selection approach to select the top 1% of variables to identify potential interacting variables associated with Body Mass Index (BMI) in ~ 44,000 samples obtained from Geisinger's MyCode Community Health Initiative (on behalf of the DiscovEHR collaboration). Through simulation studies, we were able to show that selecting variables using a collective feature selection approach helps in selecting true positive epistatic variables more frequently than applying any single feature selection method. We were able to demonstrate the effectiveness of collective feature selection along with a comparison of many methods in our simulation analysis. We also applied our method to identify non-linear networks associated with obesity.
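
    The union step itself is simple to sketch: run several selectors, keep a user-defined top slice from each, and carry the union forward. The three scorers below are off-the-shelf stand-ins for the paper's method pool, on synthetic data.

    ```python
    # Collective feature selection: union of the top-k features from
    # several independent selection methods.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
    from sklearn.feature_selection import f_classif

    X, y = make_classification(n_samples=300, n_features=50, n_informative=5,
                               random_state=0)
    top = 5                                            # user-defined slice (10%)

    rank_f = np.argsort(f_classif(X, y)[0])[::-1][:top]
    rank_rf = np.argsort(RandomForestClassifier(random_state=0)
                         .fit(X, y).feature_importances_)[::-1][:top]
    rank_gb = np.argsort(GradientBoostingClassifier(random_state=0)
                         .fit(X, y).feature_importances_)[::-1][:top]

    union = sorted(set(rank_f) | set(rank_rf) | set(rank_gb))
    print("features taken to downstream analysis:", union)
    ```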

  19. Reliable and accurate point-based prediction of cumulative infiltration using soil readily available characteristics: A comparison between GMDH, ANN, and MLR

    NASA Astrophysics Data System (ADS)

    Rahmati, Mehdi

    2017-08-01

    Developing accurate and reliable pedo-transfer functions (PTFs) to predict soil non-readily available characteristics is one of the topics of greatest concern in soil science, and selecting appropriate predictors is a crucial factor in PTF development. The group method of data handling (GMDH), which finds an approximate relationship between a set of input and output variables, not only provides an explicit procedure to select the most essential PTF input variables, but also results in more accurate and reliable estimates than other commonly applied methodologies. The current research therefore aimed to apply GMDH, in comparison with multivariate linear regression (MLR) and artificial neural networks (ANN), to develop several PTFs to predict soil cumulative infiltration on a point basis at specific time intervals (0.5-45 min) using soil readily available characteristics (RACs). In this regard, soil infiltration curves as well as several soil RACs, including soil primary particles (clay (CC), silt (Si), and sand (Sa)), saturated hydraulic conductivity (Ks), bulk (Db) and particle (Dp) densities, organic carbon (OC), wet-aggregate stability (WAS), electrical conductivity (EC), and soil antecedent (θi) and field saturated (θfs) water contents, were measured at 134 different points in the Lighvan watershed, northwest of Iran. Applying the GMDH, MLR, and ANN methodologies, several PTFs were developed to predict cumulative infiltration using two sets of selected soil RACs, including and excluding Ks. According to the test data, the PTFs developed by the GMDH and MLR procedures using all soil RACs including Ks gave more accurate (E values of 0.673-0.963) and reliable (CV values lower than 11 percent) predictions of cumulative infiltration at the specific time steps. In contrast, the ANN procedure had lower accuracy (E values of 0.356-0.890) and reliability (CV values up to 50 percent) compared to GMDH and MLR. The results also revealed that excluding Ks from the input variable list caused around a 30 percent decrease in PTF accuracy for all applied procedures. However, Ks exclusion resulted in more practical PTFs, especially in the case of the GMDH network, since the remaining input variables are less time consuming to measure than Ks. In general, it is concluded that GMDH provides more accurate and reliable estimates of cumulative infiltration (a non-readily available soil characteristic) with a minimum set of input variables (2-4 inputs) and can be a promising strategy for modeling soil infiltration, combining the advantages of the ANN and MLR methodologies.

  20. Annual variability of PAH concentrations in the Potomac River watershed

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Maher, I.L.; Foster, G.D.

    1995-12-31

    Dynamics of organic contaminant transport in a large river system is influenced by annual variability in organic contaminant concentrations. Surface runoff and groundwater input control the flow of river waters. They are also the two major inputs of contaminants to river waters. The annual variability of contaminant concentrations in rivers may or may not follow trends similar to the flow changes of river waters. The purpose of the research is to define the annual variability in concentrations of polycyclic aromatic hydrocarbons (PAH) in the riverine environment. To accomplish this, from March 1992 to March 1995 samples of Potomac River water were collected monthly or bimonthly downstream of the Chesapeake Bay fall line (Chain Bridge) during base flow and main storm flow hydrologic conditions. Concentrations of selected PAHs were measured in the dissolved phase and the particulate phase via GC/MS. The study of the annual variability of PAH concentrations will be performed through comparisons of PAH concentrations seasonally and annually, and through study of the dependency of PAH concentrations on river discharge and rainfall. For selected PAHs, monthly and annual loadings will be estimated based on their measured concentrations and average daily river discharge. The monthly loadings of selected PAHs will be compared by season and annually.

  1. Latin Hypercube Sampling (LHS) UNIX Library/Standalone

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2004-05-13

    The LHS UNIX Library/Standalone software provides the capability to draw random samples from over 30 distribution types. It performs the sampling by a stratified sampling method called Latin Hypercube Sampling (LHS). Multiple distributions can be sampled simultaneously, with user-specified correlations amongst the input distributions; LHS UNIX Library/Standalone thus provides a way to generate multi-variate samples. The LHS samples can be generated either as a callable library (e.g., from within the DAKOTA software framework) or as a standalone capability. LHS UNIX Library/Standalone uses the Latin Hypercube Sampling method (LHS) to generate samples. LHS is a constrained Monte Carlo sampling scheme. In LHS, the range of each variable is divided into non-overlapping intervals on the basis of equal probability. A sample is selected at random with respect to the probability density in each interval. If multiple variables are sampled simultaneously, then values obtained for each are paired in a random manner with the n values of the other variables. In some cases, the pairing is restricted to obtain specified correlations amongst the input variables. Many simulation codes have input parameters that are uncertain and can be specified by a distribution. To perform uncertainty analysis and sensitivity analysis, random values are drawn from the input parameter distributions, and the simulation is run with these values to obtain output values. If this is done repeatedly, with many input samples drawn, one can build up a distribution of the output as well as examine correlations between input and output variables.
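
    The stratification the record describes is available off the shelf in SciPy; the sketch below draws a small Latin hypercube and maps the stratified uniforms to target distributions via inverse CDFs. This illustrates the sampling scheme only, not the LHS library's actual interface.

    ```python
    # Latin hypercube sampling: n equal-probability strata per variable,
    # each sampled exactly once, then mapped through inverse CDFs.
    import numpy as np
    from scipy.stats import norm, qmc, uniform

    n = 10
    sampler = qmc.LatinHypercube(d=2, seed=0)
    u = sampler.random(n)                          # stratified uniforms in [0,1)^2

    x0 = norm(loc=5.0, scale=1.0).ppf(u[:, 0])     # variable 1 ~ Normal(5, 1)
    x1 = uniform(loc=0.0, scale=2.0).ppf(u[:, 1])  # variable 2 ~ Uniform(0, 2)
    print(np.column_stack([x0, x1]))
    ```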

  2. Rapid Elemental Analysis and Provenance Study of Blumea balsamifera DC Using Laser-Induced Breakdown Spectroscopy

    PubMed Central

    Liu, Xiaona; Zhang, Qiao; Wu, Zhisheng; Shi, Xinyuan; Zhao, Na; Qiao, Yanjiang

    2015-01-01

    Laser-induced breakdown spectroscopy (LIBS) was applied to perform a rapid elemental analysis and provenance study of Blumea balsamifera DC. Principal component analysis (PCA) and partial least squares discriminant analysis (PLS-DA) were implemented to exploit the multivariate nature of the LIBS data. Scores and loadings of computed principal components visually illustrated the differing spectral data. The PLS-DA algorithm showed good classification performance. The PLS-DA model using complete spectra as input variables had similar discrimination performance to using selected spectral lines as input variables. The down-selection of spectral lines was specifically focused on the major elements of B. balsamifera samples. Results indicated that LIBS could be used to rapidly analyze elements and to perform provenance study of B. balsamifera. PMID:25558999

  3. An evaluation of Bayesian techniques for controlling model complexity and selecting inputs in a neural network for short-term load forecasting.

    PubMed

    Hippert, Henrique S; Taylor, James W

    2010-04-01

    Artificial neural networks have frequently been proposed for electricity load forecasting because of their capabilities for the nonlinear modelling of large multivariate data sets. Modelling with neural networks is not an easy task though; two of the main challenges are defining the appropriate level of model complexity, and choosing the input variables. This paper evaluates techniques for automatic neural network modelling within a Bayesian framework, as applied to six samples containing daily load and weather data for four different countries. We analyse input selection as carried out by the Bayesian 'automatic relevance determination', and the usefulness of the Bayesian 'evidence' for the selection of the best structure (in terms of number of neurones), as compared to methods based on cross-validation. Copyright 2009 Elsevier Ltd. All rights reserved.
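
    Automatic relevance determination is easiest to see in its linear form, where each input carries its own prior precision and irrelevant inputs are shrunk toward zero weight. sklearn's ARDRegression below is a stand-in for the paper's neural-network formulation, on synthetic load-like data.

    ```python
    # Bayesian ARD: per-input prior precisions prune irrelevant inputs.
    import numpy as np
    from sklearn.linear_model import ARDRegression

    rng = np.random.default_rng(8)
    X = rng.normal(size=(300, 10))               # lagged load + weather candidates
    y = 1.5 * X[:, 0] - 2.0 * X[:, 3] + 0.1 * rng.normal(size=300)

    ard = ARDRegression().fit(X, y)
    relevant = np.where(np.abs(ard.coef_) > 0.1)[0]   # threshold is illustrative
    print("inputs kept by ARD:", relevant)            # expect [0, 3]
    ```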

  4. Exploring objective climate classification for the Himalayan arc and adjacent regions using gridded data sources

    NASA Astrophysics Data System (ADS)

    Forsythe, N.; Blenkinsop, S.; Fowler, H. J.

    2015-05-01

    A three-step climate classification was applied to a spatial domain covering the Himalayan arc and adjacent plains regions using input data from four global meteorological reanalyses. Input variables were selected based on an understanding of the climatic drivers of regional water resource variability and crop yields. Principal component analysis (PCA) of those variables and k-means clustering on the PCA outputs revealed a reanalysis ensemble consensus for eight macro-climate zones. Spatial statistics of input variables for each zone revealed consistent, distinct climatologies. This climate classification approach has potential for enhancing assessment of climatic influences on water resources and food security as well as for characterising the skill and bias of gridded data sets, both meteorological reanalyses and climate models, for reproducing subregional climatologies. Through their spatial descriptors (area, geographic centroid, elevation mean range), climate classifications also provide metrics, beyond simple changes in individual variables, with which to assess the magnitude of projected climate change. Such sophisticated metrics are of particular interest for regions, including mountainous areas, where natural and anthropogenic systems are expected to be sensitive to incremental climate shifts.
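
    The three-step pipeline reads directly as code: standardise the climate variables, reduce them with PCA, and cluster the scores with k-means into eight macro-zones. Synthetic grid cells stand in for the reanalysis fields, and the component count is illustrative.

    ```python
    # Standardise -> PCA -> k-means, the classification pipeline described.
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.decomposition import PCA
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(9)
    cells = rng.normal(size=(1000, 6))           # grid cells x climate variables

    scores = PCA(n_components=3).fit_transform(
        StandardScaler().fit_transform(cells))
    zones = KMeans(n_clusters=8, n_init=10, random_state=0).fit_predict(scores)
    print(np.bincount(zones))                    # cell count per macro-zone
    ```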

  5. Artificial Neural Network and Genetic Algorithm Hybrid Intelligence for Predicting Thai Stock Price Index Trend

    PubMed Central

    Boonjing, Veera; Intakosum, Sarun

    2016-01-01

    This study investigated the use of an Artificial Neural Network (ANN) and a Genetic Algorithm (GA) for prediction of Thailand's SET50 index trend. ANN is a widely accepted machine learning method that uses past data to predict future trends, while GA is an algorithm that can find better subsets of input variables for importing into ANN, enabling more accurate prediction through efficient feature selection. The imported data were technical indicators highly regarded by stock analysts, each represented by 4 input variables based on past time spans of 4 different lengths: 3-, 5-, 10-, and 15-day spans before the day of prediction. This import generated a large set of diverse input variables with an exponentially higher number of possible subsets, which GA culled down to a manageable number of more effective ones. SET50 index data of the past 6 years, from 2009 to 2014, were used to evaluate the prediction accuracy of this hybrid intelligence, and the hybrid's prediction results were found to be more accurate than those made by a method using only one input variable for one fixed length of past time span. PMID:27974883

  6. Artificial Neural Network and Genetic Algorithm Hybrid Intelligence for Predicting Thai Stock Price Index Trend.

    PubMed

    Inthachot, Montri; Boonjing, Veera; Intakosum, Sarun

    2016-01-01

    This study investigated the use of an Artificial Neural Network (ANN) and a Genetic Algorithm (GA) for prediction of Thailand's SET50 index trend. ANN is a widely accepted machine learning method that uses past data to predict future trends, while GA is an algorithm that can find better subsets of input variables for importing into ANN, enabling more accurate prediction through efficient feature selection. The imported data were technical indicators highly regarded by stock analysts, each represented by 4 input variables based on past time spans of 4 different lengths: 3-, 5-, 10-, and 15-day spans before the day of prediction. This import generated a large set of diverse input variables with an exponentially higher number of possible subsets, which GA culled down to a manageable number of more effective ones. SET50 index data of the past 6 years, from 2009 to 2014, were used to evaluate the prediction accuracy of this hybrid intelligence, and the hybrid's prediction results were found to be more accurate than those made by a method using only one input variable for one fixed length of past time span.

  7. Prediction of municipal solid waste generation using nonlinear autoregressive network.

    PubMed

    Younes, Mohammad K; Nopiah, Z M; Basri, N E Ahmad; Basri, H; Abushammala, Mohammed F M; Maulud, K N A

    2015-12-01

    Most developing countries have solid waste management problems. Solid waste strategic planning requires accurate prediction of the quality and quantity of the generated waste. In developing countries, such as Malaysia, the solid waste generation rate is increasing rapidly, due to population growth and the new consumption trends that characterize society. This paper proposes an artificial neural network (ANN) approach using a feedforward nonlinear autoregressive network with exogenous inputs (NARX) to predict annual solid waste generation in relation to demographic and economic variables such as population, gross domestic product, electricity demand per capita, and employment and unemployment numbers. In addition, variable selection procedures are developed to select significant explanatory variables. The model evaluation was performed using the coefficient of determination (R(2)) and the mean square error (MSE). The optimum model, which produced the lowest testing MSE (2.46) and the highest R(2) (0.97), had three inputs (gross domestic product, population and employment), eight neurons and one lag in the hidden layer, and used Fletcher-Powell's conjugate gradient as the training algorithm.

  8. A non-linear data mining parameter selection algorithm for continuous variables

    PubMed Central

    Razavi, Marianne; Brady, Sean

    2017-01-01

    In this article, we propose a new data mining algorithm by which one can both capture the non-linearity in the data and find the best subset model. To produce an enhanced subset of the original variables, a preferred selection method should have the potential of adding a supplementary level of regression analysis that captures complex relationships in the data via mathematical transformation of the predictors and exploration of synergistic effects of combined variables. The method presented here has the potential to produce an optimal subset of variables, rendering the overall process of model selection more efficient. The algorithm introduces interpretable parameters by transforming the original inputs and provides a faithful fit to the data. The core objective of this paper is to introduce a new estimation technique for the classical least squares regression framework. This new automatic variable transformation and model selection method can offer an optimal and stable model that minimizes the mean square error and variability, while combining all-possible-subset selection methodology with the inclusion of variable transformations and interactions. Moreover, this method controls multicollinearity, leading to an optimal set of explanatory variables. PMID:29131829

  9. Variable current speed controller for eddy current motors

    DOEpatents

    Gerth, H.L.; Bailey, J.M.; Casstevens, J.M.; Dixon, J.H.; Griffith, B.O.; Igou, R.E.

    1982-03-12

    A speed control system for eddy current motors is provided in which the current to the motor from a constant frequency power source is varied by comparing the actual motor speed signal with a setpoint speed signal, controlling the motor speed according to the selected setpoint speed. A three-phase variable voltage autotransformer is provided for controlling the voltage from a three-phase power supply. A corresponding plurality of current control resistors is provided in series with each phase of the autotransformer output connected to the inputs of a three-phase motor. Each resistor is connected in parallel with a set of normally closed contacts of a plurality of relays which are operated by control logic. A logic circuit compares the selected speed with the actual motor speed obtained from a digital tachometer monitoring the motor spindle speed and operates the relays to add or subtract resistance equally in each phase of the motor input, varying the motor current to control the motor at the selected speed.

  10. Forecasting of cyanobacterial density in Torrão reservoir using artificial neural networks.

    PubMed

    Torres, Rita; Pereira, Elisa; Vasconcelos, Vítor; Teles, Luís Oliva

    2011-06-01

    The ability of general regression neural networks (GRNN) to forecast the density of cyanobacteria in the Torrão reservoir (Tâmega river, Portugal) over a period of 15 days, based on three years of collected physical and chemical data, was assessed. Several models were developed, and 176 were selected based on their correlation values for the verification series. A time lag of 11 was used, equivalent to one sample (periods of 15 days in the summer and 30 days in the winter). Several combinations of the series were used. Input and output data collected from three depths of the reservoir were applied (surface, euphotic zone limit and bottom). The model with the highest average correlation value presented correlations of 0.991, 0.843 and 0.978 for the training, verification and test series. This model had the three series independent in time: first the test series, then the verification series and, finally, the training series. Only six input variables were considered significant to the performance of this model: ammonia, phosphates, dissolved oxygen, water temperature, pH and water evaporation, physical and chemical parameters referring to the three depths of the reservoir. These variables are common to the next four best models produced and, although these included other input variables, their performance was not better than that of the selected best model.

  11. [Application of characteristic NIR variables selection in portable detection of soluble solids content of apple by near infrared spectroscopy].

    PubMed

    Fan, Shu-Xiang; Huang, Wen-Qian; Li, Jiang-Bo; Guo, Zhi-Ming; Zhao, Chun-Jiang

    2014-10-01

    In order to detect the soluble solids content (SSC) of apple conveniently and rapidly, a ring fiber probe and a portable spectrometer were applied to obtain the spectra of apple. Different wavelength variable selection methods, including uninformative variable elimination (UVE), competitive adaptive reweighted sampling (CARS) and a genetic algorithm (GA), were proposed to select effective wavelength variables for the SSC in apple from the NIR spectra based on PLS. The backward interval LS-SVM (BiLS-SVM) and GA were used to select effective wavelength variables based on LS-SVM. The selected wavelength variables and the full wavelength range were set as input variables of the PLS model and the LS-SVM model, respectively. The results indicated that the PLS model built on 50 characteristic variables selected by GA-CARS from the full spectrum of 1512 wavelengths achieved the optimal performance. The correlation coefficient (Rp) and root mean square error of prediction (RMSEP) for the prediction set were 0.962 and 0.403°Brix, respectively, for SSC. The proposed GA-CARS method can effectively simplify the portable detection model of SSC in apple based on near infrared spectroscopy and enhance the predictive precision. The study can provide a reference for the development of a portable apple soluble solids content spectrometer.

  12. Comparison of climate envelope models developed using expert-selected variables versus statistical selection

    USGS Publications Warehouse

    Brandt, Laura A.; Benscoter, Allison; Harvey, Rebecca G.; Speroterra, Carolina; Bucklin, David N.; Romañach, Stephanie; Watling, James I.; Mazzotti, Frank J.

    2017-01-01

    Climate envelope models are widely used to describe potential future distribution of species under different climate change scenarios. It is broadly recognized that there are both strengths and limitations to using climate envelope models and that outcomes are sensitive to initial assumptions, inputs, and modeling methods. Selection of predictor variables, a central step in modeling, is one of the areas where different techniques can yield varying results. Selection of climate variables to use as predictors is often done using statistical approaches that develop correlations between occurrences and climate data. These approaches have received criticism in that they rely on the statistical properties of the data rather than directly incorporating biological information about species responses to temperature and precipitation. We evaluated and compared models and prediction maps for 15 threatened or endangered species in Florida based on two variable selection techniques: expert opinion and a statistical method. We compared model performance between these two approaches for contemporary predictions, and the spatial correlation, spatial overlap and area predicted for contemporary and future climate predictions. In general, experts identified more variables as being important than the statistical method and there was low overlap in the variable sets (<40%) between the two methods. Despite these differences in variable sets (expert versus statistical), models had high performance metrics (>0.9 for area under the curve (AUC) and >0.7 for true skill statistic (TSS)). Spatial overlap, which compares the spatial configuration between maps constructed using the different variable selection techniques, was only moderate overall (about 60%), with a great deal of variability across species. Difference in spatial overlap was even greater under future climate projections, indicating additional divergence of model outputs from different variable selection techniques. Our work is in agreement with other studies which have found that for broad-scale species distribution modeling, using statistical methods of variable selection is a useful first step, especially when there is a need to model a large number of species or expert knowledge of the species is limited. Expert input can then be used to refine models that seem unrealistic or for species that experts believe are particularly sensitive to change. It also emphasizes the importance of using multiple models to reduce uncertainty and improve map outputs for conservation planning. Where outputs overlap or show the same direction of change there is greater certainty in the predictions. Areas of disagreement can be used for learning by asking why the models do not agree, and may highlight areas where additional on-the-ground data collection could improve the models.

  13. Methodological development for selection of significant predictors explaining fatal road accidents.

    PubMed

    Dadashova, Bahar; Arenas-Ramírez, Blanca; Mira-McWilliams, José; Aparicio-Izquierdo, Francisco

    2016-05-01

    Identification of the most relevant factors explaining road accident occurrence is an important issue in road safety research, particularly for future decision-making processes in transport policy. However, model selection for this particular purpose is still ongoing research. In this paper we propose a methodological development for model selection which addresses both explanatory variable selection and adequate model selection. A variable selection procedure, the TIM (two-input model) method, is carried out by combining neural network design and statistical approaches. The error structure of the fitted model is assumed to follow an autoregressive process. All models are estimated using the Markov Chain Monte Carlo method, where the model parameters are assigned non-informative prior distributions. The final model is built using the results of the variable selection. For the application of the proposed methodology, the number of fatal accidents in Spain during 2000-2011 was used. This indicator has experienced the largest reduction internationally during the indicated years, making it an interesting time series from a road safety policy perspective. Hence the identification of the variables that have affected this reduction is of particular interest for future decision making. The results of the variable selection process show that the selected variables are main subjects of road safety policy measures. Published by Elsevier Ltd.

  14. Data-driven process decomposition and robust online distributed modelling for large-scale processes

    NASA Astrophysics Data System (ADS)

    Shu, Zhang; Lijuan, Li; Lijuan, Yao; Shipin, Yang; Tao, Zou

    2018-02-01

    With the increasing attention paid to networked control, system decomposition and distributed models are of significant importance in the implementation of model-based control strategies. In this paper, a data-driven system decomposition and online distributed subsystem modelling algorithm is proposed for large-scale chemical processes. The key controlled variables are first partitioned into several clusters by the affinity propagation clustering algorithm, and each cluster can be regarded as a subsystem. Then the inputs of each subsystem are selected by offline canonical correlation analysis between all process variables and the subsystem's controlled variables. Process decomposition is thus realised after the screening of input and output variables. Once the system decomposition is finished, online subsystem modelling is carried out by recursively renewing the samples block-wise. The proposed algorithm was applied to the Tennessee Eastman process and its validity was verified.
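
    The decomposition step can be sketched with sklearn's affinity propagation: controlled variables are clustered on their mutual correlation pattern, and each cluster becomes a subsystem. The synthetic data below plant three latent dynamics, so roughly three subsystems should emerge; this is an illustration, not the paper's full pipeline.

    ```python
    # Cluster controlled variables into subsystems by affinity propagation
    # on a precomputed correlation (similarity) matrix.
    import numpy as np
    from sklearn.cluster import AffinityPropagation

    rng = np.random.default_rng(10)
    base = rng.normal(size=(500, 3))             # three latent process dynamics
    Y = np.repeat(base, 3, axis=1) + 0.1 * rng.normal(size=(500, 9))

    similarity = np.corrcoef(Y.T)                # variable-by-variable affinity
    ap = AffinityPropagation(affinity="precomputed", random_state=0)
    labels = ap.fit_predict(similarity)
    print("subsystem label per controlled variable:", labels)
    ```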

  15. Variable Selection through Correlation Sifting

    NASA Astrophysics Data System (ADS)

    Huang, Jim C.; Jojic, Nebojsa

    Many applications of computational biology require a variable selection procedure to sift through a large number of input variables and select some smaller number that influence a target variable of interest. For example, in virology, only some small number of viral protein fragments influence the nature of the immune response during viral infection. Due to the large number of variables to be considered, a brute-force search for the subset of variables is in general intractable. To approximate this, methods based on ℓ1-regularized linear regression have been proposed and have been found to be particularly successful. It is well understood however that such methods fail to choose the correct subset of variables if these are highly correlated with other "decoy" variables. We present a method for sifting through sets of highly correlated variables which leads to higher accuracy in selecting the correct variables. The main innovation is a filtering step that reduces correlations among variables to be selected, making the ℓ1-regularization effective for datasets on which many methods for variable selection fail. The filtering step changes both the values of the predictor variables and output values by projections onto components obtained through a computationally-inexpensive principal components analysis. In this paper we demonstrate the usefulness of our method on synthetic datasets and on novel applications in virology. These include HIV viral load analysis based on patients' HIV sequences and immune types, as well as the analysis of seasonal variation in influenza death rates based on the regions of the influenza genome that undergo diversifying selection in the previous season.
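
    A minimal sketch of the filtering idea on toy data (the component count k, the Lasso penalty, and the data itself are illustrative assumptions, not the authors' settings): remove the leading principal components, which carry the shared variation that makes correlated "decoy" variables hard to separate, from both the predictors and the output, then run the ℓ1-regularized fit:

      import numpy as np
      from sklearn.decomposition import PCA
      from sklearn.linear_model import Lasso

      rng = np.random.default_rng(0)
      n, p = 200, 50
      X = rng.normal(size=(n, p))
      X[:, 1] = X[:, 0] + 0.3 * rng.normal(size=n)    # a correlated "decoy" for variable 0
      y = 2.0 * X[:, 0] + rng.normal(scale=0.5, size=n)

      k = 1  # number of leading components to project out (an illustrative choice)
      pca = PCA(n_components=k).fit(X)
      scores = pca.transform(X)                       # n x k component scores
      X_f = X - scores @ pca.components_              # predictors with top components removed
      beta, *_ = np.linalg.lstsq(scores, y, rcond=None)
      y_f = y - scores @ beta                         # same projection applied to the output

      model = Lasso(alpha=0.01).fit(X_f, y_f)
      print("selected variables:", np.nonzero(model.coef_)[0])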

  16. A study of commuter airline economics

    NASA Technical Reports Server (NTRS)

    Summerfield, J. R.

    1976-01-01

    Variables are defined and cost relationships developed that describe the direct and indirect operating costs of commuter airlines. The study focused on costs for new aircraft and new aircraft technology when applied to the commuter airline industry. With proper judgement and selection of input variables, the operating costs model was shown to be capable of providing economic insight into other commuter airline system evaluations.

  17. Temperature variability is a key component in accurately forecasting the effects of climate change on pest phenology.

    PubMed

    Merrill, Scott C; Peairs, Frank B

    2017-02-01

    Models describing the effects of climate change on arthropod pest ecology are needed to help mitigate and adapt to forthcoming changes. Challenges arise because climate data are at resolutions that do not readily synchronize with arthropod biology. Here we explain how multiple sources of climate and weather data can be synthesized to quantify the effects of climate change on pest phenology. Predictions of phenological events differ substantially between models that incorporate scale-appropriate temperature variability and models that do not. As an illustrative example, we predicted adult emergence of a pest of sunflower, the sunflower stem weevil Cylindrocopturus adspersus (LeConte). Predictions of the timing of phenological events differed by an average of 11 days between models with different temperature variability inputs. Moreover, as temperature variability increases, developmental rates accelerate. Our work details a phenological modeling approach intended to help develop tools to plan for and mitigate the effects of climate change. Results show that selection of scale-appropriate temperature data is of more importance than selecting a climate change emission scenario. Predictions derived without appropriate temperature variability inputs will likely result in substantial phenological event miscalculations. Additionally, results suggest that increased temperature instability will lead to accelerated pest development. © 2016 Society of Chemical Industry.
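
    A toy degree-day illustration of the effect described (not the authors' model; the threshold, mean temperature and degree-day requirement are invented): with a lower developmental threshold, variable temperatures accumulate degree-days faster than the same mean temperature held constant, so predicted emergence moves earlier:

      import numpy as np

      rng = np.random.default_rng(1)
      base, mean_t, dd_req = 10.0, 12.0, 150.0   # threshold, mean temp (C), degree-days needed

      def emergence_day(daily_temps, req=dd_req):
          dd = np.cumsum(np.clip(daily_temps - base, 0.0, None))
          return int(np.argmax(dd >= req)) if dd[-1] >= req else None   # 0-based day index

      days = 365
      constant = np.full(days, mean_t)
      variable = mean_t + rng.normal(scale=4.0, size=days)   # daily temperature variability

      print("constant-mean emergence day:", emergence_day(constant))
      print("variable-temperature day:   ", emergence_day(variable))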

  18. Application of neural networks and sensitivity analysis to improved prediction of trauma survival.

    PubMed

    Hunter, A; Kennedy, L; Henry, J; Ferguson, I

    2000-05-01

    The performance of trauma departments is widely audited by applying predictive models that assess probability of survival, and examining the rate of unexpected survivals and deaths. Although the TRISS methodology, a logistic regression modelling technique, is still the de facto standard, it is known that neural network models perform better. A key issue when applying neural network models is the selection of input variables. This paper proposes a novel form of sensitivity analysis, which is simpler to apply than existing techniques, and can be used for both numeric and nominal input variables. The technique is applied to the audit survival problem, and used to analyse the TRISS variables. The conclusions discuss the implications for the design of further improved scoring schemes and predictive models.
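
    The paper's specific sensitivity analysis is not reproduced here; the sketch below is a generic permutation-style stand-in that shares the property claimed above, working uniformly for numeric and nominal inputs, by shuffling one input column at a time and measuring how far the network's predicted survival probabilities move:

      import numpy as np
      from sklearn.neural_network import MLPClassifier

      def input_sensitivity(model, X, n_repeats=10, seed=0):
          rng = np.random.default_rng(seed)
          base = model.predict_proba(X)[:, 1]
          scores = []
          for j in range(X.shape[1]):
              deltas = []
              for _ in range(n_repeats):
                  Xp = X.copy()
                  Xp[:, j] = rng.permutation(Xp[:, j])   # destroy column j's information
                  deltas.append(np.mean(np.abs(model.predict_proba(Xp)[:, 1] - base)))
              scores.append(np.mean(deltas))
          return np.array(scores)                        # larger = more influential input

      rng = np.random.default_rng(0)
      X = rng.normal(size=(300, 5))
      y = (X[:, 0] + 0.5 * X[:, 2] > 0).astype(int)
      clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0).fit(X, y)
      print(input_sensitivity(clf, X).round(3))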

  19. Image Display and Manipulation System (IDAMS) program documentation, Appendixes A-D. [including routines, convolution filtering, image expansion, and fast Fourier transformation

    NASA Technical Reports Server (NTRS)

    Cecil, R. W.; White, R. A.; Szczur, M. R.

    1972-01-01

    The IDAMS Processor is a package of task routines and support software that performs convolution filtering, image expansion, fast Fourier transformation, and other operations on a digital image tape. A unique task control card for that program, together with any necessary parameter cards, selects each processing technique to be applied to the input image. A variable number of tasks can be selected for execution by including the proper task and parameter cards in the input deck. An executive maintains control of the run; it initiates execution of each task in turn and handles any necessary error processing.

  20. Community models for wildlife impact assessment: a review of concepts and approaches

    USGS Publications Warehouse

    Schroeder, Richard L.

    1987-01-01

    The first two sections of this paper are concerned with defining and bounding communities, and describing those attributes of the community that are quantifiable and suitable for wildlife impact assessment purposes. Prior to the development or use of a community model, it is important to have a clear understanding of the concept of a community and a knowledge of the types of community attributes that can serve as outputs for the development of models. Clearly defined, unambiguous model outputs are essential for three reasons: (1) to ensure that the measured community attributes relate to the wildlife resource objectives of the study; (2) to allow testing of the outputs in experimental studies, to determine accuracy, and to allow for improvements based on such testing; and (3) to enable others to clearly understand the community attribute that has been measured. The third section of this paper describes input variables that may be used to predict various community attributes. These input variables do not include direct measures of wildlife populations. Most impact assessments involve projects that result in drastic changes in habitat, such as changes in land use, vegetation, or available area. Therefore, the model input variables described in this section deal primarily with habitat-related features. Several existing community models are described in the fourth section of this paper. A general description of each model is provided, including the nature of the input variables and the model output. The logic and assumptions of each model are discussed, along with data requirements needed to use the model. The fifth section provides guidance on the selection and development of community models. Identification of the community attribute that is of concern will determine the type of model most suitable for a particular application. This section provides guidelines on selecting an existing model, as well as a discussion of the major steps to be followed in modifying an existing model or developing a new model. Considerations associated with the use of community models with the Habitat Evaluation Procedures are also discussed. The final section of the paper summarizes major findings of interest to field biologists and provides recommendations concerning the implementation of selected concepts in wildlife community analyses.

  1. Machine learning for toxicity characterization of organic chemical emissions using USEtox database: Learning the structure of the input space.

    PubMed

    Marvuglia, Antonino; Kanevski, Mikhail; Benetto, Enrico

    2015-10-01

    Toxicity characterization of chemical emissions in Life Cycle Assessment (LCA) is a complex task which usually proceeds via multimedia (fate, exposure and effect) models attached to models of dose-response relationships to assess the effects on target. Different models and approaches do exist, but all require a vast amount of data on the properties of the chemical compounds being assessed, which are hard to collect or rarely publicly available (especially for thousands of less common or newly developed chemicals), therefore hampering the assessment in LCA practice. An example is USEtox, a consensual model for the characterization of human toxicity and freshwater ecotoxicity. This paper places itself in a line of research aiming to provide a methodology to reduce the number of input parameters necessary to run multimedia fate models, focusing in particular on the application of the USEtox toxicity model. By focusing on USEtox, in this paper two main goals are pursued: 1) performing an extensive exploratory analysis (using dimensionality reduction techniques) of the input space constituted by the substance-specific properties, with the aim of detecting particular patterns in the data manifold and estimating the dimension of the subspace in which the data manifold actually lies; and 2) exploring the application of a set of linear models, based on partial least squares (PLS) regression, as well as a nonlinear model (general regression neural network--GRNN), in the search for an automatic selection strategy of the most informative variables according to the modelled output (USEtox factor). After extensive analysis, the intrinsic dimension of the input manifold has been identified as between three and four. The variables selected as most informative may vary according to the output modelled and the model used, but for the toxicity factors modelled in this paper the input variables selected as most informative are consistent with prior expectations based on scientific knowledge of toxicity factor modelling. Thus the outcomes of the analysis are promising for the future application of the approach to other portions of the model affected by important data gaps, e.g., the calculation of human health effect factors. Copyright © 2015. Published by Elsevier Ltd.

  2. Stochastic Modeling of Radioactive Material Releases

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Andrus, Jason; Pope, Chad

    2015-09-01

    Nonreactor nuclear facilities operated under the approval authority of the U.S. Department of Energy use unmitigated hazard evaluations to determine if potential radiological doses associated with design basis events challenge or exceed dose evaluation guidelines. Unmitigated design basis events that sufficiently challenge dose evaluation guidelines or exceed the guidelines for members of the public or workers merit selection of safety structures, systems, or components or other controls to prevent or mitigate the hazard. Idaho State University, in collaboration with Idaho National Laboratory, has developed a portable and simple-to-use software application called SODA (Stochastic Objective Decision-Aide) that stochastically calculates the radiation dose associated with hypothetical radiological material release scenarios. Rather than producing a point estimate of the dose, SODA produces a dose distribution result to allow a deeper understanding of the dose potential. SODA allows users to select the distribution type and parameter values for all of the input variables used to perform the dose calculation. SODA then randomly samples each distribution input variable and calculates the overall resulting dose distribution. In cases where an input variable distribution is unknown, a traditional single point value can be used. SODA was developed using the MATLAB coding framework. The software application has a graphical user interface. SODA can be installed on both Windows and Mac computers and does not require MATLAB to function. SODA provides improved risk understanding leading to better informed decision making associated with establishing nuclear facility material-at-risk limits and safety structure, system, or component selection. It is important to note that SODA does not replace or compete with codes such as MACCS or RSAC; rather it is viewed as an easy-to-use supplemental tool to help improve risk understanding and support better informed decisions. The work was funded through a grant from the DOE Nuclear Safety Research and Development Program.
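
    The sampling pattern the abstract describes reduces to something like the sketch below; the dose equation and every distribution and constant in it are placeholders for illustration, not SODA's actual models or values:

      import numpy as np

      rng = np.random.default_rng(42)
      n = 100_000

      # one distribution per input variable; fixed point values are also allowed
      source_term = rng.lognormal(mean=np.log(50.0), sigma=0.5, size=n)   # Ci at risk
      release_frac = rng.uniform(1e-3, 1e-2, size=n)                      # fraction released
      dispersion = rng.triangular(1e-6, 5e-6, 2e-5, size=n)               # X/Q, s/m^3
      breathing = 3.3e-4                                                  # m^3/s, point value
      dcf = 5.0e4                                                         # rem/Ci inhaled (placeholder)

      dose = source_term * release_frac * dispersion * breathing * dcf    # rem
      print("mean dose %.3g rem" % dose.mean())
      print("95th percentile %.3g rem" % np.percentile(dose, 95))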

  3. ENSO detection and use to inform the operation of large scale water systems

    NASA Astrophysics Data System (ADS)

    Pham, Vuong; Giuliani, Matteo; Castelletti, Andrea

    2016-04-01

    El Nino Southern Oscillation (ENSO) is a large-scale, coupled ocean-atmosphere phenomenon occurring in the tropical Pacific Ocean, and is considered one of the most significant factors causing hydro-climatic anomalies throughout the world. Water systems operations could benefit from a better understanding of this global phenomenon, which has the potential for enhancing the accuracy and lead time of long-range streamflow predictions. In turn, these are key to designing interannual water transfers in large-scale water systems to counter the increasingly frequent extremes induced by a changing climate. Although the ENSO teleconnection is well defined in some locations such as the Western USA and Australia, there is no consensus on how it can be detected and used in other river basins, particularly in Europe, Africa, and Asia. In this work, we contribute a general framework relying on Input Variable Selection techniques for detecting the ENSO teleconnection and using this information to improve water reservoir operations. The core of our procedure is the Iterative Input variable Selection (IIS) algorithm, which is employed to find the most relevant determinants of streamflow variability for deriving predictive models based on the selected inputs, as well as to find the most valuable information for conditioning operating decisions. Our framework is applied to the multipurpose operations of the Hoa Binh reservoir in the Red River basin (Vietnam), taking into account hydropower production, water supply for irrigation, and flood mitigation during the monsoon season. Numerical results show that our framework is able to quantify the relationship between the ENSO fluctuations and the Red River basin hydrology. Moreover, we demonstrate that such an ENSO teleconnection represents valuable information for improving the operations of the Hoa Binh reservoir.
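
    In outline, an IIS-style loop can be sketched as below; this is a simplification of the published algorithm (the ensemble choice, the linear evaluation model and the stopping rule are assumptions made for brevity):

      import numpy as np
      from sklearn.ensemble import ExtraTreesRegressor
      from sklearn.linear_model import LinearRegression
      from sklearn.model_selection import cross_val_score

      def iis(X, y, max_inputs=5, tol=1e-3):
          selected, residual, best_score = [], y.copy(), -np.inf
          for _ in range(max_inputs):
              rest = [j for j in range(X.shape[1]) if j not in selected]
              ranker = ExtraTreesRegressor(n_estimators=200, random_state=0)
              ranker.fit(X[:, rest], residual)           # rank candidates on current residuals
              cand = rest[int(np.argmax(ranker.feature_importances_))]
              trial = selected + [cand]
              score = cross_val_score(LinearRegression(), X[:, trial], y, cv=5).mean()
              if score <= best_score + tol:              # stop when the gain stalls
                  break
              selected, best_score = trial, score
              fit = LinearRegression().fit(X[:, trial], y)
              residual = y - fit.predict(X[:, trial])
          return selected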

  4. A data mining framework for time series estimation.

    PubMed

    Hu, Xiao; Xu, Peng; Wu, Shaozhi; Asgari, Shadnaz; Bergsneider, Marvin

    2010-04-01

    Time series estimation techniques are usually employed in biomedical research to derive variables less accessible from a set of related and more accessible variables. These techniques are traditionally built from systems modeling approaches including simulation, blind deconvolution, and state estimation. In this work, we define target time series (TTS) and its related time series (RTS) as the output and input of a time series estimation process, respectively. We then propose a novel data mining framework for time series estimation when TTS and RTS represent different sets of observed variables from the same dynamic system. This is made possible by mining a database of instances of TTS, its simultaneously recorded RTS, and the input/output dynamic models between them. The key mining strategy is to formulate a mapping function for each TTS-RTS pair in the database that translates a feature vector extracted from RTS to the dissimilarity between true TTS and its estimate from the dynamic model associated with the same TTS-RTS pair. At run time, a feature vector is extracted from an inquiry RTS and supplied to the mapping function associated with each TTS-RTS pair to calculate a dissimilarity measure. An optimal TTS-RTS pair is then selected by analyzing these dissimilarity measures. The associated input/output model of the selected TTS-RTS pair is then used to simulate the TTS given the inquiry RTS as an input. An exemplary implementation was built to address a biomedical problem of noninvasive intracranial pressure assessment. The performance of the proposed method was superior to that of a simple training-free approach of finding the optimal TTS-RTS pair by a conventional similarity-based search on RTS features. 2009 Elsevier Inc. All rights reserved.

  5. A technique for pole-zero placement for dual-input control systems. [computer simulation of CH-47 helicopter longitudinal dynamics

    NASA Technical Reports Server (NTRS)

    Reid, G. F.

    1976-01-01

    A technique is presented for determining state variable feedback gains that will place both the poles and zeros of a selected transfer function of a dual-input control system at pre-determined locations in the s-plane. Leverrier's algorithm is used to determine the numerator and denominator coefficients of the closed-loop transfer function as functions of the feedback gains. The values of gain that match these coefficients to those of a pre-selected model are found by solving two systems of linear simultaneous equations. The algorithm has been used in a computer simulation of the CH-47 helicopter to control longitudinal dynamics.
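
    Leverrier's algorithm itself is compact; below is a standalone implementation (Faddeev-LeVerrier form) returning the characteristic-polynomial coefficients of a state matrix, which is the building block the abstract applies to both numerator and denominator coefficients. The example matrix is illustrative:

      import numpy as np

      def leverrier(A):
          """Coefficients [1, c_{n-1}, ..., c_0] of det(sI - A)."""
          n = A.shape[0]
          coeffs = [1.0]
          M = np.zeros_like(A, dtype=float)
          for k in range(1, n + 1):
              M = A @ M + coeffs[-1] * np.eye(n)   # M_k = A M_{k-1} + c_{n-k+1} I
              coeffs.append(-np.trace(A @ M) / k)  # c_{n-k} = -tr(A M_k) / k
          return np.array(coeffs)

      A = np.array([[0.0, 1.0], [-2.0, -3.0]])
      print(leverrier(A))   # -> [1. 3. 2.], i.e. s^2 + 3s + 2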

  6. Development of the TACOM (Tank Automotive Command) Thermal Imaging Model (TTIM). Volume 1. Technical Guide and User’s Manual.

    DTIC Science & Technology

    1984-12-01

    Excerpts from the report's routine description tables survive in this record. Top-level routines: BLOCK DATA: default values for variables input by menus. LIBR: interface with frame I/O routines. SNSR: interface with sensor routines. ATMOS: interface with... Routines included in the frame I/O interface: LIBR: selects options for input or output to a data library. FRREAD: reads frame from file and/or...

  7. Methane Dual Expander Aerospike Nozzle Rocket Engine

    DTIC Science & Technology

    2012-03-22

    ...include O/F ratio, thrust, and engine geometry. After thousands of iterations over the design space, the selected MDEAN engine concept has 349 s of... Table 7: Fluid Property Table Supported Parameters. Table 8: Fluid Property Input Data Independent Variable Ranges. Table 9: ...

  8. UNCERTAINTY ANALYSIS IN WATER QUALITY MODELING USING QUAL2E

    EPA Science Inventory

    A strategy for incorporating uncertainty analysis techniques (sensitivity analysis, first order error analysis, and Monte Carlo simulation) into the mathematical water quality model QUAL2E is described. The model, named QUAL2E-UNCAS, automatically selects the input variables or p...

  9. Stochastic empirical loading and dilution model (SELDM) version 1.0.0

    USGS Publications Warehouse

    Granato, Gregory E.

    2013-01-01

    The Stochastic Empirical Loading and Dilution Model (SELDM) is designed to transform complex scientific data into meaningful information about the risk of adverse effects of runoff on receiving waters, the potential need for mitigation measures, and the potential effectiveness of such management measures for reducing these risks. The U.S. Geological Survey developed SELDM in cooperation with the Federal Highway Administration to help develop planning-level estimates of event mean concentrations, flows, and loads in stormwater from a site of interest and from an upstream basin. Planning-level estimates are defined as the results of analyses used to evaluate alternative management measures; planning-level estimates are recognized to include substantial uncertainties (commonly orders of magnitude). SELDM uses information about a highway site, the associated receiving-water basin, precipitation events, stormflow, water quality, and the performance of mitigation measures to produce a stochastic population of runoff-quality variables. SELDM provides input statistics for precipitation, prestorm flow, runoff coefficients, and concentrations of selected water-quality constituents from National datasets. Input statistics may be selected on the basis of the latitude, longitude, and physical characteristics of the site of interest and the upstream basin. The user also may derive and input statistics for each variable that are specific to a given site of interest or a given area. SELDM is a stochastic model because it uses Monte Carlo methods to produce the random combinations of input variable values needed to generate the stochastic population of values for each component variable. SELDM calculates the dilution of runoff in the receiving waters and the resulting downstream event mean concentrations and annual average lake concentrations. Results are ranked, and plotting positions are calculated, to indicate the level of risk of adverse effects caused by runoff concentrations, flows, and loads on receiving waters by storm and by year. Unlike deterministic hydrologic models, SELDM is not calibrated by changing values of input variables to match a historical record of values. Instead, input values for SELDM are based on site characteristics and representative statistics for each hydrologic variable. Thus, SELDM is an empirical model based on data and statistics rather than theoretical physiochemical equations. SELDM is a lumped parameter model because the highway site, the upstream basin, and the lake basin each are represented as a single homogeneous unit. Each of these source areas is represented by average basin properties, and results from SELDM are calculated as point estimates for the site of interest. Use of the lumped parameter approach facilitates rapid specification of model parameters to develop planning-level estimates with available data. The approach allows for parsimony in the required inputs to and outputs from the model and flexibility in the use of the model. For example, SELDM can be used to model runoff from various land covers or land uses by using the highway-site definition as long as representative water quality and impervious-fraction data are available.

  10. An evaluation of unsupervised and supervised learning algorithms for clustering landscape types in the United States

    USGS Publications Warehouse

    Wendel, Jochen; Buttenfield, Barbara P.; Stanislawski, Larry V.

    2016-01-01

    Knowledge of landscape type can inform cartographic generalization of hydrographic features, because landscape characteristics provide an important geographic context that affects variation in channel geometry, flow pattern, and network configuration. Landscape types are characterized by expansive spatial gradients, lacking abrupt changes between adjacent classes, and by a limited number of outliers that might confound classification. The US Geological Survey (USGS) is exploring methods to automate generalization of features in the National Hydrography Dataset (NHD), to associate specific sequences of processing operations and parameters with specific landscape characteristics, thus obviating manual selection of a unique processing strategy for every NHD watershed unit. A chronology of methods to delineate physiographic regions for the United States is described, including a recent maximum likelihood classification based on seven input variables. This research compares unsupervised and supervised algorithms applied to these seven input variables, to evaluate and possibly refine the recent classification. Evaluation metrics for unsupervised methods include the Davies–Bouldin index, the Silhouette index, and the Dunn index as well as quantization and topographic error metrics. Cross validation and misclassification rate analysis are used to evaluate supervised classification methods. The paper reports the comparative analysis and its impact on the selection of landscape regions. The compared solutions show problems in areas of high landscape diversity. There is some indication that additional input variables, additional classes, or more sophisticated methods can refine the existing classification.

  11. Model predictive controller design for boost DC-DC converter using T-S fuzzy cost function

    NASA Astrophysics Data System (ADS)

    Seo, Sang-Wha; Kim, Yong; Choi, Han Ho

    2017-11-01

    This paper proposes a Takagi-Sugeno (T-S) fuzzy method to select the cost function weights of finite control set model predictive DC-DC converter control algorithms. The proposed method updates the cost function weights at every sample time by using T-S type fuzzy rules derived from the common optimal control engineering knowledge that a state or input variable with an excessively large magnitude can be penalised by increasing the weight corresponding to that variable. The best control input is determined via online optimisation of the T-S fuzzy cost function over all the possible control input sequences. This paper implements the proposed model predictive control algorithm in real time on a Texas Instruments TMS320F28335 floating-point Digital Signal Processor (DSP). Some experimental results are given to illustrate the practicality and effectiveness of the proposed control system under several operating conditions. The results verify that our method can yield not only good transient and steady-state responses (fast recovery time, small overshoot, zero steady-state error, etc.) but also insensitivity to abrupt load or input voltage parameter variations.
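
    Schematically, the weight-adaptation idea can be sketched as follows; the membership function, gains, and one-step converter model are illustrative stand-ins for the paper's T-S fuzzy rules and experimental converter, not the published design:

      import numpy as np

      def membership(e, big=1.0):
          # degree to which |e| is "large"; a crude stand-in for the T-S rules
          return min(abs(e) / big, 1.0)

      def predict(i_L, v_C, s, Ts=1e-5, L=1e-3, C=1e-4, R=10.0, Vin=12.0):
          # one-step forward Euler model of a boost converter (placeholder dynamics)
          di = (Vin - (1 - s) * v_C) / L
          dv = ((1 - s) * i_L - v_C / R) / C
          return i_L + Ts * di, v_C + Ts * dv

      def choose_switch(i_L, v_C, i_ref, v_ref):
          w_i = 1.0 + 4.0 * membership(i_L - i_ref)   # inflate the weight on a large error
          w_v = 1.0 + 4.0 * membership(v_C - v_ref)
          costs = []
          for s in (0, 1):                            # finite control set: switch off/on
              i_n, v_n = predict(i_L, v_C, s)
              costs.append(w_i * abs(i_n - i_ref) + w_v * abs(v_n - v_ref))
          return int(np.argmin(costs))

      print(choose_switch(1.0, 20.0, 2.0, 24.0))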

  12. Reconfigurable pipelined processor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Saccardi, R.J.

    1989-09-19

    This patent describes a reconfigurable pipelined processor for processing data. It comprises: a plurality of memory devices for storing bits of data; a plurality of arithmetic units for performing arithmetic functions with the data; cross bar means for connecting the memory devices with the arithmetic units for transferring data therebetween; at least one counter connected with the cross bar means for providing a source of addresses to the memory devices; at least one variable tick delay device connected with each of the memory devices and arithmetic units; and means for providing control bits to the variable tick delay device for variably controlling the input and output operations thereof to selectively delay the memory devices and arithmetic units to align the data for processing in a selected sequence.

  13. Modeling and Analysis of CNC Milling Process Parameters on Al3030 based Composite

    NASA Astrophysics Data System (ADS)

    Gupta, Anand; Soni, P. K.; Krishna, C. M.

    2018-04-01

    The machining of Al3030-based composites on Computer Numerical Control (CNC) high-speed milling machines has assumed importance because of their wide application in the aerospace, marine and automotive industries. Industries mainly focus on surface irregularities, material removal rate (MRR) and tool wear rate (TWR), which usually depend on input process parameters, namely cutting speed, feed in mm/min, depth of cut and step-over ratio. Many researchers have carried out research in this area, but very few have also taken step-over ratio (radial depth of cut) as one of the input variables. In this research work, the characteristics of Al3030 are studied on a high-speed CNC milling machine over the speed range of 3000 to 5000 r.p.m. Step-over ratio, depth of cut and feed rate are the other input variables taken into consideration. A total of nine experiments are conducted according to a Taguchi L9 orthogonal array. The machining is carried out on a high-speed CNC milling machine using a flat end mill of diameter 10 mm. Flatness, MRR and TWR are taken as output parameters. Flatness has been measured using a portable Coordinate Measuring Machine (CMM). Linear regression models have been developed using Minitab 18 software and the results are validated by conducting a selected additional set of experiments. Selection of input process parameters in order to get the best machining outputs is the key contribution of this research work.

  14. Analysis on electronic control unit of continuously variable transmission

    NASA Astrophysics Data System (ADS)

    Cao, Shuanggui

    A continuously variable transmission (CVT) can ensure that the engine works along the line of best fuel economy, improving fuel economy, saving fuel and reducing harmful gas emissions. At the same time, a continuously variable transmission makes changes in vehicle speed smoother and improves ride comfort. Although CVT technology has developed considerably, many shortcomings remain: the CVT systems of ordinary vehicles still suffer from low efficiency, poor starting performance, low transmitted power, imperfect control and high cost, among other issues. Therefore, many scholars have begun to study new types of continuously variable transmission. A transmission governed by an electronic control system can achieve automatic control of power transmission and give full play to the characteristics of the engine to achieve optimal control of the powertrain, so that the vehicle always travels near its best operating condition. The electronic control unit is composed of a core processor, input and output circuit modules and other auxiliary circuit modules. The input module collects and processes the many signals sent by the sensors, such as throttle angle, brake signals, engine speed, the speeds of the transmission input and output shafts, manual shift signals, mode selection signals, gear position and speed ratio, and provides the corresponding processed signals to the controller core.

  15. Aircraft signal definition for flight safety system monitoring system

    NASA Technical Reports Server (NTRS)

    Gibbs, Michael (Inventor); Omen, Debi Van (Inventor)

    2003-01-01

    A system and method compares combinations of vehicle variable values against known combinations of potentially dangerous vehicle input signal values. Alarms and error messages are selectively generated based on such comparisons. An aircraft signal definition is provided to enable definition and monitoring of sets of aircraft input signals to customize such signals for different aircraft. The input signals are compared against known combinations of potentially dangerous values by operational software and hardware of a monitoring function. The aircraft signal definition is created using a text editor or custom application. A compiler receives the aircraft signal definition to generate a binary file that comprises the definition of all the input signals used by the monitoring function. The binary file also contains logic that specifies how the inputs are to be interpreted. The file is then loaded into the monitor function, where it is validated and used to continuously monitor the condition of the aircraft.

  16. Development of a neural-based forecasting tool to classify recreational water quality using fecal indicator organisms.

    PubMed

    Motamarri, Srinivas; Boccelli, Dominic L

    2012-09-15

    Users of recreational waters may be exposed to elevated pathogen levels through various point/non-point sources. Typical daily notifications rely on microbial analysis of indicator organisms (e.g., Escherichia coli) that require 18 or more hours to provide an adequate response. Modeling approaches, such as multivariate linear regression (MLR) and artificial neural networks (ANN), have been utilized to provide quick predictions of microbial concentrations for classification purposes, but generally suffer from high false negative rates. This study introduces the use of learning vector quantization (LVQ)--a direct classification approach--for comparison with MLR and ANN approaches and integrates input selection for model development with respect to primary and secondary water quality standards within the Charles River Basin (Massachusetts, USA) using meteorologic, hydrologic, and microbial explanatory variables. Integrating input selection into model development showed that discharge variables were the most important explanatory variables while antecedent rainfall and time since previous events were also important. With respect to classification, all three models adequately represented the non-violated samples (>90%). The MLR approach had the highest false negative rates associated with classifying violated samples (41-62% vs 13-43% (ANN) and <16% (LVQ)) when using five or more explanatory variables. The ANN performance was more similar to LVQ when a larger number of explanatory variables were utilized, but the ANN performance degraded toward MLR performance as explanatory variables were removed. Overall, the use of LVQ as a direct classifier provided the best overall classification ability with respect to violated/non-violated samples for both standards. Copyright © 2012 Elsevier Ltd. All rights reserved.
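
    For readers unfamiliar with LVQ as a direct classifier, a bare-bones LVQ1 sketch follows; the prototype count, learning rate, epochs and toy data are illustrative choices, not the study's tuned models:

      import numpy as np

      def train_lvq1(X, y, n_proto=2, lr=0.05, epochs=50, seed=0):
          rng = np.random.default_rng(seed)
          protos, labels = [], []
          for c in np.unique(y):
              idx = rng.choice(np.where(y == c)[0], n_proto, replace=False)
              protos.append(X[idx])
              labels += [c] * n_proto
          W, wl = np.vstack(protos).astype(float), np.array(labels)
          for _ in range(epochs):
              for i in rng.permutation(len(X)):
                  k = int(np.argmin(np.linalg.norm(W - X[i], axis=1)))
                  step = lr if wl[k] == y[i] else -lr   # attract same class, repel others
                  W[k] += step * (X[i] - W[k])
          return W, wl

      def predict_lvq(W, wl, X):
          # each sample takes the label of its nearest prototype
          return wl[np.argmin(np.linalg.norm(W[None] - X[:, None], axis=2), axis=1)]

      rng = np.random.default_rng(1)
      X = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(3, 1, (100, 2))])
      y = np.array([0] * 100 + [1] * 100)
      W, wl = train_lvq1(X, y)
      print("training accuracy:", (predict_lvq(W, wl, X) == y).mean())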

  17. Computational methods in the development of a knowledge-based system for the prediction of solid catalyst performance.

    PubMed

    Procelewska, Joanna; Galilea, Javier Llamas; Clerc, Frederic; Farrusseng, David; Schüth, Ferdi

    2007-01-01

    The objective of this work is the construction of a correlation between characteristics of heterogeneous catalysts, encoded in a descriptor vector, and their experimentally measured performances in the propene oxidation reaction. In this paper the key issue in the modeling process, namely the selection of adequate input variables, is explored. Several data-driven feature selection strategies were applied in order to obtain an estimate of the differences in variance and information content of various attributes, and furthermore to compare their relative importance. Quantitative property-activity relationship techniques using probabilistic neural networks have been used for the creation of various semi-empirical models. Finally, a robust classification model was obtained, assigning selected attributes of solid compounds as input to an appropriate performance class in the model reaction. It became evident that mathematical support for the primary attribute set proposed by chemists can be highly desirable.

  18. Selection of key ambient particulate variables for epidemiological studies - applying cluster and heatmap analyses as tools for data reduction.

    PubMed

    Gu, Jianwei; Pitz, Mike; Breitner, Susanne; Birmili, Wolfram; von Klot, Stephanie; Schneider, Alexandra; Soentgen, Jens; Reller, Armin; Peters, Annette; Cyrys, Josef

    2012-10-01

    The success of epidemiological studies depends on the use of appropriate exposure variables. The purpose of this study is to extract a relatively small selection of variables characterizing ambient particulate matter from a large measurement data set. The original data set comprised a total of 96 particulate matter variables that have been continuously measured since 2004 at an urban background aerosol monitoring site in the city of Augsburg, Germany. Many of the original variables were derived from measured particle size distribution (PSD) across the particle diameter range 3 nm to 10 μm, including size-segregated particle number concentration, particle length concentration, particle surface concentration and particle mass concentration. The data set was complemented by integral aerosol variables. These variables were measured by independent instruments, including black carbon, sulfate, particle active surface concentration and particle length concentration. It is obvious that such a large number of measured variables cannot be used in health effect analyses simultaneously. The aim of this study is a pre-screening and a selection of the key variables that will be used as input in forthcoming epidemiological studies. In this study, we present two methods of parameter selection and apply them to data from a two-year period from 2007 to 2008. We used the agglomerative hierarchical cluster method to find groups of similar variables. In total, we selected 15 key variables from 9 clusters which are recommended for epidemiological analyses. We also applied a two-dimensional visualization technique called "heatmap" analysis to the Spearman correlation matrix. 12 key variables were selected using this method. Moreover, the positive matrix factorization (PMF) method was applied to the PSD data to characterize the possible particle sources. Correlations between the variables and PMF factors were used to interpret the meaning of the cluster and the heatmap analyses. Copyright © 2012 Elsevier B.V. All rights reserved.
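
    The reduction strategy condenses to the following sketch (the linkage method, distance threshold and the choice of representative are assumptions made for illustration): hierarchically cluster the variables on Spearman-correlation distance and keep one representative per cluster:

      import numpy as np
      from scipy.cluster.hierarchy import fcluster, linkage
      from scipy.stats import spearmanr

      def select_representatives(X, names, threshold=0.4):
          rho, _ = spearmanr(X)                     # variables are the columns of X
          dist = 1.0 - np.abs(rho)
          iu = np.triu_indices_from(dist, k=1)      # condensed distance vector for linkage
          Z = linkage(dist[iu], method="average")
          groups = fcluster(Z, t=threshold, criterion="distance")
          keep = []
          for g in np.unique(groups):
              members = np.where(groups == g)[0]
              # representative: highest mean |rho| with the rest of its cluster
              strength = np.abs(rho[np.ix_(members, members)]).mean(axis=1)
              keep.append(names[members[np.argmax(strength)]])
          return keep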

  19. Variable input observer for structural health monitoring of high-rate systems

    NASA Astrophysics Data System (ADS)

    Hong, Jonathan; Laflamme, Simon; Cao, Liang; Dodson, Jacob

    2017-02-01

    The development of high-rate structural health monitoring methods is intended to provide damage detection on timescales of 10 µs to 10 ms, where speed of detection is critical to maintain structural integrity. Here, a novel Variable Input Observer (VIO) coupled with an adaptive observer is proposed as a potential solution for complex high-rate problems. The VIO is designed to adapt its input space based on real-time identification of the system's essential dynamics. By selecting appropriate time-delayed coordinates defined by both a time delay and an embedding dimension, the proper input space is chosen, which allows more accurate estimation of the current state and a reduction of the convergence time. The optimal time delay is estimated based on mutual information, and the embedding dimension is based on false nearest neighbors. A simulation of the VIO is conducted on a two-degree-of-freedom system with simulated damage. Results are compared with an adaptive Luenberger observer, a fixed time-delay observer, and a Kalman filter. Under its preliminary design, the VIO converges significantly faster than the Luenberger and fixed time-delay observers. It performed similarly to the Kalman filter in terms of convergence, but with greater accuracy.
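
    The two embedding choices the VIO adapts online, the delay from the first minimum of time-lagged mutual information and the dimension from a false-nearest-neighbor count, can be sketched as below; bin counts, tolerances and the test signal are illustrative, not the paper's settings:

      import numpy as np

      def lagged_mi(x, lag, bins=16):
          h, _, _ = np.histogram2d(x[:-lag], x[lag:], bins=bins)
          p = h / h.sum()
          px, py = p.sum(1, keepdims=True), p.sum(0, keepdims=True)
          nz = p > 0
          return float((p[nz] * np.log(p[nz] / (px @ py)[nz])).sum())

      def first_mi_minimum(x, max_lag=50):
          mi = [lagged_mi(x, lag) for lag in range(1, max_lag)]
          for i in range(1, len(mi) - 1):
              if mi[i] < mi[i - 1] and mi[i] < mi[i + 1]:
                  return i + 1                      # lags start at 1
          return max_lag

      def embed(x, m, tau):
          n = len(x) - (m - 1) * tau
          return np.column_stack([x[i * tau:i * tau + n] for i in range(m)])

      def fnn_fraction(x, m, tau, rtol=10.0):
          E, E1 = embed(x, m, tau), embed(x, m + 1, tau)
          E = E[:len(E1)]
          false = 0
          for i in range(len(E)):
              d = np.linalg.norm(E - E[i], axis=1)
              d[i] = np.inf
              j = int(np.argmin(d))                 # nearest neighbour in m dimensions
              if abs(E1[i, -1] - E1[j, -1]) > rtol * d[j]:
                  false += 1                        # neighbour separates in m+1 dimensions
          return false / len(E)

      x = np.sin(0.2 * np.arange(2000)) + 0.05 * np.random.default_rng(2).normal(size=2000)
      tau = first_mi_minimum(x)
      m = next(m for m in range(1, 8) if fnn_fraction(x, m, tau) < 0.01)
      print("tau =", tau, "embedding dimension =", m)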

  20. Preparation for implementation of the mechanistic-empirical pavement design guide in Michigan : part 2 - evaluation of rehabilitation fixes (part 1).

    DOT National Transportation Integrated Search

    2013-08-01

    The main objectives of Task 2 of the project were to determine the impact of various input variables on the predicted pavement performance for the selected rehabilitation design alternatives in the MEPDG/DARWin-ME, and to verify the pavement performa...

  1. Variable frequency microprocessor clock generator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Branson, C.N.

    A microprocessor-based system is described comprising: a digital central microprocessor provided with a clock input and having a rate of operation determined by the frequency of a clock signal input thereto; memory means operably coupled to the central microprocessor for storing programs respectively including a plurality of instructions and addressable by the central microprocessor; a first peripheral device operably connected to the central microprocessor, the first peripheral device being addressable by the central microprocessor for control thereby; a system clock generator for generating a digital reference clock signal having a reference frequency rate; and frequency rate reduction circuit means connected between the clock generator and the clock input of the central microprocessor for selectively dividing the reference clock signal to generate a microprocessor clock signal as an input to the central microprocessor for clocking the central microprocessor.

  2. Mixed oxidizer hybrid propulsion system optimization under uncertainty using applied response surface methodology and Monte Carlo simulation

    NASA Astrophysics Data System (ADS)

    Whitehead, James Joshua

    The analysis documented herein provides an integrated approach for the conduct of optimization under uncertainty (OUU) using Monte Carlo Simulation (MCS) techniques coupled with response surface-based methods for characterization of mixture-dependent variables. This novel methodology provides an innovative means of conducting optimization studies under uncertainty in propulsion system design. Analytic inputs are based upon empirical regression rate information obtained from design of experiments (DOE) mixture studies utilizing a mixed oxidizer hybrid rocket concept. Hybrid fuel regression rate was selected as the target response variable for optimization under uncertainty, with maximization of regression rate chosen as the driving objective. Characteristic operational conditions and propellant mixture compositions from experimental efforts conducted during previous foundational work were combined with elemental uncertainty estimates as input variables. Response surfaces for mixture-dependent variables and their associated uncertainty levels were developed using quadratic response equations incorporating single and two-factor interactions. These analysis inputs, response surface equations and associated uncertainty contributions were applied to a probabilistic MCS to develop dispersed regression rates as a function of operational and mixture input conditions within design space. Illustrative case scenarios were developed and assessed using this analytic approach including fully and partially constrained operational condition sets over all of design mixture space. In addition, optimization sets were performed across an operationally representative region in operational space and across all investigated mixture combinations. These scenarios were selected as representative examples relevant to propulsion system optimization, particularly for hybrid and solid rocket platforms. Ternary diagrams, including contour and surface plots, were developed and utilized to aid in visualization. The concept of Expanded-Durov diagrams was also adopted and adapted to this study to aid in visualization of uncertainty bounds. Regions of maximum regression rate and associated uncertainties were determined for each set of case scenarios. Application of response surface methodology coupled with probabilistic-based MCS allowed for flexible and comprehensive interrogation of mixture and operating design space during optimization cases. Analyses were also conducted to assess sensitivity of uncertainty to variations in key elemental uncertainty estimates. The methodology developed during this research provides an innovative optimization tool for future propulsion design efforts.
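
    The coupling of response surfaces with Monte Carlo sampling reduces to a pattern like the one below; the quadratic coefficients, uncertainty level and grid are invented for illustration, since the real surfaces came from the DOE mixture studies:

      import numpy as np

      rng = np.random.default_rng(7)

      def regression_rate(x1, x2, flux):
          # quadratic response surface with single and two-factor interaction
          # terms; all coefficients here are invented for illustration
          return 0.8 + 0.3 * x1 + 0.2 * x2 + 0.15 * flux - 0.1 * x1 * x2 - 0.05 * x1 ** 2

      def dispersed_rates(x1, x2, flux, rel_sigma=0.06, n=5000):
          # sample multiplicative uncertainty around the surface's nominal value
          return regression_rate(x1, x2, flux) * (1.0 + rel_sigma * rng.normal(size=n))

      # sweep mixture fractions (x1 + x2 <= 1) at fixed flux and maximize a
      # conservative (5th-percentile) regression rate
      best = max(
          ((x1, x2, np.percentile(dispersed_rates(x1, x2, 1.0), 5))
           for x1 in np.linspace(0, 1, 21) for x2 in np.linspace(0, 1, 21)
           if x1 + x2 <= 1.0),
          key=lambda t: t[2],
      )
      print("best 5th-percentile rate %.3f at x1=%.2f, x2=%.2f" % (best[2], best[0], best[1]))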

  3. Radiograph and passive data analysis using mixed variable optimization

    DOEpatents

    Temple, Brian A.; Armstrong, Jerawan C.; Buescher, Kevin L.; Favorite, Jeffrey A.

    2015-06-02

    Disclosed herein are representative embodiments of methods, apparatus, and systems for performing radiography analysis. For example, certain embodiments perform radiographic analysis using mixed variable computation techniques. One exemplary system comprises a radiation source, a two-dimensional detector for detecting radiation transmitted through an object between the radiation source and detector, and a computer. In this embodiment, the computer is configured to input the radiographic image data from the two-dimensional detector and to determine one or more materials that form the object by using an iterative analysis technique that selects the one or more materials from hierarchically arranged solution spaces of discrete material possibilities and selects the layer interfaces from the optimization of the continuous interface data.

  4. SAMPLING OSCILLOSCOPE

    DOEpatents

    Sugarman, R.M.

    1960-08-30

    An oscilloscope is designed for displaying transient signal waveforms having random time and amplitude distributions. The oscilloscope is a sampling device that selects for display a portion of only those waveforms having a particular range of amplitudes. For this purpose a pulse-height analyzer is provided to screen the pulses. A variable voltage-level shifter and a time-scale ramp-voltage generator take the pulse height relative to the start of the waveform. The variable voltage shifter produces a voltage level raised one step for each sequential signal waveform to be sampled, and this results in an unsmeared record of input signal waveforms. Appropriate delay devices permit each sample waveform to pass its peak amplitude before the circuit selects it for display.

  5. Influence of variable selection on partial least squares discriminant analysis models for explosive residue classification

    NASA Astrophysics Data System (ADS)

    De Lucia, Frank C., Jr.; Gottfried, Jennifer L.

    2011-02-01

    Using a series of thirteen organic materials that includes novel high-nitrogen energetic materials, conventional organic military explosives, and benign organic materials, we have demonstrated the importance of variable selection for maximizing residue discrimination with partial least squares discriminant analysis (PLS-DA). We built several PLS-DA models using different variable sets based on laser induced breakdown spectroscopy (LIBS) spectra of the organic residues on an aluminum substrate under an argon atmosphere. The model classification results for each sample are presented and the influence of the variables on these results is discussed. We found that using the whole spectra as the data input for the PLS-DA model gave the best results. However, variables due to the surrounding atmosphere and the substrate contribute to discrimination when the whole spectra are used, indicating this may not be the most robust model. Further iterative testing with additional validation data sets is necessary to determine the most robust model.

  6. Computational Intelligence Modeling of the Macromolecules Release from PLGA Microspheres-Focus on Feature Selection.

    PubMed

    Zawbaa, Hossam M; Szlȩk, Jakub; Grosan, Crina; Jachowicz, Renata; Mendyk, Aleksander

    2016-01-01

    Poly-lactide-co-glycolide (PLGA) is a copolymer of lactic and glycolic acid. Drug release from PLGA microspheres depends not only on polymer properties but also on drug type, particle size, morphology of microspheres, release conditions, etc. Selecting a subset of relevant properties for PLGA is a challenging machine learning task, as there are over three hundred features to consider. In this work, we formulate the selection of critical attributes for PLGA as a multiobjective optimization problem with the aim of minimizing the error of predicting the dissolution profile while reducing the number of attributes selected. Four bio-inspired optimization algorithms (antlion optimization, a binary version of antlion optimization, grey wolf optimization, and social spider optimization) are used to select the optimal feature set for predicting the dissolution profile of PLGA. Besides these, the LASSO algorithm is also used for comparison. Selection of crucial variables is performed under the assumption that both predictability and model simplicity are of equal importance to the final result. During the feature selection process, a set of input variables is employed to find the minimum generalization error across different predictive models and their settings/architectures. The methodology is evaluated using predictive modeling for which various tools are chosen, such as Cubist, random forests, artificial neural networks (monotonic MLP, deep learning MLP), multivariate adaptive regression splines, classification and regression trees, and hybrid systems of fuzzy logic and evolutionary computations (fugeR). The experimental results are compared with the results reported by Szlȩk. We obtain a normalized root mean square error (NRMSE) of 15.97% versus 15.4%, and the number of selected input features is smaller, nine versus eleven.

  7. Computational Intelligence Modeling of the Macromolecules Release from PLGA Microspheres—Focus on Feature Selection

    PubMed Central

    Zawbaa, Hossam M.; Szlȩk, Jakub; Grosan, Crina; Jachowicz, Renata; Mendyk, Aleksander

    2016-01-01

    Poly-lactide-co-glycolide (PLGA) is a copolymer of lactic and glycolic acid. Drug release from PLGA microspheres depends not only on polymer properties but also on drug type, particle size, morphology of microspheres, release conditions, etc. Selecting a subset of relevant properties for PLGA is a challenging machine learning task, as there are over three hundred features to consider. In this work, we formulate the selection of critical attributes for PLGA as a multiobjective optimization problem with the aim of minimizing the error of predicting the dissolution profile while reducing the number of attributes selected. Four bio-inspired optimization algorithms (antlion optimization, a binary version of antlion optimization, grey wolf optimization, and social spider optimization) are used to select the optimal feature set for predicting the dissolution profile of PLGA. Besides these, the LASSO algorithm is also used for comparison. Selection of crucial variables is performed under the assumption that both predictability and model simplicity are of equal importance to the final result. During the feature selection process, a set of input variables is employed to find the minimum generalization error across different predictive models and their settings/architectures. The methodology is evaluated using predictive modeling for which various tools are chosen, such as Cubist, random forests, artificial neural networks (monotonic MLP, deep learning MLP), multivariate adaptive regression splines, classification and regression trees, and hybrid systems of fuzzy logic and evolutionary computations (fugeR). The experimental results are compared with the results reported by Szlȩk. We obtain a normalized root mean square error (NRMSE) of 15.97% versus 15.4%, and the number of selected input features is smaller, nine versus eleven. PMID:27315205

  8. Linked population and economic models: some methodological issues in forecasting, analysis, and policy optimization.

    PubMed

    Madden, M; Batey, P W J

    1983-05-01

    Some problems associated with demographic-economic forecasting include finding models appropriate for a declining economy with unemployment, using a multiregional approach in an interregional model, finding a way to show differential consumption while endogenizing unemployment, and avoiding unemployment inconsistencies. The solution to these problems involves the construction of an activity-commodity framework, locating it within a group of forecasting models, and indicating possible routes towards dynamization of the framework. The authors demonstrate the range of impact multipliers that can be derived from the framework and show how these multipliers relate to Leontief input-output multipliers. It is shown that a desired population distribution may be obtained by selecting instruments from the economic sphere to produce, through the constraints vector of an activity-commodity framework, targets selected from demographic activities. The next step in this process, empirical exploitation, was carried out by the authors in the United Kingdom, linking an input-output model with a wide selection of demographic and demographic-economic variables. The generally tenuous control which government has over any variables in systems of this type, especially in market economies, makes application in the policy field of the optimization approach a partly conjectural exercise, although the analytic capacity of the approach can provide clear indications of policy directions.

  9. Statistical model selection for better prediction and discovering science mechanisms that affect reliability

    DOE PAGES

    Anderson-Cook, Christine M.; Morzinski, Jerome; Blecker, Kenneth D.

    2015-08-19

    Understanding the impact of production, environmental exposure and age characteristics on the reliability of a population is frequently based on underlying science and empirical assessment. When there is incomplete science to prescribe which inputs should be included in a model of reliability to predict future trends, statistical model/variable selection techniques can be leveraged on a stockpile or population of units to improve reliability predictions as well as suggest new mechanisms affecting reliability to explore. We describe a five-step process for exploring relationships between available summaries of age, usage and environmental exposure and reliability. The process involves first identifying potential candidate inputs, then second organizing data for the analysis. Third, a variety of models with different combinations of the inputs are estimated, and fourth, flexible metrics are used to compare them. Fifth, plots of the predicted relationships are examined to distill leading model contenders into a prioritized list for subject matter experts to understand and compare. The complexity of the model, quality of prediction and cost of future data collection are all factors to be considered by the subject matter experts when selecting a final model.

  10. When Can Information from Ordinal Scale Variables Be Integrated?

    ERIC Educational Resources Information Center

    Kemp, Simon; Grace, Randolph C.

    2010-01-01

    Many theoretical constructs of interest to psychologists are multidimensional and derive from the integration of several input variables. We show that input variables that are measured on ordinal scales cannot be combined to produce a stable weakly ordered output variable that allows trading off the input variables. Instead a partial order is…

  11. Development of an automated energy audit protocol for office buildings

    NASA Astrophysics Data System (ADS)

    Deb, Chirag

    This study aims to enhance the building energy audit process and bring about a reduction in the time and cost required to conduct a full physical audit. For this, a total of 5 Energy Service Companies in Singapore have collaborated and provided energy audit reports for 62 office buildings. Several statistical techniques are adopted to analyse these reports, comprising cluster analysis and the development of prediction models to predict energy savings for buildings. The cluster analysis shows that there are 3 clusters of buildings experiencing different levels of energy savings. To understand the effect of building variables on the change in EUI (energy use intensity), a robust iterative process for selecting the appropriate variables is developed. The results show that the 4 variables of GFA, non-air-conditioning energy consumption, average chiller plant efficiency and installed capacity of chillers should be taken for clustering. This analysis is extended to the development of prediction models using linear regression and artificial neural networks (ANN). An exhaustive variable selection algorithm is developed to select the input variables for the two energy-saving prediction models. The results show that the ANN prediction model can predict the energy saving potential of a given building with an accuracy of +/-14.8%.
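
    With a candidate pool this small (four variables), an exhaustive search is cheap. A sketch of the idea follows, with cross-validated linear regression as the scoring model (an assumption for brevity; the study also pairs the selection with ANN models), and with hypothetical variable names in the commented call:

      import itertools
      import numpy as np
      from sklearn.linear_model import LinearRegression
      from sklearn.model_selection import cross_val_score

      def best_subset(X, y, names, cv=5):
          best_cols, best_score = None, -np.inf
          for r in range(1, len(names) + 1):               # every subset size
              for cols in itertools.combinations(range(len(names)), r):
                  score = cross_val_score(LinearRegression(), X[:, cols], y, cv=cv).mean()
                  if score > best_score:
                      best_cols, best_score = cols, score
          return [names[c] for c in best_cols], best_score

      # hypothetical call mirroring the four variables named above:
      # subset, r2 = best_subset(X, delta_eui,
      #     ["GFA", "non_ac_energy", "chiller_eff", "chiller_capacity"])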

  12. C-fuzzy variable-branch decision tree with storage and classification error rate constraints

    NASA Astrophysics Data System (ADS)

    Yang, Shiueng-Bien

    2009-10-01

    The C-fuzzy decision tree (CFDT), which is based on the fuzzy C-means algorithm, has recently been proposed. The CFDT is grown by selecting the nodes to be split according to its classification error rate. However, the CFDT design does not consider the time taken to classify the input vector. Thus, the CFDT can be improved. We propose a new C-fuzzy variable-branch decision tree (CFVBDT) with storage and classification error rate constraints. The design of the CFVBDT consists of two phases: growing and pruning. The CFVBDT is grown by selecting the nodes to be split according to the classification error rate and the classification time in the decision tree. Additionally, the pruning method selects the nodes to prune based on the storage requirement and the classification time of the CFVBDT. Furthermore, the number of branches of each internal node is variable in the CFVBDT. Experimental results indicate that the proposed CFVBDT outperforms the CFDT and other methods.

  13. Estimation and classification by sigmoids based on mutual information

    NASA Technical Reports Server (NTRS)

    Baram, Yoram

    1994-01-01

    An estimate of the probability density function of a random vector is obtained by maximizing the mutual information between the input and the output of a feedforward network of sigmoidal units with respect to the input weights. Classification problems can be solved by selecting the class associated with the maximal estimated density. Newton's method, applied to an estimated density, yields a recursive maximum likelihood estimator, consisting of a single internal layer of sigmoids, for a random variable or a random sequence. Applications to diamond classification and to the prediction of a sunspot process are demonstrated.
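
    A sketch of the decision rule "select the class with the maximal estimated density"; here a Gaussian kernel density estimate stands in for the paper's sigmoid-network density estimator, so only the rule is illustrated, not the original model. The data are synthetic.

    ```python
    import numpy as np
    from scipy.stats import gaussian_kde

    rng = np.random.default_rng(2)
    class_a = rng.normal(loc=0.0, scale=1.0, size=200)
    class_b = rng.normal(loc=3.0, scale=1.0, size=200)

    # One class-conditional density estimate per class.
    kde_a, kde_b = gaussian_kde(class_a), gaussian_kde(class_b)

    def classify(x):
        # Evaluate each estimated density at x and pick the larger one.
        return "A" if kde_a(x)[0] > kde_b(x)[0] else "B"

    print(classify(0.4), classify(2.7))  # expected: A B
    ```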

  14. Probabilistic dose-response modeling: case study using dichloromethane PBPK model results.

    PubMed

    Marino, Dale J; Starr, Thomas B

    2007-12-01

    A revised assessment of dichloromethane (DCM) has recently been reported that examines the influence of human genetic polymorphisms on cancer risks using deterministic PBPK and dose-response modeling in the mouse combined with probabilistic PBPK modeling in humans. This assessment utilized Bayesian techniques to optimize kinetic variables in mice and humans with mean values from posterior distributions used in the deterministic modeling in the mouse. To supplement this research, a case study was undertaken to examine the potential impact of probabilistic rather than deterministic PBPK and dose-response modeling in mice on subsequent unit risk factor (URF) determinations. Four separate PBPK cases were examined based on the exposure regimen of the NTP DCM bioassay. These were (a) Same Mouse (single draw of all PBPK inputs for both treatment groups); (b) Correlated BW-Same Inputs (single draw of all PBPK inputs for both treatment groups except for bodyweights (BWs), which were entered as correlated variables); (c) Correlated BW-Different Inputs (separate draws of all PBPK inputs for both treatment groups except that BWs were entered as correlated variables); and (d) Different Mouse (separate draws of all PBPK inputs for both treatment groups). Monte Carlo PBPK inputs reflect posterior distributions from Bayesian calibration in the mouse that had been previously reported. A minimum of 12,500 PBPK iterations were undertaken, in which dose metrics, i.e., mg DCM metabolized by the GST pathway/L tissue/day for lung and liver, were determined. For dose-response modeling, these metrics were combined with NTP tumor incidence data that were randomly selected from binomial distributions. Resultant potency factors (0.1/ED(10)) were coupled with probabilistic PBPK modeling in humans that incorporated genetic polymorphisms to derive URFs. Results show that there was relatively little difference (i.e., <10%) in central tendency and upper percentile URFs, regardless of the case evaluated. Independent draws of PBPK inputs resulted in slightly higher URFs. Results were also comparable to corresponding values from the previously reported deterministic mouse PBPK and dose-response modeling approach that used LED(10)s to derive potency factors. This finding indicated that the adjustment from ED(10) to LED(10) in the deterministic approach for DCM compensated for variability resulting from probabilistic PBPK and dose-response modeling in the mouse. Finally, results show a similar degree of variability in DCM risk estimates from a number of different sources including the current effort, even though these estimates were developed using very different techniques. Given the variety of different approaches involved, 95th percentile-to-mean risk estimate ratios of 2.1-4.1 represent reasonable bounds on variability estimates regarding probabilistic assessments of DCM.
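
    A sketch of cases (b)/(c): body weights for the two treatment groups are drawn as correlated variables while another input is drawn independently. All distribution parameters below are invented for illustration; only the sampling pattern follows the abstract.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    n = 12500                      # matches the minimum iteration count reported

    # Correlated body weights (kg) for the two treatment groups.
    mean_bw = [30.0, 30.0]
    cov_bw = 4.0 * np.array([[1.0, 0.9],
                             [0.9, 1.0]])    # correlation 0.9, variance 4 kg^2
    bw = rng.multivariate_normal(mean_bw, cov_bw, size=n)

    # Independent draw of another PBPK input (e.g., a metabolic rate constant).
    vmax = rng.lognormal(mean=0.0, sigma=0.25, size=n)

    # Toy dose metric per group, to show how draws propagate to an output
    # distribution whose upper-percentile-to-mean ratio can be inspected.
    dose_metric = vmax[:, None] / bw
    ratio = np.percentile(dose_metric, 95) / dose_metric.mean()
    print(f"95th percentile-to-mean ratio: {ratio:.2f}")
    ```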

  15. Constructing general partial differential equations using polynomial and neural networks.

    PubMed

    Zjavka, Ladislav; Pedrycz, Witold

    2016-01-01

    Sum fraction terms can approximate multi-variable functions on the basis of discrete observations, replacing a partial differential equation definition with polynomial elementary data relation descriptions. Artificial neural networks commonly transform the weighted sum of inputs to describe overall similarity relationships of trained and new testing input patterns. Differential polynomial neural networks form a new class of neural networks, which construct and solve an unknown general partial differential equation of a function of interest with selected substitution relative terms using non-linear multi-variable composite polynomials. The layers of the network generate simple and composite relative substitution terms whose convergent series combinations can describe partial dependent derivative changes of the input variables. This regression is based on trained generalized partial derivative data relations, decomposed into a multi-layer polynomial network structure. The sigmoidal function, commonly used as a nonlinear activation of artificial neurons, may transform some polynomial terms, together with their parameters, with the aim of improving the ability of the polynomial derivative-term series to approximate complicated periodic functions, since simple low-order polynomials cannot fully capture complete cycles. The similarity analysis facilitates substitutions for differential equations or can form dimensional units from data samples to describe real-world problems. Copyright © 2015 Elsevier Ltd. All rights reserved.

  16. PM(10) emission forecasting using artificial neural networks and genetic algorithm input variable optimization.

    PubMed

    Antanasijević, Davor Z; Pocajt, Viktor V; Povrenović, Dragan S; Ristić, Mirjana Đ; Perić-Grujić, Aleksandra A

    2013-01-15

    This paper describes the development of an artificial neural network (ANN) model for the forecasting of annual PM(10) emissions at the national level, using widely available sustainability and economical/industrial parameters as inputs. The inputs for the model were selected and optimized using a genetic algorithm and the ANN was trained using the following variables: gross domestic product, gross inland energy consumption, incineration of wood, motorization rate, production of paper and paperboard, sawn wood production, production of refined copper, production of aluminum, production of pig iron and production of crude steel. The wide availability of the input parameters used in this model can overcome a lack of data and basic environmental indicators in many countries, which can prevent or seriously impede PM emission forecasting. The model was trained and validated with the data for 26 EU countries for the period from 1999 to 2006. PM(10) emission data, collected through the Convention on Long-range Transboundary Air Pollution - CLRTAP and the EMEP Programme or as emission estimations by the Regional Air Pollution Information and Simulation (RAINS) model, were obtained from Eurostat. The ANN model has shown very good performance and demonstrated that the forecast of PM(10) emission up to two years can be made successfully and accurately. The mean absolute error for two-year PM(10) emission prediction was only 10%, which is more than three times better than the predictions obtained from the conventional multi-linear regression and principal component regression models that were trained and tested using the same datasets and input variables. Copyright © 2012 Elsevier B.V. All rights reserved.
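
    A bare-bones sketch of genetic-algorithm input selection in the spirit of the paper: candidate input sets are encoded as bitmasks and evolved by truncation selection and mutation only. A cross-validated linear model stands in for the ANN to keep the example fast; the data and all GA settings are illustrative assumptions.

    ```python
    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(4)
    n_vars = 10                       # e.g., GDP, energy use, wood incineration, ...
    X = rng.normal(size=(150, n_vars))
    y = X[:, 0] - 2.0 * X[:, 3] + rng.normal(scale=0.5, size=150)

    def fitness(mask):
        # Score an input subset by mean cross-validated R^2.
        if not mask.any():
            return -np.inf
        return cross_val_score(LinearRegression(), X[:, mask], y, cv=3).mean()

    pop = rng.integers(0, 2, size=(20, n_vars)).astype(bool)
    for _ in range(30):               # generations
        scores = np.array([fitness(m) for m in pop])
        parents = pop[np.argsort(scores)[-10:]]        # keep the best half
        children = parents[rng.integers(0, 10, size=10)].copy()
        flip = rng.random(children.shape) < 0.1        # bit-flip mutation
        children[flip] = ~children[flip]
        pop = np.vstack([parents, children])

    best = pop[np.argmax([fitness(m) for m in pop])]
    print("selected inputs:", np.flatnonzero(best))
    ```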

  17. Interfacing sensory input with motor output: does the control architecture converge to a serial process along a single channel?

    PubMed Central

    van de Kamp, Cornelis; Gawthrop, Peter J.; Gollee, Henrik; Lakie, Martin; Loram, Ian D.

    2013-01-01

    Modular organization in control architecture may underlie the versatility of human motor control, but the nature of the interface relating sensory input through task-selection in the space of performance variables to control actions in the space of the elemental variables is currently unknown. Our central question is whether the control architecture converges to a serial process along a single channel. In discrete reaction time experiments, psychologists have firmly associated a serial single channel hypothesis with refractoriness and response selection [psychological refractory period (PRP)]. Recently, we developed a methodology and evidence identifying refractoriness in sustained control of an external single degree-of-freedom system. We hypothesize that multi-segmental whole-body control also shows refractoriness. Eight participants controlled their whole body to ensure a head marker tracked a target as fast and accurately as possible. Analysis showed enhanced delays in response to stimuli with close temporal proximity to the preceding stimulus. Consistent with our preceding work, this evidence is incompatible with control as a linear time invariant process. This evidence is consistent with a single-channel serial ballistic process within the intermittent control paradigm with an intermittent interval of around 0.5 s. A control architecture reproducing intentional human movement control must reproduce refractoriness. Intermittent control is designed to provide computational time for an online optimization process and is appropriate for flexible adaptive control. For human motor control we suggest that parallel sensory input converges to a serial, single channel process involving planning, selection, and temporal inhibition of alternative responses prior to low dimensional motor output. Such a design could help robots reproduce the flexibility of human control. PMID:23675342

  18. Artificial neural networks modelling the prednisolone nanoprecipitation in microfluidic reactors.

    PubMed

    Ali, Hany S M; Blagden, Nicholas; York, Peter; Amani, Amir; Brook, Toni

    2009-06-28

    This study employs artificial neural networks (ANNs) to create a model to identify relationships between variables affecting drug nanoprecipitation using microfluidic reactors. The input variables examined were saturation levels of prednisolone, solvent and antisolvent flow rates, microreactor inlet angles and internal diameters, while particle size was the single output. ANN software was used to analyse a set of data obtained by random selection of the variables. The developed model was then assessed using a separate set of validation data and provided good agreement with the observed results. The antisolvent flow rate was found to have the dominant role in determining final particle size.

  19. Integrate-and-fire vs Poisson models of LGN input to V1 cortex: noisier inputs reduce orientation selectivity

    PubMed Central

    Lin, I-Chun; Xing, Dajun; Shapley, Robert

    2014-01-01

    One of the reasons the visual cortex has attracted the interest of computational neuroscience is that it has well-defined inputs. The lateral geniculate nucleus (LGN) of the thalamus is the source of visual signals to the primary visual cortex (V1). Most large-scale cortical network models approximate the spike trains of LGN neurons as simple Poisson point processes. However, many studies have shown that neurons in the early visual pathway are capable of spiking with high temporal precision and their discharges are not Poisson-like. To gain an understanding of how response variability in the LGN influences the behavior of V1, we study response properties of model V1 neurons that receive purely feedforward inputs from LGN cells modeled either as noisy leaky integrate-and-fire (NLIF) neurons or as inhomogeneous Poisson processes. We first demonstrate that the NLIF model is capable of reproducing many experimentally observed statistical properties of LGN neurons. Then we show that a V1 model in which the LGN input to a V1 neuron is modeled as a group of NLIF neurons produces higher orientation selectivity than the one with Poisson LGN input. The second result implies that statistical characteristics of LGN spike trains are important for V1's function. We conclude that physiologically motivated models of V1 need to include more realistic LGN spike trains that are less noisy than inhomogeneous Poisson processes. PMID:22684587
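
    A minimal Euler-scheme sketch of the noisy leaky integrate-and-fire (NLIF) neuron class discussed above, producing a spike train whose variability depends on the noise amplitude. All parameter values are illustrative, not those of the study.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    dt, T = 1e-4, 1.0                  # 0.1 ms step, 1 s of simulated time
    tau, v_th, v_reset = 0.02, 1.0, 0.0
    drive, noise_sigma = 1.2, 0.5      # constant input and noise amplitude

    v, spikes = 0.0, []
    for step in range(int(T / dt)):
        # Leaky integration plus additive white noise.
        noise = noise_sigma * np.sqrt(dt) * rng.normal()
        v += dt * (-v + drive) / tau + noise
        if v >= v_th:                  # threshold crossing -> spike, then reset
            spikes.append(step * dt)
            v = v_reset

    print(f"{len(spikes)} spikes, mean rate {len(spikes) / T:.1f} Hz")
    ```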

  20. Integrate-and-fire vs Poisson models of LGN input to V1 cortex: noisier inputs reduce orientation selectivity.

    PubMed

    Lin, I-Chun; Xing, Dajun; Shapley, Robert

    2012-12-01

    One of the reasons the visual cortex has attracted the interest of computational neuroscience is that it has well-defined inputs. The lateral geniculate nucleus (LGN) of the thalamus is the source of visual signals to the primary visual cortex (V1). Most large-scale cortical network models approximate the spike trains of LGN neurons as simple Poisson point processes. However, many studies have shown that neurons in the early visual pathway are capable of spiking with high temporal precision and their discharges are not Poisson-like. To gain an understanding of how response variability in the LGN influences the behavior of V1, we study response properties of model V1 neurons that receive purely feedforward inputs from LGN cells modeled either as noisy leaky integrate-and-fire (NLIF) neurons or as inhomogeneous Poisson processes. We first demonstrate that the NLIF model is capable of reproducing many experimentally observed statistical properties of LGN neurons. Then we show that a V1 model in which the LGN input to a V1 neuron is modeled as a group of NLIF neurons produces higher orientation selectivity than the one with Poisson LGN input. The second result implies that statistical characteristics of LGN spike trains are important for V1's function. We conclude that physiologically motivated models of V1 need to include more realistic LGN spike trains that are less noisy than inhomogeneous Poisson processes.

  1. ZERO SUPPRESSION FOR RECORDERS

    DOEpatents

    Fort, W.G.S.

    1958-12-30

    A zero-suppression circuit for self-balancing recorder instruments is presented. The essential elements of the circuit include a converter-amplifier having two inputs, one for a reference voltage and the other for the signal voltage under analysis, and a servomotor with two control windings, one coupled to the a-c output of the converter-amplifier and the other receiving a reference input. Each input circuit to the converter-amplifier has a variable potentiometer, and the sliders of the potentiometers are ganged together for movement by the servomotor. The particular novelty of the circuit resides in the selection of resistance values for the potentiometer and a resistor in series with the potentiometer of the signal circuit to ensure that the full value of signal voltage variation is impressed on a recorder mechanism driven by the servomotor.

  2. Laminar Organization of Attentional Modulation in Macaque Visual Area V4.

    PubMed

    Nandy, Anirvan S; Nassi, Jonathan J; Reynolds, John H

    2017-01-04

    Attention is critical to perception, serving to select behaviorally relevant information for privileged processing. To understand the neural mechanisms of attention, we must discern how attentional modulation varies by cell type and across cortical layers. Here, we test whether attention acts non-selectively across cortical layers or whether it engages the laminar circuit in specific and selective ways. We find layer- and cell-class-specific differences in several different forms of attentional modulation in area V4. Broad-spiking neurons in the superficial layers exhibit attention-mediated increases in firing rate and decreases in variability. Spike count correlations are highest in the input layer and attention serves to reduce these correlations. Superficial and input layer neurons exhibit attention-dependent decreases in low-frequency (<10 Hz) coherence, but deep layer neurons exhibit increases in coherence in the beta and gamma frequency ranges. Our study provides a template for attention-mediated laminar information processing that might be applicable across sensory modalities. Copyright © 2017 Elsevier Inc. All rights reserved.

  3. Artificial neural network modeling of dissolved oxygen in the Heihe River, Northwestern China.

    PubMed

    Wen, Xiaohu; Fang, Jing; Diao, Meina; Zhang, Chuanqi

    2013-05-01

    Identification and quantification of dissolved oxygen (DO) profiles of rivers is one of the primary concerns for water resources managers. In this research, an artificial neural network (ANN) was developed to simulate the DO concentrations in the Heihe River, Northwestern China. A three-layer back-propagation ANN was used with the Bayesian regularization training algorithm. The input variables of the neural network were pH, electrical conductivity, chloride (Cl(-)), calcium (Ca(2+)), total alkalinity, total hardness, nitrate nitrogen (NO3-N), and ammonical nitrogen (NH4-N). The ANN structure with 14 hidden neurons gave the best performance. Comparison between the ANN results and the measured data on the basis of the correlation coefficient (r) and root mean square error (RMSE) showed a good fit to the DO values, indicating the effectiveness of the neural network model. The r values for the training, validation, and test sets were 0.9654, 0.9841, and 0.9680, and the corresponding RMSE values were 0.4272, 0.3667, and 0.4570, respectively. Sensitivity analysis was used to determine the influence of input variables on the dependent variable. The most effective inputs were determined as pH, NO3-N, NH4-N, and Ca(2+); Cl(-) was found to be the least effective variable in the proposed model. The identified ANN model can be used to simulate the water quality parameters.

  4. QKD Via a Quantum Wavelength Router Using Spatial Soliton

    NASA Astrophysics Data System (ADS)

    Kouhnavard, M.; Amiri, I. S.; Afroozeh, A.; Jalil, M. A.; Ali, J.; Yupapin, P. P.

    2011-05-01

    A system for continuous variable quantum key distribution via a wavelength router is proposed. The Kerr type of light in the nonlinear microring resonator (NMRR) induces the chaotic behavior. In this proposed system, chaotic signals are generated by an optical soliton or Gaussian pulse within a NMRR system. Parameters such as input power, MRR radii and coupling coefficients can be varied and play an important role in determining the results, in which continuous signals are generated spreading over the spectrum. Large bandwidth signals of optical solitons are generated by the input pulse propagating within the MRRs, which is allowed to form the continuous wavelength or frequency with large tunable channel capacity. The continuous variable QKD is formed by using the localized spatial soliton pulses via a quantum router and networks. The selected optical spatial pulse can be used to perform the secure communication network. Here the entangled photons generated by chaotic signals have been analyzed. The continuous entangled photon is generated by using the polarization control unit incorporated into the MRRs, required to provide the continuous variable QKD. Results show that such a system for simultaneous continuous variable quantum cryptography can be used in mobile telephone handsets and networks. In this study, frequency bands of 500 MHz and 2.0 GHz and wavelengths of 775 nm, 2,325 nm and 1.55 μm can be obtained for QKD use with an input optical soliton and a Gaussian beam, respectively.

  5. Artificial neural networks for modeling ammonia emissions released from sewage sludge composting

    NASA Astrophysics Data System (ADS)

    Boniecki, P.; Dach, J.; Pilarski, K.; Piekarska-Boniecka, H.

    2012-09-01

    The project was designed to develop, test and validate an original neural model describing ammonia emissions generated in composting sewage sludge. The composting mix was to include the addition of such selected structural ingredients as cereal straw, sawdust and tree bark. All created neural models contain 7 input variables (chemical and physical parameters of composting) and 1 output (ammonia emission). The data file was subdivided into three subfiles: the learning file (ZU) containing 330 cases, the validation file (ZW) containing 110 cases and the test file (ZT) containing 110 cases. The standard deviation ratios (for all 4 created networks) ranged from 0.193 to 0.218. For all of the selected models, the correlation coefficient reached the high values of 0.972-0.981. The results show that the predictive neural model describing ammonia emissions from composted sewage sludge is well suited to assessing such emissions. The sensitivity analysis of the model for the input variables of the process in question has shown that the key parameters describing ammonia emissions released in composting sewage sludge are pH and the carbon to nitrogen ratio (C:N).

  6. Reprogrammable read only variable threshold transistor memory with isolated addressing buffer

    DOEpatents

    Lodi, Robert J.

    1976-01-01

    A monolithic integrated circuit, fully decoded memory comprises a rectangular array of variable threshold field effect transistors organized into a plurality of multi-bit words. Binary address inputs to the memory are decoded by a field effect transistor decoder into a plurality of word selection lines each of which activates an address buffer circuit. Each address buffer circuit, in turn, drives a word line of the memory array. In accordance with the word line selected by the decoder the activated buffer circuit directs reading or writing voltages to the transistors comprising the memory words. All of the buffer circuits additionally are connected to a common terminal for clearing all of the memory transistors to a predetermined state by the application to the common terminal of a large magnitude voltage of a predetermined polarity. The address decoder, the buffer and the memory array, as well as control and input/output control and buffer field effect transistor circuits, are fabricated on a common substrate with means provided to isolate the substrate of the address buffer transistors from the remainder of the substrate so that the bulk clearing function of simultaneously placing all of the memory transistors into a predetermined state can be performed.

  7. A family of variable step-size affine projection adaptive filter algorithms using statistics of channel impulse response

    NASA Astrophysics Data System (ADS)

    Shams Esfand Abadi, Mohammad; AbbasZadeh Arani, Seyed Ali Asghar

    2011-12-01

    This paper extends the recently introduced variable step-size (VSS) approach to the family of affine projection adaptive filter algorithms. This method uses prior knowledge of the channel impulse response statistics. Accordingly, an optimal step-size vector is obtained by minimizing the mean-square deviation (MSD). The presented algorithms are the VSS affine projection algorithm (VSS-APA), the VSS selective partial update NLMS (VSS-SPU-NLMS), the VSS-SPU-APA, and the VSS selective regressor APA (VSS-SR-APA). In VSS-SPU adaptive algorithms, the filter coefficients are partially updated, which reduces the computational complexity. In VSS-SR-APA, the optimal selection of input regressors is performed during the adaptation. The presented algorithms have good convergence speed, low steady-state mean square error (MSE), and low computational complexity. We demonstrate the good performance of the proposed algorithms through several simulations in a system identification scenario.
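
    A sketch of the basic affine projection algorithm (APA) update that these variable step-size variants build on, in a system identification setting; a fixed scalar step size is used here instead of the optimized MSD-derived step-size vector, and all sizes are illustrative.

    ```python
    import numpy as np

    rng = np.random.default_rng(6)
    N, K, mu, eps = 16, 4, 0.5, 1e-6       # filter length, projection order
    w_true = rng.normal(size=N)            # unknown channel to identify
    w = np.zeros(N)

    x = rng.normal(size=5000)              # input signal
    d = np.convolve(x, w_true)[: len(x)] + 0.01 * rng.normal(size=len(x))

    for n in range(N + K, len(x)):
        # Rows are the K most recent length-N input regressors.
        X = np.array([x[n - k - N + 1 : n - k + 1][::-1] for k in range(K)])
        e = d[n - K + 1 : n + 1][::-1] - X @ w
        # Regularized projection onto the span of the K regressors.
        w += mu * X.T @ np.linalg.solve(X @ X.T + eps * np.eye(K), e)

    print("misalignment:", np.linalg.norm(w - w_true) / np.linalg.norm(w_true))
    ```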

  8. SSME/side loads analysis for flight configuration, revision A. [structural analysis of space shuttle main engine under side load excitation

    NASA Technical Reports Server (NTRS)

    Holland, W.

    1974-01-01

    This document describes the dynamic loads analysis accomplished for the Space Shuttle Main Engine (SSME) considering the side load excitation associated with transient flow separation on the engine bell during ground ignition. The results contained herein pertain only to the flight configuration. A Monte Carlo procedure was employed to select the input variables describing the side load excitation and the loads were statistically combined. This revision includes an active thrust vector control system representation and updated orbiter thrust structure stiffness characteristics. No future revisions are planned but may be necessary as system definition and input parameters change.

  9. Optical switch using Risley prisms

    DOEpatents

    Sweatt, William C.; Christenson, Todd R.

    2003-04-15

    An optical switch using Risley prisms and rotary microactuators to independently rotate the wedge prisms of each Risley prism pair is disclosed. The optical switch comprises an array of input Risley prism pairs that selectively redirect light beams from a plurality of input ports to an array of output Risley prism pairs that similarly direct the light beams to a plurality of output ports. Each wedge prism of each Risley prism pair can be independently rotated by a variable-reluctance stepping rotary microactuator that is fabricated by a multi-layer LIGA process. Each wedge prism can be formed integral to the annular rotor of the rotary microactuator by a DXRL process.

  10. Optical Switch Using Risley Prisms

    DOEpatents

    Sweatt, William C.; Christenson, Todd R.

    2005-02-22

    An optical switch using Risley prisms and rotary microactuators to independently rotate the wedge prisms of each Risley prism pair is disclosed. The optical switch comprises an array of input Risley prism pairs that selectively redirect light beams from a plurality of input ports to an array of output Risley prism pairs that similarly direct the light beams to a plurality of output ports. Each wedge prism of each Risley prism pair can be independently rotated by a variable-reluctance stepping rotary microactuator that is fabricated by a multi-layer LIGA process. Each wedge prism can be formed integral to the annular rotor of the rotary microactuator by a DXRL process.

  11. Input selection and performance optimization of ANN-based streamflow forecasts in the drought-prone Murray Darling Basin region using IIS and MODWT algorithm

    NASA Astrophysics Data System (ADS)

    Prasad, Ramendra; Deo, Ravinesh C.; Li, Yan; Maraseni, Tek

    2017-11-01

    Forecasting streamflow is vital for strategically planning, utilizing and redistributing water resources. In this paper, a wavelet-hybrid artificial neural network (ANN) model integrated with an iterative input selection (IIS) algorithm (IIS-W-ANN) is evaluated for its statistical preciseness in forecasting monthly streamflow, and it is then benchmarked against the M5 Tree model. To develop the hybrid IIS-W-ANN model, a global predictor matrix is constructed for three local hydrological sites (Richmond, Gwydir, and Darling River) in Australia's agricultural (Murray-Darling) Basin. Model inputs comprise statistically significant lagged combinations of streamflow water levels, supplemented by meteorological data (i.e., precipitation, maximum and minimum temperature, mean solar radiation, vapor pressure and evaporation) as potential model inputs. To establish robust forecasting models, the IIS algorithm is applied to screen the best data from the predictor matrix and is integrated with the non-decimated maximum overlap discrete wavelet transform (MODWT) applied to the IIS-selected variables. This resolves the frequencies contained in the predictor data while constructing the wavelet-hybrid (i.e., IIS-W-ANN and IIS-W-M5 Tree) models. The forecasting ability of IIS-W-ANN is evaluated via the correlation coefficient (r), Willmott's Index (WI), Nash-Sutcliffe Efficiency (ENS), root-mean-square error (RMSE), and mean absolute error (MAE), including the percentage RMSE and MAE. While ANN models outperform the M5 Tree model for all hydrological sites, the IIS variable selector was efficient in determining the appropriate predictors, as stipulated by the better performance of the IIS-coupled (ANN and M5 Tree) models relative to the models without IIS. When IIS-coupled models are integrated with MODWT, the wavelet-hybrid IIS-W-ANN and IIS-W-M5 Tree models attain significantly more accurate performance relative to their standalone counterparts. Importantly, IIS-W-ANN model accuracy outweighs that of IIS-ANN, as evidenced by a larger r and WI (by 7.5% and 3.8%, respectively) and a lower RMSE (by 21.3%). In comparison to the IIS-W-M5 Tree model, the IIS-W-ANN model yielded larger values of WI = 0.936-0.979 and ENS = 0.770-0.920. Correspondingly, the errors (RMSE and MAE) ranged from 0.162-0.487 m and 0.139-0.390 m, respectively, with relative errors RRMSE = (15.65-21.00)% and MAPE = (14.79-20.78)%. A distinct geographic signature is evident, with the most and least accurately forecasted streamflow data attained for the Gwydir and Darling River, respectively. In conclusion, this study advocates the efficacy of iterative input selection, allowing the proper screening of model predictors, and of its integration with MODWT, resulting in enhanced performance of the models applied in streamflow forecasting.
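
    A sketch of the wavelet-hybrid input construction step. PyWavelets has no MODWT routine, so the closely related stationary wavelet transform (pywt.swt) is used here as a stand-in under that stated assumption; each subband of a selected predictor becomes one model input. The series, wavelet and level are illustrative.

    ```python
    import numpy as np
    import pywt

    rng = np.random.default_rng(7)
    streamflow = np.cumsum(rng.normal(size=512))     # synthetic monthly series

    level = 3
    # Returns [(cA3, cD3), (cA2, cD2), (cA1, cD1)]; length must divide 2**level.
    coeffs = pywt.swt(streamflow, "db4", level=level)

    # Deepest approximation plus all detail subbands become model inputs.
    subbands = [coeffs[0][0]] + [cd for _, cd in coeffs]
    X_wavelet = np.column_stack(subbands)
    print(X_wavelet.shape)                           # (512, 4): one column per subband
    ```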

  12. Variable frequency microwave furnace system

    DOEpatents

    Bible, D.W.; Lauf, R.J.

    1994-06-14

    A variable frequency microwave furnace system is described that allows modulation of the frequency of the microwaves introduced into a furnace cavity for testing or other selected applications. The variable frequency microwave furnace system includes a microwave signal generator or microwave voltage-controlled oscillator for generating a low-power microwave signal for input to the microwave furnace. A first amplifier may be provided to amplify the magnitude of the signal output from the microwave signal generator or the microwave voltage-controlled oscillator. A second amplifier is provided for processing the signal output by the first amplifier. The second amplifier outputs the microwave signal input to the furnace cavity. In the preferred embodiment, the second amplifier is a traveling-wave tube (TWT). A power supply is provided for operation of the second amplifier. A directional coupler is provided for detecting the direction of a signal and further directing the signal depending on the detected direction. A first power meter is provided for measuring the power delivered to the microwave furnace. A second power meter detects the magnitude of reflected power. Reflected power is dissipated in the reflected power load. 5 figs.

  13. Assessing risk based on uncertain avalanche activity patterns

    NASA Astrophysics Data System (ADS)

    Zeidler, Antonia; Fromm, Reinhard

    2015-04-01

    Avalanches may affect critical infrastructure and may cause great economic losses. The planning horizon of infrastructures, e.g. hydropower generation facilities, reaches well into the future. Based on the results of previous studies on the effect of changing meteorological parameters (precipitation, temperature) on avalanche activity, we assume that the risk pattern will change in future. Decision makers need to understand what the future might bring to best formulate their mitigation strategies. We therefore explore a commercial risk software package to calculate risk for the coming years that might help in decision processes. The software, @RISK, is known to many larger companies, and we explore its capabilities to include avalanche risk simulations in order to guarantee comparability across different risks. In a first step, we develop a model for a hydropower generation facility that reflects the problem of changing avalanche activity patterns in future, by selecting relevant input parameters and assigning likely probability distributions. The uncertain input variables include the probability of avalanches affecting an object, the vulnerability of an object, the expected cost of repairing the object and the expected cost due to interruption. The crux is to find the distribution that best represents each input variable under changing meteorological conditions. Our focus is on including the uncertain probability of avalanches based on the analysis of past avalanche data and expert knowledge. In order to explore different likely outcomes, we base the analysis on three different climate scenarios (likely, worst case, baseline). For some variables, it is possible to fit a distribution to historical data, whereas in cases where the past dataset is insufficient or unavailable the software allows selection from over 30 different distribution types. The Monte Carlo simulation samples the probability distributions of the uncertain variables, using all valid combinations of input values to simulate the possible outcomes. In our case the output is the expected risk (Euro/year) for each object (e.g. water intake) considered and for the entire hydropower generation system. The output is again a distribution to be interpreted by the decision makers, as the final strategy depends on the needs and requirements of the end-user, which may be driven by personal preferences. In this presentation, we will show how we used uncertain information on future avalanche activity in a commercial risk software package, thereby bringing the knowledge of natural hazard experts to decision makers.
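
    A sketch of such a Monte Carlo risk calculation for one object: each uncertain input gets an assumed distribution and the simulated annual risk (Euro/year) comes out as a distribution rather than a point value. All distribution choices and parameters below are invented for illustration.

    ```python
    import numpy as np

    rng = np.random.default_rng(8)
    n = 100_000

    p_hit = rng.beta(2, 50, size=n)                        # annual avalanche probability
    vulnerability = rng.triangular(0.1, 0.3, 0.8, size=n)  # damage fraction
    repair_cost = rng.lognormal(mean=np.log(2e5), sigma=0.5, size=n)        # Euro
    interruption_cost = rng.lognormal(mean=np.log(5e4), sigma=0.7, size=n)  # Euro

    # Annual risk per iteration; the result is a full distribution.
    risk = p_hit * (vulnerability * repair_cost + interruption_cost)

    print(f"mean risk: {risk.mean():,.0f} Euro/year")
    print(f"5th-95th percentile: {np.percentile(risk, 5):,.0f}"
          f" - {np.percentile(risk, 95):,.0f} Euro/year")
    ```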

  14. Proposing integrated Shannon's entropy-inverse data envelopment analysis methods for resource allocation problem under a fuzzy environment

    NASA Astrophysics Data System (ADS)

    Çakır, Süleyman

    2017-10-01

    In this study, a two-phase methodology for resource allocation problems under a fuzzy environment is proposed. In the first phase, the imprecise Shannon's entropy method and the acceptability index are suggested, for the first time in the literature, to select input and output variables to be used in the data envelopment analysis (DEA) application. In the second phase, an interval inverse DEA model is executed for resource allocation in the short run. In an effort to exemplify the practicality of the proposed fuzzy model, a real case application has been conducted involving 16 cement firms listed in Borsa Istanbul. The results of the case application indicated that the proposed hybrid model is a viable procedure for handling input-output selection and resource allocation problems under fuzzy conditions. The presented methodology can also lend itself to different applications such as multi-criteria decision-making problems.
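
    A sketch of the crisp Shannon's entropy weighting step that underlies such variable selection; the paper's imprecise/fuzzy extension and acceptability index are not reproduced here, and the decision matrix is invented.

    ```python
    import numpy as np

    # Rows: decision-making units (e.g., cement firms); columns: candidate variables.
    M = np.array([[120.0, 35.0, 4.1],
                  [ 98.0, 41.0, 3.7],
                  [143.0, 29.0, 4.9],
                  [110.0, 38.0, 4.0]])

    P = M / M.sum(axis=0)                          # column-normalized proportions
    k = 1.0 / np.log(M.shape[0])
    entropy = -k * (P * np.log(P)).sum(axis=0)     # e_j in [0, 1]
    weights = (1 - entropy) / (1 - entropy).sum()  # high-information columns weigh more
    print(weights)
    ```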

  15. A selective-update affine projection algorithm with selective input vectors

    NASA Astrophysics Data System (ADS)

    Kong, NamWoong; Shin, JaeWook; Park, PooGyeon

    2011-10-01

    This paper proposes an affine projection algorithm (APA) with selective input vectors, which is based on the concept of selective update in order to reduce estimation errors and computations. The algorithm consists of two procedures: input-vector selection and state decision. The input-vector-selection procedure determines the number of input vectors by checking with the mean square error (MSE) whether the input vectors carry enough information for an update. The state-decision procedure determines the current state of the adaptive filter by using the state-decision criterion. While the adaptive filter is in the transient state, the algorithm updates the filter coefficients with the selected input vectors. On the other hand, as soon as the adaptive filter reaches the steady state, the update procedure is not performed. Through these two procedures, the proposed algorithm achieves small steady-state estimation errors, low computational complexity and low update complexity for colored input signals.

  16. Predicting the Direction of Stock Market Index Movement Using an Optimized Artificial Neural Network Model.

    PubMed

    Qiu, Mingyue; Song, Yu

    2016-01-01

    In the business sector, it has always been a difficult task to predict the exact daily price of the stock market index; hence, there is a great deal of research being conducted regarding the prediction of the direction of stock price index movement. Many factors such as political events, general economic conditions, and traders' expectations may have an influence on the stock market index. There are numerous research studies that use similar indicators to forecast the direction of the stock market index. In this study, we compare two basic types of input variables to predict the direction of the daily stock market index. The main contribution of this study is the ability to predict the direction of the next day's price of the Japanese stock market index by using an optimized artificial neural network (ANN) model. To improve the prediction accuracy of the trend of the stock market index in the future, we optimize the ANN model using genetic algorithms (GA). We demonstrate and verify the predictability of stock price direction by using the hybrid GA-ANN model and then compare the performance with prior studies. Empirical results show that the Type 2 input variables can generate a higher forecast accuracy and that it is possible to enhance the performance of the optimized ANN model by selecting input variables appropriately.

  17. Predicting the Direction of Stock Market Index Movement Using an Optimized Artificial Neural Network Model

    PubMed Central

    Qiu, Mingyue; Song, Yu

    2016-01-01

    In the business sector, it has always been a difficult task to predict the exact daily price of the stock market index; hence, there is a great deal of research being conducted regarding the prediction of the direction of stock price index movement. Many factors such as political events, general economic conditions, and traders’ expectations may have an influence on the stock market index. There are numerous research studies that use similar indicators to forecast the direction of the stock market index. In this study, we compare two basic types of input variables to predict the direction of the daily stock market index. The main contribution of this study is the ability to predict the direction of the next day’s price of the Japanese stock market index by using an optimized artificial neural network (ANN) model. To improve the prediction accuracy of the trend of the stock market index in the future, we optimize the ANN model using genetic algorithms (GA). We demonstrate and verify the predictability of stock price direction by using the hybrid GA-ANN model and then compare the performance with prior studies. Empirical results show that the Type 2 input variables can generate a higher forecast accuracy and that it is possible to enhance the performance of the optimized ANN model by selecting input variables appropriately. PMID:27196055

  18. A stochastic model of input effectiveness during irregular gamma rhythms.

    PubMed

    Dumont, Grégory; Northoff, Georg; Longtin, André

    2016-02-01

    Gamma-band synchronization has been linked to attention and communication between brain regions, yet the underlying dynamical mechanisms are still unclear. How does the timing and amplitude of inputs to cells that generate an endogenously noisy gamma rhythm affect the network activity and rhythm? How does such "communication through coherence" (CTC) survive in the face of rhythm and input variability? We present a stochastic modelling approach to this question that yields a very fast computation of the effectiveness of inputs to cells involved in gamma rhythms. Our work is partly motivated by recent optogenetic experiments (Cardin et al. Nature, 459(7247), 663-667 2009) that tested the gamma phase-dependence of network responses by first stabilizing the rhythm with periodic light pulses to the interneurons (I). Our computationally efficient model E-I network of stochastic two-state neurons exhibits finite-size fluctuations. Using the Hilbert transform and Kuramoto index, we study how the stochastic phase of its gamma rhythm is entrained by external pulses. We then compute how this rhythmic inhibition controls the effectiveness of external input onto pyramidal (E) cells, and how variability shapes the window of firing opportunity. For transferring the time variations of an external input to the E cells, we find a tradeoff between the phase selectivity and depth of rate modulation. We also show that the CTC is sensitive to the jitter in the arrival times of spikes to the E cells, and to the degree of I-cell entrainment. We further find that CTC can occur even if the underlying deterministic system does not oscillate; quasicycle-type rhythms induced by the finite-size noise retain the basic CTC properties. Finally a resonance analysis confirms the relative importance of the I cell pacing for rhythm generation. Analysis of whole network behaviour, including computations of synchrony, phase and shifts in excitatory-inhibitory balance, can be further sped up by orders of magnitude using two coupled stochastic differential equations, one for each population. Our work thus yields a fast tool to numerically and analytically investigate CTC in a noisy context. It shows that CTC can be quite vulnerable to rhythm and input variability, which both decrease phase preference.
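
    A sketch of the phase analysis the abstract names: extract the instantaneous phase of a rhythm with the Hilbert transform, then quantify entrainment to pulse times with the Kuramoto order parameter R (R = 1 means perfect phase locking). The synthetic 40 Hz signal and pulse train are illustrative.

    ```python
    import numpy as np
    from scipy.signal import hilbert

    rng = np.random.default_rng(9)
    fs, f_gamma = 1000.0, 40.0                   # Hz
    t = np.arange(0, 5.0, 1 / fs)
    lfp = np.sin(2 * np.pi * f_gamma * t) + 0.5 * rng.normal(size=t.size)

    phase = np.angle(hilbert(lfp))               # instantaneous phase in [-pi, pi]

    pulse_times = np.arange(0.1, 5.0, 1 / f_gamma)       # periodic pulses at 40 Hz
    pulse_phases = phase[(pulse_times * fs).astype(int)]
    R = np.abs(np.exp(1j * pulse_phases).mean())         # Kuramoto index
    print(f"Kuramoto order parameter R = {R:.2f}")
    ```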

  19. Method and apparatus for varying accelerator beam output energy

    DOEpatents

    Young, Lloyd M.

    1998-01-01

    A coupled cavity accelerator (CCA) accelerates a charged particle beam with rf energy from a rf source. An input accelerating cavity receives the charged particle beam and an output accelerating cavity outputs the charged particle beam at an increased energy. Intermediate accelerating cavities connect the input and the output accelerating cavities to accelerate the charged particle beam. A plurality of tunable coupling cavities are arranged so that each one of the tunable coupling cavities respectively connect an adjacent pair of the input, output, and intermediate accelerating cavities to transfer the rf energy along the accelerating cavities. An output tunable coupling cavity can be detuned to variably change the phase of the rf energy reflected from the output coupling cavity so that regions of the accelerator can be selectively turned off when one of the intermediate tunable coupling cavities is also detuned.

  20. NSR&D Program Fiscal Year 2015 Funded Research Stochastic Modeling of Radioactive Material Releases Final Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Andrus, Jason P.; Pope, Chad; Toston, Mary

    2016-12-01

    Nonreactor nuclear facilities operating under the approval authority of the U.S. Department of Energy use unmitigated hazard evaluations to determine if potential radiological doses associated with design basis events challenge or exceed dose evaluation guidelines. Unmitigated design basis events that sufficiently challenge dose evaluation guidelines or exceed the guidelines for members of the public or workers merit selection of safety structures, systems, or components or other controls to prevent or mitigate the hazard. Idaho State University, in collaboration with Idaho National Laboratory, has developed a portable and simple to use software application called SODA (Stochastic Objective Decision-Aide) that stochastically calculates the radiation dose distribution associated with hypothetical radiological material release scenarios. Rather than producing a point estimate of the dose, SODA produces a dose distribution result to allow a deeper understanding of the dose potential. SODA allows users to select the distribution type and parameter values for all of the input variables used to perform the dose calculation. Users can also specify custom distributions through a user-defined distribution option. SODA then randomly samples each distribution input variable and calculates the overall resulting dose distribution. In cases where an input variable distribution is unknown, a traditional single point value can be used. SODA, developed using the MATLAB coding framework, has a graphical user interface and can be installed on both Windows and Mac computers. SODA is a standalone software application and does not require MATLAB to function. SODA provides improved risk understanding, leading to better informed decision making associated with establishing nuclear facility material-at-risk limits and safety structure, system, or component selection. It is important to note that SODA does not replace or compete with codes such as MACCS or RSAC; rather, it is viewed as an easy to use supplemental tool to help improve risk understanding and support better informed decisions. The SODA development project was funded through a grant from the DOE Nuclear Safety Research and Development Program.

  2. Estimating severity of sideways fall using a generic multi linear regression model based on kinematic input variables.

    PubMed

    van der Zijden, A M; Groen, B E; Tanck, E; Nienhuis, B; Verdonschot, N; Weerdesteyn, V

    2017-03-21

    Many research groups have studied fall impact mechanics to understand how fall severity can be reduced to prevent hip fractures. Yet, direct impact force measurements with force plates are restricted to a very limited repertoire of experimental falls. The purpose of this study was to develop a generic model for estimating hip impact forces (i.e. fall severity) in in vivo sideways falls without the use of force plates. Twelve experienced judokas performed sideways Martial Arts (MA) and Block ('natural') falls on a force plate, both with and without a mat on top. Data were analyzed to determine the hip impact force and to derive 11 selected (subject-specific and kinematic) variables. Falls from kneeling height were used to perform a stepwise regression procedure to assess the effects of these input variables and build the model. The final model includes four input variables, involving one subject-specific measure and three kinematic variables: maximum upper body deceleration, body mass, shoulder angle at the instant of 'maximum impact' and maximum hip deceleration. The results showed that estimated and measured hip impact forces were linearly related (explained variances ranging from 46 to 63%). Hip impact forces of MA falls onto the mat from a standing position (3650±916N) estimated by the final model were comparable with measured values (3698±689N), even though these data were not used for training the model. In conclusion, a generic linear regression model was developed that enables the assessment of fall severity through kinematic measures of sideways falls, without using force plates. Copyright © 2017 Elsevier Ltd. All rights reserved.
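
    A minimal forward-stepwise sketch of the kind of procedure used to build the fall-severity model: variables enter one at a time while the cross-validated fit keeps improving. The candidate names echo the paper's inputs, but the data and stopping rule here are illustrative assumptions.

    ```python
    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(10)
    names = ["upper_body_decel", "body_mass", "shoulder_angle", "hip_decel", "height"]
    X = rng.normal(size=(60, len(names)))
    y = 0.8 * X[:, 0] + 0.6 * X[:, 1] + 0.3 * X[:, 3] + rng.normal(scale=0.4, size=60)

    selected, remaining, best = [], list(range(len(names))), -np.inf
    while remaining:
        # Score each candidate added to the variables already selected.
        scores = {j: cross_val_score(LinearRegression(),
                                     X[:, selected + [j]], y, cv=5).mean()
                  for j in remaining}
        j_best = max(scores, key=scores.get)
        if scores[j_best] <= best:     # stop when no candidate improves the fit
            break
        best = scores[j_best]
        selected.append(j_best)
        remaining.remove(j_best)

    print("entered:", [names[j] for j in selected])
    ```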

  3. National Hospital Input Price Index

    PubMed Central

    Freeland, Mark S.; Anderson, Gerard; Schendler, Carol Ellen

    1979-01-01

    The national community hospital input price index presented here isolates the effects of prices of goods and services required to produce hospital care and measures the average percent change in prices for a fixed market basket of hospital inputs. Using the methodology described in this article, weights for various expenditure categories were estimated and proxy price variables associated with each were selected. The index is calculated for the historical period 1970 through 1978 and forecast for 1979 through 1981. During the historical period, the input price index increased an average of 8.0 percent a year, compared with an average rate of increase of 6.6 percent for overall consumer prices. For the period 1979 through 1981, the average annual increase is forecast at between 8.5 and 9.0 percent. Using the index to deflate growth in expenses, the level of real growth in expenditures per inpatient day (net service intensity growth) averaged 4.5 percent per year with considerable annual variation related to government and hospital industry policies. PMID:10309052
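
    A sketch of a fixed-market-basket (Laspeyres-type) price index of the kind described above: constant expenditure-share weights are applied to price relatives. The weights and prices below are invented, chosen so the rise lands near the historical 8-9% range discussed.

    ```python
    import numpy as np

    weights = np.array([0.55, 0.15, 0.10, 0.20])   # labor, food, energy, supplies
    p0 = np.array([10.0, 2.0, 1.0, 5.0])           # base-year prices
    p1 = np.array([10.9, 2.1, 1.2, 5.3])           # current-year prices

    # Weighted average of price relatives, scaled so the base year is 100.
    index = 100.0 * (weights * (p1 / p0)).sum()
    print(f"input price index: {index:.1f} (a {index - 100:.1f}% rise over the base year)")
    ```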

  4. National hospital input price index.

    PubMed

    Freeland, M S; Anderson, G; Schendler, C E

    1979-01-01

    The national community hospital input price index presented here isolates the effects of prices of goods and services required to produce hospital care and measures the average percent change in prices for a fixed market basket of hospital inputs. Using the methodology described in this article, weights for various expenditure categories were estimated and proxy price variables associated with each were selected. The index is calculated for the historical period 1970 through 1978 and forecast for 1979 through 1981. During the historical period, the input price index increased an average of 8.0 percent a year, compared with an average rate of increase of 6.6 percent for overall consumer prices. For the period 1979 through 1981, the average annual increase is forecast at between 8.5 and 9.0 per cent. Using the index to deflate growth in expenses, the level of real growth in expenditures per inpatient day (net service intensity growth) averaged 4.5 percent per year with considerable annual variation related to government and hospital industry policies.

  5. A neural circuit mechanism for regulating vocal variability during song learning in zebra finches.

    PubMed

    Garst-Orozco, Jonathan; Babadi, Baktash; Ölveczky, Bence P

    2014-12-15

    Motor skill learning is characterized by improved performance and reduced motor variability. The neural mechanisms that couple skill level and variability, however, are not known. The zebra finch, a songbird, presents a unique opportunity to address this question because production of learned song and induction of vocal variability are instantiated in distinct circuits that converge on a motor cortex analogue controlling vocal output. To probe the interplay between learning and variability, we made intracellular recordings from neurons in this area, characterizing how their inputs from the functionally distinct pathways change throughout song development. We found that inputs that drive stereotyped song-patterns are strengthened and pruned, while inputs that induce variability remain unchanged. A simple network model showed that strengthening and pruning of action-specific connections reduces the sensitivity of motor control circuits to variable input and neural 'noise'. This identifies a simple and general mechanism for learning-related regulation of motor variability.

  6. How sensitive are estimates of carbon fixation in agricultural models to input data?

    PubMed Central

    2012-01-01

    Background: Process-based vegetation models are central to understanding the hydrological and carbon cycles. To achieve useful results at regional to global scales, such models require various input data from a wide range of earth observations. Since the geographical extent of these datasets varies from local to global scale, data quality and validity are of major interest when they are chosen for use. It is important to assess how the quality of different input datasets affects model outputs. In this article, we consider both the uncertainty in input data and the reliability of model results. For our case study analysis we selected the Marchfeld region in Austria. We used independent meteorological datasets from the Central Institute for Meteorology and Geodynamics and the European Centre for Medium-Range Weather Forecasts (ECMWF). Land cover / land use information was taken from the GLC2000 and the CORINE 2000 products. Results: For our case study analysis we selected two different process-based models: the Environmental Policy Integrated Climate (EPIC) and the Biosphere Energy Transfer Hydrology (BETHY/DLR) model. Both models respond to changes in input data in a congruent pattern. The annual variability of NPP reaches 36% for BETHY/DLR and 39% for EPIC when changing major input datasets. However, EPIC is less sensitive to meteorological input data than BETHY/DLR. The ECMWF maximum temperatures show a systematic pattern: temperatures above 20°C are overestimated, whereas temperatures below 20°C are underestimated, resulting in an overall underestimation of NPP in both models. In addition, BETHY/DLR is sensitive to the choice and accuracy of the land cover product. Discussion: This study shows that the impact of input data uncertainty on modelling results needs to be assessed: whenever the models are applied under new conditions, local data should be used for both input and result comparison. PMID:22296931

  7. FLUXCOM - Overview and First Synthesis

    NASA Astrophysics Data System (ADS)

    Jung, M.; Ichii, K.; Tramontana, G.; Camps-Valls, G.; Schwalm, C. R.; Papale, D.; Reichstein, M.; Gans, F.; Weber, U.

    2015-12-01

    We present a community effort aiming at generating an ensemble of global gridded flux products by upscaling FLUXNET data using an array of different machine learning methods including regression/model tree ensembles, neural networks, and kernel machines. We produced products for gross primary production, terrestrial ecosystem respiration, net ecosystem exchange, latent heat, sensible heat, and net radiation for two experimental protocols: 1) at a high spatial and 8-daily temporal resolution (5 arc-minute) using only remote sensing based inputs for the MODIS era; 2) 30 year records of daily, 0.5 degree spatial resolution by incorporating meteorological driver data. Within each set-up, all machine learning methods were trained with the same input data for carbon and energy fluxes respectively. Sets of input driver variables were derived using an extensive formal variable selection exercise. The performance of the extrapolation capacities of the approaches is assessed with a fully internally consistent cross-validation. We perform cross-consistency checks of the gridded flux products with independent data streams from atmospheric inversions (NEE), sun-induced fluorescence (GPP), catchment water balances (LE, H), satellite products (Rn), and process-models. We analyze the uncertainties of the gridded flux products and for example provide a breakdown of the uncertainty of mean annual GPP originating from different machine learning methods, different climate input data sets, and different flux partitioning methods. The FLUXCOM archive will provide an unprecedented source of information for water, energy, and carbon cycle studies.

  8. Assessment of food intake input distributions for use in probabilistic exposure assessments of food additives.

    PubMed

    Gilsenan, M B; Lambe, J; Gibney, M J

    2003-11-01

    A key component of a food chemical exposure assessment using probabilistic analysis is the selection of the most appropriate input distribution to represent exposure variables. The study explored the type of parametric distribution that could be used to model variability in food consumption data likely to be included in a probabilistic exposure assessment of food additives. The goodness-of-fit of a range of continuous distributions to observed data of 22 food categories expressed as average daily intakes among consumers from the North-South Ireland Food Consumption Survey was assessed using the BestFit distribution fitting program. The lognormal distribution was most commonly accepted as a plausible parametric distribution to represent food consumption data when food intakes were expressed as absolute intakes (16/22 foods) and as intakes per kg body weight (18/22 foods). Results from goodness-of-fit tests were accompanied by lognormal probability plots for a number of food categories. The influence on food additive intake of using a lognormal distribution to model food consumption input data was assessed by comparing modelled intake estimates with observed intakes. Results from the present study advise some level of caution about the use of a lognormal distribution as a mode of input for food consumption data in probabilistic food additive exposure assessments and the results highlight the need for further research in this area.
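
    As an illustration of the fitting step described above, the following sketch fits a lognormal to positive intake data and checks goodness of fit with a Kolmogorov-Smirnov test. It uses SciPy in place of the commercial BestFit program, and the intake values are synthetic stand-ins rather than the Irish survey data.

```python
# Sketch: fit a lognormal to (synthetic) daily intake data and test the fit.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
intakes = rng.lognormal(mean=3.0, sigma=0.6, size=500)  # g/day, synthetic

# Fix the location at zero since intakes are strictly positive.
shape, loc, scale = stats.lognorm.fit(intakes, floc=0)

# Kolmogorov-Smirnov test of the fitted distribution against the sample.
ks_stat, p_value = stats.kstest(intakes, "lognorm", args=(shape, loc, scale))
print(f"KS statistic = {ks_stat:.3f}, p = {p_value:.3f}")
```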

  9. Uncertainty Analysis for a Jet Flap Airfoil

    NASA Technical Reports Server (NTRS)

    Green, Lawrence L.; Cruz, Josue

    2006-01-01

    An analysis of variance (ANOVA) study was performed to quantify the potential uncertainties of lift and pitching moment coefficient calculations from a computational fluid dynamics code, relative to an experiment, for a jet flap airfoil configuration. Uncertainties due to a number of factors including grid density, angle of attack and jet flap blowing coefficient were examined. The ANOVA software produced a numerical model of the input coefficient data, as functions of the selected factors, to a user-specified order (linear, 2-factor interaction, quadratic, or cubic). Residuals between the model and actual data were also produced at each of the input conditions, and uncertainty confidence intervals (in the form of Least Significant Differences or LSD) for experimental, computational, and combined experimental / computational data sets were computed. The LSD bars indicate the smallest resolvable differences in the functional values (lift or pitching moment coefficient) attributable solely to changes in the independent variables, given just the input data points from selected data sets. The software also provided a collection of diagnostics which evaluate the suitability of the input data set for use within the ANOVA process, and which examine the behavior of the resultant data, possibly suggesting transformations which should be applied to the data to reduce the LSD. The results illustrate some of the key features of, and results from, the uncertainty analysis studies, including the use of both numerical (continuous) and categorical (discrete) factors, the effects of the number and range of the input data points, and the effects of the number of factors considered simultaneously.

  10. Response sensitivity of barrel neuron subpopulations to simulated thalamic input.

    PubMed

    Pesavento, Michael J; Rittenhouse, Cynthia D; Pinto, David J

    2010-06-01

    Our goal is to examine the relationship between neuron- and network-level processing in the context of a well-studied cortical function, the processing of thalamic input by whisker-barrel circuits in rodent neocortex. Here we focus on neuron-level processing and investigate the responses of excitatory and inhibitory barrel neurons to simulated thalamic inputs applied using the dynamic clamp method in brain slices. Simulated inputs are modeled after real thalamic inputs recorded in vivo in response to brief whisker deflections. Our results suggest that inhibitory neurons require more input to reach firing threshold, but then fire earlier, with less variability, and respond to a broader range of inputs than do excitatory neurons. Differences in the responses of barrel neuron subtypes depend on their intrinsic membrane properties. Neurons with a low input resistance require more input to reach threshold but then fire earlier than neurons with a higher input resistance, regardless of the neuron's classification. Our results also suggest that the response properties of excitatory versus inhibitory barrel neurons are consistent with the response sensitivities of the ensemble barrel network. The short response latency of inhibitory neurons may serve to suppress ensemble barrel responses to asynchronous thalamic input. Correspondingly, whereas neurons acting as part of the barrel circuit in vivo are highly selective for temporally correlated thalamic input, excitatory barrel neurons acting alone in vitro are less so. These data suggest that network-level processing of thalamic input in barrel cortex depends on neuron-level processing of the same input by excitatory and inhibitory barrel neurons.

  11. Real-time flood forecasts & risk assessment using a possibility-theory based fuzzy neural network

    NASA Astrophysics Data System (ADS)

    Khan, U. T.

    2016-12-01

    Globally, floods are one of the most devastating natural disasters, and improved flood forecasting methods are essential for better flood protection in urban areas. Given the availability of high resolution real-time datasets for flood variables (e.g. streamflow and precipitation) in many urban areas, data-driven models have been effectively used to predict peak flow rates in rivers; however, the selection of input parameters for these types of models is often subjective. Additionally, the inherent uncertainty associated with data-driven models, along with errors in extreme event observations, means that uncertainty quantification is essential. Addressing these concerns will enable improved flood forecasting methods and provide more accurate flood risk assessments. In this research, a new type of data-driven model, a quasi-real-time updating fuzzy neural network, is developed to predict peak flow rates in urban riverine watersheds. A possibility-to-probability transformation is first used to convert observed data into fuzzy numbers. A possibility theory based training regime is then used to construct the fuzzy parameters and the outputs. A new entropy-based optimisation criterion is used to train the network. Two existing methods to select the optimum input parameters are modified to account for fuzzy number inputs, and compared. These methods are: Entropy-Wavelet-based Artificial Neural Network (EWANN) and Combined Neural Pathway Strength Analysis (CNPSA). Finally, an automated algorithm designed to select the optimum structure of the neural network is implemented. The overall effect of these components is to replace traditional ad hoc network configuration methods with ones based on objective criteria. Ten years of data from the Bow River in Calgary, Canada (including two major floods in 2005 and 2013) are used to calibrate and test the network. The EWANN method selected lagged peak flow as a candidate input, whereas the CNPSA method selected lagged precipitation and lagged mean daily flow as candidate inputs. Model performance metrics show that the CNPSA method had higher performance (with an efficiency of 0.76). Model output was used to assess the risk of extreme peak flows for a given day using an inverse possibility-to-probability transformation.

  12. An affine projection algorithm using grouping selection of input vectors

    NASA Astrophysics Data System (ADS)

    Shin, JaeWook; Kong, NamWoong; Park, PooGyeon

    2011-10-01

    This paper presents an affine projection algorithm (APA) using grouping-based selection of input vectors. To improve the performance of the conventional APA, the proposed algorithm adjusts the number of input vectors using two procedures: a grouping procedure and a selection procedure. In the grouping procedure, input vectors that carry overlapping information for the update are grouped using the normalized inner product. Then, in the selection procedure, the few input vectors that carry sufficient information for the coefficient update are selected using the steady-state mean square error (MSE). Finally, the filter coefficients are updated using the selected input vectors. The experimental results show that the proposed algorithm has smaller steady-state estimation errors compared with existing algorithms.
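
    The grouping step can be pictured with a short sketch: candidate input vectors whose direction nearly duplicates one already kept (normalized inner product close to one) carry overlapping information and are dropped. The threshold and data below are illustrative assumptions, not values from the paper.

```python
# Sketch: drop candidate input vectors that nearly duplicate a kept one,
# using the normalized inner product (cosine similarity) as the criterion.
import numpy as np

def select_input_vectors(X, threshold=0.9):
    """Keep rows of X whose |cosine similarity| with every previously
    kept row stays below `threshold` (illustrative value)."""
    kept = []
    for x in X:
        x_unit = x / (np.linalg.norm(x) + 1e-12)
        if all(abs(x_unit @ k) < threshold for k in kept):
            kept.append(x_unit)
    return np.array(kept)

rng = np.random.default_rng(0)
X = rng.standard_normal((8, 16))              # 8 candidate input vectors
X[1] = X[0] + 0.01 * rng.standard_normal(16)  # near-duplicate of row 0
print(select_input_vectors(X).shape)          # (7, 16): row 1 was dropped
```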

  13. Improved artificial neural networks in prediction of malignancy of lesions in contrast-enhanced MR-mammography.

    PubMed

    Vomweg, T W; Buscema, M; Kauczor, H U; Teifke, A; Intraligi, M; Terzi, S; Heussel, C P; Achenbach, T; Rieker, O; Mayer, D; Thelen, M

    2003-09-01

    The aim of this study was to evaluate the capability of improved artificial neural networks (ANN) and additional novel training methods in distinguishing between benign and malignant breast lesions in contrast-enhanced magnetic resonance-mammography (MRM). A total of 604 histologically proven cases of contrast-enhanced lesions of the female breast at MRI were analyzed. Morphological, dynamic and clinical parameters were collected and stored in a database. The data set was divided into several groups using random or experimental methods [Training & Testing (T&T) algorithm] to train and test different ANNs. An additional novel computer program for input variable selection was applied. Sensitivity and specificity were calculated and compared with a statistical method and an expert radiologist. After optimization of the distribution of cases among the training and testing sets by the T&T algorithm and the reduction of input variables by the Input Selection procedure, a highly sophisticated ANN achieved a sensitivity of 93.6% and a specificity of 91.9% in predicting malignancy of lesions within an independent prediction sample set. The best statistical method reached a sensitivity of 90.5% and a specificity of 68.9%. An expert radiologist performed better than the statistical method but worse than the ANN (sensitivity 92.1%, specificity 85.6%). Features extracted out of dynamic contrast-enhanced MRM and additional clinical data can be successfully analyzed by advanced ANNs. The quality of the resulting network strongly depends on the training methods, which are improved by the use of novel training tools. The best results of an improved ANN outperform expert radiologists.

  14. Hand placement near the visual stimulus improves orientation selectivity in V2 neurons

    PubMed Central

    Sergio, Lauren E.; Crawford, J. Douglas; Fallah, Mazyar

    2015-01-01

    Often, the brain receives more sensory input than it can process simultaneously. Spatial attention helps overcome this limitation by preferentially processing input from a behaviorally-relevant location. Recent neuropsychological and psychophysical studies suggest that attention is deployed to near-hand space much like how the oculomotor system can deploy attention to an upcoming gaze position. Here we provide the first neuronal evidence that the presence of a nearby hand enhances orientation selectivity in early visual processing area V2. When the hand was placed outside the receptive field, responses to the preferred orientation were significantly enhanced without a corresponding significant increase at the orthogonal orientation. Consequently, there was also a significant sharpening of orientation tuning. In addition, the presence of the hand reduced neuronal response variability. These results indicate that attention is automatically deployed to the space around a hand, improving orientation selectivity. Importantly, this appears to be optimal for motor control of the hand, as opposed to oculomotor mechanisms which enhance responses without sharpening orientation selectivity. Effector-based mechanisms for visual enhancement thus support not only the spatiotemporal dissociation of gaze and reach, but also the optimization of vision for their separate requirements for guiding movements. PMID:25717165

  15. Bottom-up and Top-down Input Augment the Variability of Cortical Neurons

    PubMed Central

    Nassi, Jonathan J.; Kreiman, Gabriel; Born, Richard T.

    2016-01-01

    Neurons in the cerebral cortex respond inconsistently to a repeated sensory stimulus, yet they underlie our stable sensory experiences. Although the nature of this variability is unknown, its ubiquity has encouraged the general view that each cell produces random spike patterns that noisily represent its response rate. In contrast, here we show that reversibly inactivating distant sources of either bottom-up or top-down input to cortical visual areas in the alert primate reduces both the spike train irregularity and the trial-to-trial variability of single neurons. A simple model in which a fraction of the pre-synaptic input is silenced can reproduce this reduction in variability, provided that there exist temporal correlations primarily within, but not between, excitatory and inhibitory input pools. A large component of the variability of cortical neurons may therefore arise from synchronous input produced by signals arriving from multiple sources. PMID:27427459

  16. User's manual for XTRAN2L (version 1.2): A program for solving the general-frequency unsteady transonic small-disturbance equation

    NASA Technical Reports Server (NTRS)

    Seidel, D. A.; Batina, J. T.

    1986-01-01

    The development, use and operation of the XTRAN2L program that solves the two-dimensional unsteady transonic small-disturbance potential equation are described. The XTRAN2L program is used to calculate steady and unsteady transonic flow fields about airfoils and is capable of performing self-contained transonic flutter calculations. Operation of the XTRAN2L code is described, and tables defining all input variables, including default values, are presented. Sample cases that use various program options are shown to illustrate operation of XTRAN2L. Computer listings containing input and selected output are included as an aid to the user.

  17. An analysis of input errors in precipitation-runoff models using regression with errors in the independent variables

    USGS Publications Warehouse

    Troutman, Brent M.

    1982-01-01

    Errors in runoff prediction caused by input data errors are analyzed by treating precipitation-runoff models as regression (conditional expectation) models. Independent variables of the regression consist of precipitation and other input measurements; the dependent variable is runoff. In models using erroneous input data, prediction errors are inflated and estimates of expected storm runoff for given observed input variables are biased. This bias in expected runoff estimation results in biased parameter estimates if these parameter estimates are obtained by a least squares fit of predicted to observed runoff values. The problems of error inflation and bias are examined in detail for a simple linear regression of runoff on rainfall and for a nonlinear U.S. Geological Survey precipitation-runoff model. Some implications for flood frequency analysis are considered. A case study using a set of data from Turtle Creek near Dallas, Texas illustrates the problems of model input errors.

  18. The application of improved NeuroEvolution of Augmenting Topologies neural network in Marcellus Shale lithofacies prediction

    NASA Astrophysics Data System (ADS)

    Wang, Guochang; Cheng, Guojian; Carr, Timothy R.

    2013-04-01

    The organic-rich Marcellus Shale was deposited in a foreland basin during the Middle Devonian. In terms of mineral composition and organic matter richness, we define seven mudrock lithofacies: three organic-rich lithofacies and four organic-poor lithofacies. The 3D lithofacies model is very helpful for determining geologic and engineering sweet spots, and consequently useful for designing horizontal well trajectories and stimulation strategies. NeuroEvolution of Augmenting Topologies (NEAT) is a relatively new idea in the design of neural networks, and sheds light on classification problems such as Marcellus Shale lithofacies prediction. We have successfully enhanced the capability and efficiency of NEAT in three aspects. First, we introduced two new attributes of the node gene, node location and recurrent connection (RCC), to increase calculation efficiency. Second, we evolved the population size from a small initial value to a larger one, instead of using a constant value, which saves time and computer memory, especially for complex learning tasks. Third, in multiclass pattern recognition problems, we combined feature selection of input variables with a modular neural network to automatically select input variables and optimize the network topology for each binary classifier. These improvements were tested and verified with exclusive-or (XOR) experiments (XOR is true if an odd number of its arguments are true and false otherwise) and proved powerful for classification.

  19. Modelling the meteorological forest fire niche in heterogeneous pyrologic conditions.

    PubMed

    De Angelis, Antonella; Ricotta, Carlo; Conedera, Marco; Pezzatti, Gianni Boris

    2015-01-01

    Fire regimes are strongly related to weather conditions that directly and indirectly influence fire ignition and propagation. Identifying the most important meteorological fire drivers is thus fundamental for daily fire risk forecasting. In this context, several fire weather indices have been developed focussing mainly on fire-related local weather conditions and fuel characteristics. The specificity of the conditions for which fire danger indices are developed makes their direct transfer and applicability to different areas or other fuel types problematic. In this paper we used the low-to-intermediate fire-prone region of Canton Ticino as a case study to develop a new daily fire danger index by implementing a niche modelling approach (Maxent). In order to identify the most suitable weather conditions for fires, different combinations of input variables were tested (meteorological variables, existing fire danger indices or a combination of both). Our findings demonstrate that such combinations of input variables increase the predictive power of the resulting index; surprisingly, even using meteorological variables alone allows similar or better performance than using the complex Canadian Fire Weather Index (FWI). Furthermore, the niche modelling approach based on Maxent resulted in slightly improved model performance and in a reduced number of selected variables with respect to the classical logistic approach. Factors influencing final model robustness were the number of fire events considered and the specificity of the meteorological conditions leading to fire ignition.

  20. Modelling the Meteorological Forest Fire Niche in Heterogeneous Pyrologic Conditions

    PubMed Central

    De Angelis, Antonella; Ricotta, Carlo; Conedera, Marco; Pezzatti, Gianni Boris

    2015-01-01

    Fire regimes are strongly related to weather conditions that directly and indirectly influence fire ignition and propagation. Identifying the most important meteorological fire drivers is thus fundamental for daily fire risk forecasting. In this context, several fire weather indices have been developed focussing mainly on fire-related local weather conditions and fuel characteristics. The specificity of the conditions for which fire danger indices are developed makes their direct transfer and applicability to different areas or other fuel types problematic. In this paper we used the low-to-intermediate fire-prone region of Canton Ticino as a case study to develop a new daily fire danger index by implementing a niche modelling approach (Maxent). In order to identify the most suitable weather conditions for fires, different combinations of input variables were tested (meteorological variables, existing fire danger indices or a combination of both). Our findings demonstrate that such combinations of input variables increase the predictive power of the resulting index; surprisingly, even using meteorological variables alone allows similar or better performance than using the complex Canadian Fire Weather Index (FWI). Furthermore, the niche modelling approach based on Maxent resulted in slightly improved model performance and in a reduced number of selected variables with respect to the classical logistic approach. Factors influencing final model robustness were the number of fire events considered and the specificity of the meteorological conditions leading to fire ignition. PMID:25679957

  1. Estimating the Uncertain Mathematical Structure of Hydrological Model via Bayesian Data Assimilation

    NASA Astrophysics Data System (ADS)

    Bulygina, N.; Gupta, H.; O'Donell, G.; Wheater, H.

    2008-12-01

    The structure of a hydrological model at the macro scale (e.g., watershed) is inherently uncertain due to many factors, including the lack of a robust hydrological theory at that scale. In this work, we assume that a suitable conceptual model for the hydrologic system has already been determined - i.e., the system boundaries have been specified, the important state variables and input and output fluxes to be included have been selected, and the major hydrological processes and the geometries of their interconnections have been identified. The structural identification problem then is to specify the mathematical form of the relationships between the inputs, state variables and outputs, so that a computational model can be constructed for making simulations and/or predictions of system input-state-output behaviour. We show how Bayesian data assimilation can be used to merge prior beliefs in the form of pre-assumed model equations with information derived from the data to construct a posterior model. The approach, entitled Bayesian Estimation of Structure (BESt), is used to estimate a hydrological model for a small basin in England, at hourly time scales, conditioned on the assumption of a conceptual model structure with a 3-dimensional state (soil moisture storage, fast and slow flow stores). Inputs to the system are precipitation and potential evapotranspiration, and outputs are actual evapotranspiration and streamflow discharge. Results show the difference between prior and posterior mathematical structures, and provide prediction confidence intervals that reflect three types of uncertainty: due to initial conditions, due to inputs and due to mathematical structure.

  2. NLSCIDNT user's guide maximum likelihood parameter identification computer program with nonlinear rotorcraft model

    NASA Technical Reports Server (NTRS)

    1979-01-01

    A nonlinear, maximum likelihood, parameter identification computer program (NLSCIDNT) is described which evaluates rotorcraft stability and control coefficients from flight test data. The optimal estimates of the parameters (stability and control coefficients) are determined (identified) by minimizing the negative log likelihood cost function. The minimization technique is the Levenberg-Marquardt method, which behaves like the steepest descent method when it is far from the minimum and behaves like the modified Newton-Raphson method when it is nearer the minimum. Twenty-one states and 40 measurement variables are modeled, and any subset may be selected. States which are not integrated may be fixed at an input value, or time history data may be substituted for the state in the equations of motion. Any aerodynamic coefficient may be expressed as a nonlinear polynomial function of selected 'expansion variables'.

  3. Control design methods for floating wind turbines for optimal disturbance rejection

    NASA Astrophysics Data System (ADS)

    Lemmer, Frank; Schlipf, David; Cheng, Po Wen

    2016-09-01

    An analysis of the floating wind turbine as a multi-input-multi-output system investigating the effect of the control inputs on the system outputs is shown. These effects are compared to the ones of the disturbances from wind and waves in order to give insights for the selection of the control layout. The frequencies with the largest impact on the outputs due to limited effect of the controlled variables are identified. Finally, an optimal controller is designed as a benchmark and compared to a conventional PI-controller using only the rotor speed as input. Here, the previously found system properties, especially the difficulties to damp responses to wave excitation, are confirmed and verified through a spectral analysis with realistic environmental conditions. This comparison also assesses the quality of the employed simplified linear simulation model compared to the nonlinear model and shows that such an efficient frequency-domain evaluation for control design is feasible.

  4. LTCP 2D Graphical User Interface. Application Description and User's Guide

    NASA Technical Reports Server (NTRS)

    Ball, Robert; Navaz, Homayun K.

    1996-01-01

    A graphical user interface (GUI) written for NASA's LTCP (Liquid Thrust Chamber Performance) 2-dimensional computational fluid dynamics code is described. The GUI is written in C++ for a desktop personal computer running under a Microsoft Windows operating environment. Through the use of common and familiar dialog boxes, features, and tools, the user can easily and quickly create and modify input files for the LTCP code. In addition, old input files used with the LTCP code can be opened and modified using the GUI. The program and its capabilities are presented, followed by a detailed description of each menu selection and the method of creating an input file for LTCP. A cross reference is included to help experienced users quickly find the variables which commonly need changes. Finally, the system requirements and installation instructions are provided.

  5. Deterministic quantum teleportation of photonic quantum bits by a hybrid technique.

    PubMed

    Takeda, Shuntaro; Mizuta, Takahiro; Fuwa, Maria; van Loock, Peter; Furusawa, Akira

    2013-08-15

    Quantum teleportation allows for the transfer of arbitrary unknown quantum states from a sender to a spatially distant receiver, provided that the two parties share an entangled state and can communicate classically. It is the essence of many sophisticated protocols for quantum communication and computation. Photons are an optimal choice for carrying information in the form of 'flying qubits', but the teleportation of photonic quantum bits (qubits) has been limited by experimental inefficiencies and restrictions. Main disadvantages include the fundamentally probabilistic nature of linear-optics Bell measurements, as well as the need either to destroy the teleported qubit or attenuate the input qubit when the detectors do not resolve photon numbers. Here we experimentally realize fully deterministic quantum teleportation of photonic qubits without post-selection. The key step is to make use of a hybrid technique involving continuous-variable teleportation of a discrete-variable, photonic qubit. When the receiver's feedforward gain is optimally tuned, the continuous-variable teleporter acts as a pure loss channel, and the input dual-rail-encoded qubit, based on a single photon, represents a quantum error detection code against photon loss and hence remains completely intact for most teleportation events. This allows for a faithful qubit transfer even with imperfect continuous-variable entangled states: for four qubits the overall transfer fidelities range from 0.79 to 0.82 and all of them exceed the classical limit of teleportation. Furthermore, even for a relatively low level of the entanglement, qubits are teleported much more efficiently than in previous experiments, albeit post-selectively (taking into account only the qubit subspaces), and with a fidelity comparable to the previously reported values.

  6. Forecast Modelling via Variations in Binary Image-Encoded Information Exploited by Deep Learning Neural Networks.

    PubMed

    Liu, Da; Xu, Ming; Niu, Dongxiao; Wang, Shoukai; Liang, Sai

    2016-01-01

    Traditional forecasting models fit a function approximation from independent variables to dependent variables. However, they usually get into trouble when data are presented in various formats, such as text, voice and image. This study proposes a novel image-encoded forecasting method in which input and output binary digital two-dimensional (2D) images are transformed from decimal data. Omitting any data analysis or cleansing steps for simplicity, all raw variables were selected and converted to binary digital images as the input of a deep learning model, a convolutional neural network (CNN). Using shared weights, pooling and multiple-layer back-propagation techniques, the CNN was adopted to locate the nexus among variations in local binary digital images. Due to computing capability originally developed for binary digital bitmap manipulation, this model has significant potential for forecasting with vast volumes of data. The model was validated on a power load prediction dataset from the Global Energy Forecasting Competition 2012.

  7. Forecast Modelling via Variations in Binary Image-Encoded Information Exploited by Deep Learning Neural Networks

    PubMed Central

    Xu, Ming; Niu, Dongxiao; Wang, Shoukai; Liang, Sai

    2016-01-01

    Traditional forecasting models fit a function approximation from independent variables to dependent variables. However, they usually get into trouble when data are presented in various formats, such as text, voice and image. This study proposes a novel image-encoded forecasting method in which input and output binary digital two-dimensional (2D) images are transformed from decimal data. Omitting any data analysis or cleansing steps for simplicity, all raw variables were selected and converted to binary digital images as the input of a deep learning model, a convolutional neural network (CNN). Using shared weights, pooling and multiple-layer back-propagation techniques, the CNN was adopted to locate the nexus among variations in local binary digital images. Due to computing capability originally developed for binary digital bitmap manipulation, this model has significant potential for forecasting with vast volumes of data. The model was validated on a power load prediction dataset from the Global Energy Forecasting Competition 2012. PMID:27281032

  8. Modelling daily dissolved oxygen concentration using least square support vector machine, multivariate adaptive regression splines and M5 model tree

    NASA Astrophysics Data System (ADS)

    Heddam, Salim; Kisi, Ozgur

    2018-04-01

    In the present study, three types of artificial intelligence techniques, least square support vector machine (LSSVM), multivariate adaptive regression splines (MARS) and M5 model tree (M5T), are applied for modeling daily dissolved oxygen (DO) concentration using several water quality variables as inputs. The DO concentration and water quality variables data from three stations operated by the United States Geological Survey (USGS) were used for developing the three models. The selected water quality data consisted of daily measurements of water temperature (TE, °C), pH (std. unit), specific conductance (SC, μS/cm) and discharge (DI, cfs), which were used as inputs to the LSSVM, MARS and M5T models. The three models were applied for each station separately and compared to each other. According to the results obtained, it was found that: (i) the DO concentration could be successfully estimated using the three models and (ii) the best model differs from one station to another.

  9. Improving permafrost distribution modelling using feature selection algorithms

    NASA Astrophysics Data System (ADS)

    Deluigi, Nicola; Lambiel, Christophe; Kanevski, Mikhail

    2016-04-01

    The availability of an increasing number of spatial data on the occurrence of mountain permafrost allows the employment of machine learning (ML) classification algorithms for modelling the distribution of the phenomenon. One of the major problems when dealing with high-dimensional datasets is the number of input features (variables) involved. Application of ML classification algorithms to this large number of variables leads to the risk of overfitting, with the consequence of poor generalization/prediction. For this reason, applying feature selection (FS) techniques helps simplify the number of factors required and improves knowledge of the adopted features and their relation to the studied phenomenon. Moreover, taking irrelevant or redundant variables out of the dataset effectively improves the quality of the ML prediction. This research deals with a comparative analysis of permafrost distribution models supported by FS variable importance assessment. The input dataset (dimension = 20-25, 10 m spatial resolution) was constructed using landcover maps, climate data and DEM derived variables (altitude, aspect, slope, terrain curvature, solar radiation, etc.). It was completed with permafrost evidences (geophysical and thermal data and rock glacier inventories) that serve as training permafrost data. The FS algorithms used indicated which variables appeared less statistically important for permafrost presence/absence. Three different algorithms were compared: Information Gain (IG), Correlation-based Feature Selection (CFS) and Random Forest (RF). IG is a filter technique that evaluates the worth of a predictor by measuring the information gain with respect to the permafrost presence/absence. Conversely, CFS is a wrapper technique that evaluates the worth of a subset of predictors by considering the individual predictive ability of each variable along with the degree of redundancy between them. Finally, RF is a ML algorithm that performs FS as part of its overall operation. It operates by constructing a large collection of decorrelated classification trees, and then predicts the permafrost occurrence through a majority vote. With the so-called out-of-bag (OOB) error estimate, the classification of permafrost data can be validated and the contribution of each predictor can be assessed. The performances of the compared permafrost distribution models (computed on independent testing sets) increased when FS algorithms were applied to the original dataset and irrelevant or redundant variables were removed. As a consequence, the process provided faster and more cost-effective predictors and a better understanding of the underlying structures residing in permafrost data. Our work demonstrates the usefulness of a feature selection step prior to applying a machine learning algorithm. In fact, permafrost predictors could be ranked not only based on their heuristic and subjective importance (expert knowledge), but also based on their statistical relevance in relation to the permafrost distribution.
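
    A minimal sketch of this kind of comparison, assuming synthetic presence/absence data: a filter score (scikit-learn's mutual information estimate, standing in for Information Gain) is ranked against Random Forest importances, with the out-of-bag score reported as in the abstract.

```python
# Sketch: filter-style ranking (mutual information) versus embedded
# Random Forest importances on a synthetic binary classification problem.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import mutual_info_classif

X, y = make_classification(n_samples=1000, n_features=20,
                           n_informative=6, random_state=1)

mi = mutual_info_classif(X, y, random_state=1)        # filter ranking
rf = RandomForestClassifier(n_estimators=300, oob_score=True,
                            random_state=1).fit(X, y)  # embedded ranking

print("top-5 by mutual information:", np.argsort(mi)[::-1][:5])
print("top-5 by RF importance:     ",
      np.argsort(rf.feature_importances_)[::-1][:5])
print("OOB accuracy:", round(rf.oob_score_, 3))
```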

  10. Decision tree modeling using R.

    PubMed

    Zhang, Zhongheng

    2016-08-01

    In the machine learning field, the decision tree learner is powerful and easy to interpret. It employs a recursive binary partitioning algorithm that splits the sample on the partitioning variable with the strongest association with the response variable. The process continues until some stopping criteria are met. In the example I focus on the conditional inference tree, which incorporates tree-structured regression models into conditional inference procedures. Since growing a single tree is sensitive to small changes in the training data, the random forests procedure is introduced to address this problem. The sources of diversity for random forests are random sampling and the restricted set of input variables available for selection at each split. Finally, I introduce R functions to perform model-based recursive partitioning, a method that incorporates recursive partitioning into conventional parametric model building.
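
    The article demonstrates these ideas with R functions; the sketch below is a hedged Python equivalent using scikit-learn, which grows CART-style trees rather than conditional inference trees: a single recursively partitioned tree is compared against a forest that averages many decorrelated trees.

```python
# Sketch: single decision tree versus random forest, cross-validated.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)

tree = DecisionTreeClassifier(min_samples_leaf=10, random_state=0)
forest = RandomForestClassifier(n_estimators=200, random_state=0)

print("single tree:", cross_val_score(tree, X, y, cv=5).mean().round(3))
print("forest:     ", cross_val_score(forest, X, y, cv=5).mean().round(3))
```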

  11. A hybrid machine learning model to predict and visualize nitrate concentration throughout the Central Valley aquifer, California, USA

    USGS Publications Warehouse

    Ransom, Katherine M.; Nolan, Bernard T.; Traum, Jonathan A.; Faunt, Claudia; Bell, Andrew M.; Gronberg, Jo Ann M.; Wheeler, David C.; Zamora, Celia; Jurgens, Bryant; Schwarz, Gregory E.; Belitz, Kenneth; Eberts, Sandra; Kourakos, George; Harter, Thomas

    2017-01-01

    Intense demand for water in the Central Valley of California and related increases in groundwater nitrate concentration threaten the sustainability of the groundwater resource. To assess contamination risk in the region, we developed a hybrid, non-linear, machine learning model within a statistical learning framework to predict nitrate contamination of groundwater to depths of approximately 500 m below ground surface. A database of 145 predictor variables representing well characteristics, historical and current field and landscape-scale nitrogen mass balances, historical and current land use, oxidation/reduction conditions, groundwater flow, climate, soil characteristics, depth to groundwater, and groundwater age was assigned to over 6000 private supply and public supply wells measured previously for nitrate and located throughout the study area. The boosted regression tree (BRT) method was used to screen and rank variables to predict nitrate concentration at the depths of domestic and public well supplies. The novel approach included as predictor variables outputs from existing physically based models of the Central Valley. The top five most important predictor variables included two oxidation/reduction variables (probability of manganese concentration to exceed 50 ppb and probability of dissolved oxygen concentration to be below 0.5 ppm), field-scale adjusted unsaturated zone nitrogen input for the 1975 time period, average difference between precipitation and evapotranspiration during the years 1971–2000, and 1992 total landscape nitrogen input. Twenty-five variables were selected for the final model for log-transformed nitrate. In general, increasing probability of anoxic conditions and increasing precipitation relative to potential evapotranspiration had a corresponding decrease in nitrate concentration predictions. Conversely, increasing 1975 unsaturated zone nitrogen leaching flux and 1992 total landscape nitrogen input had an increasing relative impact on nitrate predictions. Three-dimensional visualization indicates that nitrate predictions depend on the probability of anoxic conditions and other factors, and that nitrate predictions generally decreased with increasing groundwater age.
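
    A sketch of the screening step in spirit: gradient-boosted regression trees are fit to log-transformed nitrate and predictors are ranked by importance. The variable names echo the predictors named above but are placeholders, and the data are synthetic, not the Central Valley database.

```python
# Sketch: BRT-style variable screening with gradient-boosted trees.
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(7)
n = 2000
X = pd.DataFrame({
    "p_manganese_50ppb": rng.uniform(0, 1, n),   # redox proxy (assumed name)
    "p_do_below_05ppm": rng.uniform(0, 1, n),    # redox proxy (assumed name)
    "uz_nitrogen_1975": rng.gamma(2.0, 20.0, n),
    "precip_minus_et": rng.normal(0, 150, n),
    "landscape_n_1992": rng.gamma(2.0, 30.0, n),
})
# Synthetic log-nitrate target with an assumed dependence structure.
log_no3 = (0.8 * X["uz_nitrogen_1975"] / 40 - 1.2 * X["p_do_below_05ppm"]
           + rng.normal(0, 0.5, n))

brt = GradientBoostingRegressor(n_estimators=500, learning_rate=0.05,
                                max_depth=3, subsample=0.8, random_state=7)
brt.fit(X, log_no3)
ranking = sorted(zip(X.columns, brt.feature_importances_),
                 key=lambda t: -t[1])
print(ranking)
```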

  12. Principal component-based weighted indices and a framework to evaluate indices: Results from the Medical Expenditure Panel Survey 1996 to 2011

    PubMed Central

    Wu, Chao-Jung

    2017-01-01

    Producing indices composed of multiple input variables has been embedded in some data processing and analytical methods. We aim to test the feasibility of creating data-driven indices by aggregating input variables according to principal component analysis (PCA) loadings. To validate the significance of both the theory-based and data-driven indices, we propose principles to review innovative indices. We generated weighted indices with the variables obtained in the first years of the two-year panels in the Medical Expenditure Panel Survey initiated between 1996 and 2011. Variables were weighted according to PCA loadings and summed. The statistical significance and residual deviance of each index to predict mortality in the second years was extracted from the results of discrete-time survival analyses. There were 237,832 individuals surviving the first years of the panels, representing 4.5 billion civilians in the United States, of which 0.62% (95% CI = 0.58% to 0.66%) died in the second years of the panels. Of all 134,689 weighted indices, there were 40,803 significantly predicting mortality in the second years with or without adjustment for age, sex and race. The significant indices in both models could at most lead to 10,200 years of academic tenure for individual researchers publishing four indices per year or 618.2 years of publishing for journals with an annual volume of 66 articles. In conclusion, if aggregating information based on PCA loadings, there can be a large number of significant innovative indices composing input variables of various predictive powers. To justify the large quantities of innovative indices, we propose a reporting and review framework for novel indices based on the objectives to create indices, variable weighting, related outcomes and database characteristics. The indices selected by this framework could lead to a new genre of publications focusing on meaningful aggregation of information. PMID:28886057
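
    The index construction itself can be sketched in a few lines: standardize the input variables, take the first-principal-component loadings as weights, and sum. The data below are synthetic stand-ins; the study repeats this kind of aggregation at scale across MEPS variables.

```python
# Sketch: build a data-driven index by weighting standardized variables
# with first-principal-component loadings and summing.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(3)
X = rng.normal(size=(500, 6))                 # 6 input variables, synthetic
Z = StandardScaler().fit_transform(X)         # standardize each variable

pca = PCA(n_components=1).fit(Z)
loadings = pca.components_[0]                 # PC1 loadings as weights
index = Z @ loadings                          # one weighted index per subject
print(index[:5].round(3))
```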

  13. Variance-based interaction index measuring heteroscedasticity

    NASA Astrophysics Data System (ADS)

    Ito, Keiichi; Couckuyt, Ivo; Poles, Silvia; Dhaene, Tom

    2016-06-01

    This work is motivated by the need to deal with models with high-dimensional input spaces of real variables. One way to tackle high-dimensional problems is to identify interaction or non-interaction among input parameters. We propose a new variance-based sensitivity interaction index that can detect and quantify interactions among the input variables of mathematical functions and computer simulations. The computation is very similar to that of the first-order sensitivity indices of Sobol'. The proposed interaction index can quantify the relative importance of input variables in interaction. Furthermore, detection of non-interaction for screening can be done with as few as 4n + 2 function evaluations, where n is the number of input variables. Using the interaction indices based on heteroscedasticity, the original function may be decomposed into a set of lower dimensional functions which may then be analyzed separately.
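
    For context, a crude estimate of the first-order quantity such indices build on, S_i = Var(E[Y|X_i]) / Var(Y), can be obtained by binning a Monte Carlo sample on each input; the test function below is an assumed example in which one input acts only through an interaction.

```python
# Sketch: first-order variance-based sensitivity via binned conditional means.
import numpy as np

def first_order_index(x, y, bins=40):
    """Crude estimate of Var(E[Y|X]) / Var(Y) by binning on x."""
    edges = np.quantile(x, np.linspace(0, 1, bins + 1))
    idx = np.clip(np.searchsorted(edges, x, side="right") - 1, 0, bins - 1)
    cond_means = np.array([y[idx == b].mean() for b in range(bins)])
    counts = np.bincount(idx, minlength=bins)
    return np.average((cond_means - y.mean())**2, weights=counts) / y.var()

rng = np.random.default_rng(0)
x1, x2 = rng.uniform(-1, 1, (2, 100_000))
y = x1 + 0.3 * x1 * x2         # x2 acts only through interaction with x1
print(round(first_order_index(x1, y), 2))  # large main effect for x1
print(round(first_order_index(x2, y), 2))  # near zero: interaction only
```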

  14. Study on the medical meteorological forecast of the number of hypertension inpatient based on SVR

    NASA Astrophysics Data System (ADS)

    Zhai, Guangyu; Chai, Guorong; Zhang, Haifeng

    2017-06-01

    The purpose of this study is to build a hypertension prediction model by examining the meteorological factors influencing hypertension incidence. The method selects standardized data on relative humidity, air temperature, visibility, wind speed and air pressure in Lanzhou from 2010 to 2012 (computing the maximum, minimum and average over 5-day windows) as input variables for Support Vector Regression (SVR), and standardized hypertension incidence data for the same period as the output variable. Optimal prediction parameters are obtained by cross-validation, and an SVR forecast model for hypertension incidence is built through training. The results show that the hypertension prediction model comprises 15 input variables, the training accuracy is 0.005, and the final error is 0.0026389. The forecast accuracy of the SVR model is 97.1429%, higher than those of a statistical forecast equation and a neural network prediction method. It is concluded that the SVR model provides a new method for hypertension prediction, with simple calculation, small error, good fit to historical samples, and independent-sample forecast capability.
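
    A minimal sketch of this modelling recipe, assuming synthetic data: standardized meteorological features feed an RBF-kernel SVR whose parameters are chosen by cross-validation, mirroring the paper's cross-validation step (the study's exact kernel and parameter grid are not specified here).

```python
# Sketch: SVR with cross-validated hyperparameters on synthetic weather data.
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(5)
X = rng.normal(size=(200, 15))               # 15 meteorological inputs
y = X[:, 0] - 0.5 * X[:, 3] + rng.normal(0, 0.3, 200)  # synthetic incidence

pipe = make_pipeline(StandardScaler(), SVR(kernel="rbf"))
grid = GridSearchCV(pipe, {"svr__C": [1, 10, 100],
                           "svr__epsilon": [0.01, 0.1, 0.5]}, cv=5)
grid.fit(X, y)
print(grid.best_params_, round(grid.best_score_, 3))
```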

  15. Variable frequency microwave furnace system

    DOEpatents

    Bible, Don W.; Lauf, Robert J.

    1994-01-01

    A variable frequency microwave furnace system (10) designed to allow modulation of the frequency of the microwaves introduced into a furnace cavity (34) for testing or other selected applications. The variable frequency microwave furnace system (10) includes a microwave signal generator (12) or microwave voltage-controlled oscillator (14) for generating a low-power microwave signal for input to the microwave furnace. A first amplifier (18) may be provided to amplify the magnitude of the signal output from the microwave signal generator (12) or the microwave voltage-controlled oscillator (14). A second amplifier (20) is provided for processing the signal output by the first amplifier (18). The second amplifier (20) outputs the microwave signal input to the furnace cavity (34). In the preferred embodiment, the second amplifier (20) is a traveling-wave tube (TWT). A power supply (22) is provided for operation of the second amplifier (20). A directional coupler (24) is provided for detecting the direction of a signal and further directing the signal depending on the detected direction. A first power meter (30) is provided for measuring the power delivered to the microwave furnace (32). A second power meter (26) detects the magnitude of reflected power. Reflected power is dissipated in the reflected power load (28).

  16. Our Selections and Decisions: Inherent Features of the Nervous System?

    NASA Astrophysics Data System (ADS)

    Rösler, Frank

    The chapter summarizes findings on the neuronal bases of decision-making. Taking the phenomenon of selection as an example, it is explained that systems built only from excitatory and inhibitory neurons (populations) have the emergent property of selecting between different alternatives. These considerations suggest that there exists a hierarchical architecture with central selection switches. However, in such a system, functions of selection and decision-making are not localized, but rather emerge from an interaction of several participating networks. These are, on the one hand, networks that process specific input and output representations and, on the other hand, networks that regulate the relative activation/inhibition of the specific input and output networks. These ideas are supported by recent empirical evidence. Moreover, other studies show that rather complex psychological variables, like subjective probability estimates, expected gains and losses, prediction errors, etc., do have biological correlates, i.e., they can be localized in time and space as activation states of neural networks and single cells. These findings suggest that selections and decisions are consequences of an architecture which, seen from a biological perspective, is fully deterministic. However, a transposition of such nomothetic functional principles into the idiographic domain, i.e., using them as elements for comprehensive 'mechanistic' explanations of individual decisions, seems not to be possible because of limitations in principle. Therefore, individual decisions will remain predictable by means of probabilistic models alone.

  17. Input-variable sensitivity assessment for sediment transport relations

    NASA Astrophysics Data System (ADS)

    Fernández, Roberto; Garcia, Marcelo H.

    2017-09-01

    A methodology to assess input-variable sensitivity for sediment transport relations is presented. The Mean Value First Order Second Moment Method (MVFOSM) is applied to two bed load transport equations showing that it may be used to rank all input variables in terms of how their specific variance affects the overall variance of the sediment transport estimation. In sites where data are scarce or nonexistent, the results obtained may be used to (i) determine what variables would have the largest impact when estimating sediment loads in the absence of field observations and (ii) design field campaigns to specifically measure those variables for which a given transport equation is most sensitive; in sites where data are readily available, the results would allow quantifying the effect that the variance associated with each input variable has on the variance of the sediment transport estimates. An application of the method to two transport relations using data from a tropical mountain river in Costa Rica is implemented to exemplify the potential of the method in places where input data are limited. Results are compared against Monte Carlo simulations to assess the reliability of the method and validate its results. For both of the sediment transport relations used in the sensitivity analysis, accurate knowledge of sediment size was found to have more impact on sediment transport predictions than precise knowledge of other input variables such as channel slope and flow discharge.
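
    The MVFOSM idea reduces to propagating input variances through a first-order expansion around the input means, Var(q) ≈ Σ (∂f/∂x_i)² Var(x_i), so each term ranks one input variable. The sketch below applies this to a generic power-law transport relation; the exponents, means and standard deviations are illustrative assumptions, not the relations or data used in the paper.

```python
# Sketch: MVFOSM-style variance propagation for a generic transport relation.
import numpy as np

def f(x):
    slope, discharge, grain_size = x
    # Assumed power-law transport relation, for illustration only.
    return 0.05 * slope**1.5 * discharge**1.2 * grain_size**-0.8

means = np.array([0.02, 35.0, 0.004])     # illustrative mean inputs
stds = np.array([0.004, 5.0, 0.001])      # illustrative input std devs

# Central finite differences for the partial derivatives at the means.
grad = np.empty(3)
for i in range(3):
    h = 1e-6 * means[i]
    xp, xm = means.copy(), means.copy()
    xp[i] += h
    xm[i] -= h
    grad[i] = (f(xp) - f(xm)) / (2 * h)

contrib = (grad * stds)**2                # per-variable variance share
for name, c in zip(["slope", "discharge", "grain size"],
                   contrib / contrib.sum()):
    print(f"{name}: {100 * c:.1f}% of output variance")
```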

  18. A Multifactor Approach to Research in Instructional Technology.

    ERIC Educational Resources Information Center

    Ragan, Tillman J.

    In a field such as instructional design, explanations of educational outcomes must necessarily consider multiple input variables. To adequately understand the contribution made by the independent variables, it is helpful to have a visual conception of how the input variables interrelate. Two variable models are adequately represented by a two…

  19. A Review and Annotated Bibliography of the Literature Pertaining to Team and Small Group Performance (1989 to 1999)

    DTIC Science & Technology

    1999-12-01

    good interventions. This literature could be helpful to those who are planning changes or interventions concerning a wide variety of team-related...factors. Examples of input variables that can potentially be influenced by changes or interventions include the organizational reward system...through local resources or inter-library loans, the number of articles we reviewed was 200. From those 200, we selected approximately 80 articles

  20. Weather models as virtual sensors to data-driven rainfall predictions in urban watersheds

    NASA Astrophysics Data System (ADS)

    Cozzi, Lorenzo; Galelli, Stefano; Pascal, Samuel Jolivet De Marc; Castelletti, Andrea

    2013-04-01

    Weather and climate predictions are a key element of urban hydrology, where they are used to inform water management and assist in flood warning delivery. Indeed, the modelling of the very fast dynamics of urbanized catchments can be substantially improved by the use of weather/rainfall predictions. For example, in Singapore's Marina Reservoir catchment, runoff processes have a very short time of concentration (roughly one hour); observational data are thus nearly useless for runoff predictions, and weather predictions are required. Unfortunately, radar nowcasting methods do not allow carrying out long-term weather predictions, whereas numerical models are limited by their coarse spatial scale. Moreover, numerical models are usually of poor reliability because of the fast motion and limited spatial extension of rainfall events. In this study we investigate the combined use of data-driven modelling techniques and weather variables observed/simulated with a numerical model as a way to improve rainfall prediction accuracy and lead time in the Singapore metropolitan area. To explore the feasibility of the approach, we use a Weather Research and Forecasting (WRF) model as a virtual sensor network for the input variables (the states of the WRF model) to a machine learning rainfall prediction model. More precisely, we combine an input variable selection method and a non-parametric tree-based model to characterize the empirical relation between the rainfall measured at the catchment level and all candidate weather input variables provided by the WRF model. We explore different lead times to evaluate model reliability for longer-term predictions, as well as different time lags to see how past information could improve results. Results show that the proposed approach allows a significant improvement in the prediction accuracy of the WRF model over the Singapore urban area.
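
    The workflow can be sketched as follows: simulated weather fields act as candidate inputs ("virtual sensors"), a tree ensemble ranks them, and rainfall is predicted from the informative subset. The variable names, lags and data below are hypothetical stand-ins for WRF model states.

```python
# Sketch: rank candidate weather inputs with a tree ensemble, then predict.
import numpy as np
from sklearn.ensemble import ExtraTreesRegressor

rng = np.random.default_rng(11)
n = 3000
candidates = {                       # hypothetical WRF "virtual sensors"
    "humidity_850hPa_lag1": rng.uniform(0, 1, n),
    "cape_lag1": rng.gamma(2.0, 300.0, n),
    "wind_conv_lag2": rng.normal(0, 1, n),
    "temp_2m_lag1": rng.normal(300, 5, n),
}
X = np.column_stack(list(candidates.values()))
rain = 2.0 * X[:, 0] + 0.002 * X[:, 1] + rng.normal(0, 0.5, n)  # synthetic

model = ExtraTreesRegressor(n_estimators=300, random_state=11).fit(X, rain)
for name, imp in sorted(zip(candidates, model.feature_importances_),
                        key=lambda t: -t[1]):
    print(f"{name}: {imp:.2f}")
```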

  1. Learning place cells, grid cells and invariances with excitatory and inhibitory plasticity

    PubMed Central

    2018-01-01

    Neurons in the hippocampus and adjacent brain areas show a large diversity in their tuning to location and head direction, and the underlying circuit mechanisms are not yet resolved. In particular, it is unclear why certain cell types are selective to one spatial variable, but invariant to another. For example, place cells are typically invariant to head direction. We propose that all observed spatial tuning patterns – in both their selectivity and their invariance – arise from the same mechanism: Excitatory and inhibitory synaptic plasticity driven by the spatial tuning statistics of synaptic inputs. Using simulations and a mathematical analysis, we show that combined excitatory and inhibitory plasticity can lead to localized, grid-like or invariant activity. Combinations of different input statistics along different spatial dimensions reproduce all major spatial tuning patterns observed in rodents. Our proposed model is robust to changes in parameters, develops patterns on behavioral timescales and makes distinctive experimental predictions. PMID:29465399

  2. Prediction of Layer Thickness in Molten Borax Bath with Genetic Evolutionary Programming

    NASA Astrophysics Data System (ADS)

    Taylan, Fatih

    2011-04-01

    In this study, the vanadium carbide coating process in a molten borax bath is modeled by genetic evolutionary programming (GEP) using bath composition (borax percentage, ferro vanadium (Fe-V) percentage, boric acid percentage), bath temperature, immersion time, and layer thickness data. The model has five inputs and one output: the percentages of borax, Fe-V and boric acid, the temperature, and the immersion time are used as input data, and the layer thickness is used as output data. For selected bath compositions, immersion times, and temperatures, the layer thicknesses are derived from the resulting mathematical expression. The results of the mathematical expression are compared to the experimental data; it is determined that the derived expression has an accuracy of 89%.

  3. Flight dynamics analysis and simulation of heavy lift airships, volume 4. User's guide: Appendices

    NASA Technical Reports Server (NTRS)

    Emmen, R. D.; Tischler, M. B.

    1982-01-01

    This table contains all of the input variables to the three programs. The variables are arranged according to the namelist groups in which they appear in the data files. The program name, subroutine name, definition and, where appropriate, a default input value and any restrictions are listed with each variable. The default input values are user supplied, not generated by the computer. These values remove a specific effect from the calculations, as explained in the table. The phrase "not used" indicates that a variable is not used in the calculations and is retained for identification purposes only. The engineering symbol, where it exists, is listed to assist the user in correlating these inputs with the discussion in the Technical Manual.

  4. Production Function Geometry with "Knightian" Total Product

    ERIC Educational Resources Information Center

    Truett, Dale B.; Truett, Lila J.

    2007-01-01

    Authors of principles and price theory textbooks generally illustrate short-run production using a total product curve that displays first increasing and then diminishing marginal returns to employment of the variable input(s). Although it seems reasonable that a temporary range of increasing returns to variable inputs will likely occur as…

  5. Artificial neural networks for the performance prediction of heat pump hot water heaters

    NASA Astrophysics Data System (ADS)

    Mathioulakis, E.; Panaras, G.; Belessiotis, V.

    2018-02-01

    The rapid progression in the use of heat pumps, driven by decreasing equipment costs together with the favourable economics of the consumed electrical energy, has been accompanied by the wide dissemination of air-to-water heat pumps (AWHPs) in the residential sector. The entrance of these systems into the commercial sector has made modelling of the underlying processes important. In this work, the suitability of artificial neural networks (ANNs) for the modelling of AWHPs is investigated. The ambient air temperature at the evaporator inlet and the water temperature at the condenser inlet were selected as the input variables; energy performance indices and quantities characterising the operation of the system were selected as output variables. The results verify that the trained, easy-to-implement ANN can represent an effective tool for predicting AWHP performance under various operating conditions and for parametric investigation of system behaviour.
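
    A minimal sketch of such an ANN, assuming synthetic training data: the two selected inputs (evaporator-inlet air temperature and condenser-inlet water temperature) are mapped to a performance index such as the COP. The simple linear COP expression is an assumed stand-in for measured data, not the paper's model.

```python
# Sketch: two-input neural network for heat pump performance prediction.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(2)
t_air = rng.uniform(-5, 35, 400)       # evaporator inlet air temp, °C
t_water = rng.uniform(20, 55, 400)     # condenser inlet water temp, °C
# Assumed synthetic COP relation with measurement noise.
cop = 6.0 + 0.06 * t_air - 0.05 * t_water + rng.normal(0, 0.1, 400)

X = np.column_stack([t_air, t_water])
ann = make_pipeline(StandardScaler(),
                    MLPRegressor(hidden_layer_sizes=(16, 16),
                                 max_iter=3000, random_state=2))
ann.fit(X, cop)
print(ann.predict([[7.0, 45.0]]).round(2))   # COP at a rating point
```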

  6. The Construct of Attention in Schizophrenia

    PubMed Central

    Luck, Steven J.; Gold, James M.

    2008-01-01

    Schizophrenia is widely thought to involve deficits of attention. However, the term attention can be defined so broadly that impaired performance on virtually any task could be construed as evidence for a deficit in attention, and this has slowed cumulative progress in understanding attention deficits in schizophrenia. To address this problem, we divide the general concept of attention into two distinct constructs: input selection, the selection of task-relevant inputs for further processing; and rule selection, the selective activation of task-appropriate rules. These constructs are closely tied to working memory, because input selection mechanisms are used to control the transfer of information into working memory and because working memory stores the rules used by rule selection mechanisms. These constructs are also closely tied to executive function, because executive systems are used to guide input selection and because rule selection is itself a key aspect of executive function. Within the domain of input selection, it is important to distinguish between the control of selection—the processes that guide attention to task-relevant inputs—and the implementation of selection—the processes that enhance the processing of the relevant inputs and suppress the irrelevant inputs. Current evidence suggests that schizophrenia involves a significant impairment in the control of selection but little or no impairment in the implementation of selection. Consequently, the CNTRICS participants agreed by consensus that attentional control should be a priority target for measurement and treatment research in schizophrenia. PMID:18374901

  7. Micro Computer Feedback Report for the Strategic Leader Development Inventory

    DTIC Science & Technology

    1993-05-01

    [OCR-garbled excerpt of an assembly-language source listing. Recoverable fragments include subroutine calls such as CREATE_MEM_DIR (make a memory directory), SELECT_SCREEN (display the select screen), READ_DATE (get the DOS date) and RELEASE_MEM_DIR (release the memory block), together with keyboard-input handling such as CMP AL,1Bh (test for the Esc key) and conditional jumps among the SEL2/SEL4/SEL5 labels.]

  8. A High-Linearity Low-Noise Amplifier with Variable Bandwidth for Neural Recoding Systems

    NASA Astrophysics Data System (ADS)

    Yoshida, Takeshi; Sueishi, Katsuya; Iwata, Atsushi; Matsushita, Kojiro; Hirata, Masayuki; Suzuki, Takafumi

    2011-04-01

    This paper describes a low-noise amplifier with multiple adjustable parameters for neural recording applications. An adjustable pseudo-resistor implemented with cascaded metal-oxide-semiconductor field-effect transistors (MOSFETs) is proposed to achieve low signal distortion and a wide variable-bandwidth range. The amplifier has been implemented in a 0.18 µm standard complementary metal-oxide-semiconductor (CMOS) process and occupies 0.09 mm2 on chip. The amplifier achieved a selectable voltage gain of 28 or 40 dB, a variable bandwidth from 0.04 to 2.6 Hz, total harmonic distortion (THD) of 0.2% with a 200 mV output swing, input-referred noise of 2.5 µVrms over 0.1-100 Hz and 18.7 µW power consumption at a supply voltage of 1.8 V.

  9. Contrast invariance of orientation tuning in the lateral geniculate nucleus of the feline visual system.

    PubMed

    Viswanathan, Sivaram; Jayakumar, Jaikishan; Vidyasagar, Trichur R

    2015-09-01

    Responses of most neurons in the primary visual cortex of mammals are markedly selective for stimulus orientation and their orientation tuning does not vary with changes in stimulus contrast. The basis of such contrast invariance of orientation tuning has been shown to be the higher variability in the response for low-contrast stimuli. Neurons in the lateral geniculate nucleus (LGN), which provides the major visual input to the cortex, have also been shown to have higher variability in their response to low-contrast stimuli. Parallel studies have also long established mild degrees of orientation selectivity in LGN and retinal cells. In our study, we show that contrast invariance of orientation tuning is already present in the LGN. In addition, we show that the variability of spike responses of LGN neurons increases at lower stimulus contrasts, especially for non-preferred orientations. We suggest that such contrast- and orientation-sensitive variability not only explains the contrast invariance observed in the LGN but can also underlie the contrast-invariant orientation tuning seen at the level of the primary visual cortex. © 2015 Federation of European Neuroscience Societies and John Wiley & Sons Ltd.

  10. Genetic algorithm based input selection for a neural network function approximator with applications to SSME health monitoring

    NASA Technical Reports Server (NTRS)

    Peck, Charles C.; Dhawan, Atam P.; Meyer, Claudia M.

    1991-01-01

    A genetic algorithm is used to select the inputs to a neural network function approximator. In the application considered, modeling critical parameters of the space shuttle main engine (SSME), the functional relationship between measured parameters is unknown and complex. Furthermore, the number of possible input parameters is quite large. Many approaches have been used for input selection, but they are either subjective or do not consider the complex multivariate relationships between parameters. Due to the optimization and space-searching capabilities of genetic algorithms, they were employed to systematize the input selection process. The results suggest that the genetic algorithm can generate parameter lists of high quality without the explicit use of problem domain knowledge. Suggestions for improving the performance of the input selection process are also provided.
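
    The selection loop can be sketched as a plain genetic algorithm over input bit-masks, scored here by cross-validated linear regression on synthetic data (the paper scores candidate sets with a neural network on SSME measurements):

    ```python
    # GA-driven input selection: evolve bit-masks over candidate inputs.
    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(2)
    X = rng.normal(size=(300, 12))                                  # 12 candidate inputs
    y = 2*X[:, 0] - 3*X[:, 4] + X[:, 7] + rng.normal(0, 0.5, 300)   # 3 truly relevant

    def fitness(mask):
        if not mask.any():
            return -np.inf
        return cross_val_score(LinearRegression(), X[:, mask], y, cv=5).mean()

    pop = rng.random((30, 12)) < 0.5                 # random initial population of masks
    for gen in range(40):
        scores = np.array([fitness(m) for m in pop])
        parents = pop[np.argsort(scores)[-10:]]      # truncation selection
        children = []
        while len(children) < len(pop):
            a, b = parents[rng.integers(0, 10, 2)]
            cut = rng.integers(1, 12)
            child = np.concatenate([a[:cut], b[cut:]])  # one-point crossover
            child ^= rng.random(12) < 0.05              # bit-flip mutation
            children.append(child)
        pop = np.array(children)

    best = pop[np.argmax([fitness(m) for m in pop])]
    print(np.flatnonzero(best))                      # indices of selected inputs
    ```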

  11. A Series of Case Studies of Tinnitus Suppression With Mixed Background Stimuli in a Cochlear Implant

    PubMed Central

    Keiner, A. J.; Walker, Kurt; Deshpande, Aniruddha K.; Witt, Shelley; Killian, Matthijs; Ji, Helena; Patrick, Jim; Dillier, Norbert; van Dijk, Pim; Lai, Wai Kong; Hansen, Marlan R.; Gantz, Bruce

    2015-01-01

    Purpose Background sounds provided by a wearable sound playback device were mixed with the acoustical input picked up by a cochlear implant speech processor in an attempt to suppress tinnitus. Method First, patients were allowed to listen to several sounds and to select up to 4 sounds that they thought might be effective. These stimuli were programmed to loop continuously in the wearable playback device. Second, subjects were instructed to use 1 background sound each day on the wearable device, and they sequenced the selected background sounds during a 28-day trial. Patients were instructed to go to a website at the end of each day and rate the loudness and annoyance of the tinnitus as well as the acceptability of the background sound. Patients completed the Tinnitus Primary Function Questionnaire (Tyler, Stocking, Secor, & Slattery, 2014) at the beginning of the trial. Results Results indicated that background sounds were very effective at suppressing tinnitus. There was considerable variability in sounds preferred by the subjects. Conclusion The study shows that a background sound mixed with the microphone input can be effective for suppressing tinnitus during daily use of the sound processor in selected cochlear implant users. PMID:26001407

  12. Design Choices for Thermofluid Flow Components and Systems that are Exported as Functional Mockup Units

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wetter, Michael; Fuchs, Marcus; Nouidui, Thierry

    This paper discusses design decisions for exporting Modelica thermofluid flow components as Functional Mockup Units. The purpose is to provide guidelines that will allow building energy simulation programs and HVAC equipment manufacturers to effectively use FMUs for modeling of HVAC components and systems. We provide an analysis for direct input-output dependencies of such components and discuss how these dependencies can lead to algebraic loops that are formed when connecting thermofluid flow components. Based on this analysis, we provide recommendations that increase the computing efficiency of such components and systems that are formed by connecting multiple components. We explain what code optimizations are lost when providing thermofluid flow components as FMUs rather than Modelica code. We present an implementation of a package for FMU export of such components, explain the rationale for selecting the connector variables of the FMUs and finally provide computing benchmarks for different design choices. It turns out that selecting temperature rather than specific enthalpy as input and output signals does not lead to a measurable increase in computing time, but selecting nine small FMUs rather than a large FMU increases computing time by 70%.

  13. Nonequilibrium air radiation (Nequair) program: User's manual

    NASA Technical Reports Server (NTRS)

    Park, C.

    1985-01-01

    A supplement to the data relating to the calculation of nonequilibrium radiation in flight regimes of aeroassisted orbital transfer vehicles contains the listings of the computer code NEQAIR (Nonequilibrium Air Radiation), its primary input data, and an explanation of the user-supplied input variables. The user-supplied input variables are the thermodynamic variables of air at a given point, i.e., number densities of various chemical species, translational temperatures of heavy particles and electrons, and vibrational temperature. These thermodynamic variables do not necessarily have to be in thermodynamic equilibrium. The code calculates emission and absorption characteristics of air under these given conditions.

  14. Analytic uncertainty and sensitivity analysis of models with input correlations

    NASA Astrophysics Data System (ADS)

    Zhu, Yueying; Wang, Qiuping A.; Li, Wei; Cai, Xu

    2018-03-01

    Probabilistic uncertainty analysis is a common means of evaluating mathematical models. In mathematical modeling, the uncertainty in input variables is specified through distribution laws. Its contribution to the uncertainty in the model response is usually analyzed by assuming that input variables are independent of each other. However, correlated parameters often occur in practical applications. In the present paper, an analytic method is built for the uncertainty and sensitivity analysis of models in the presence of input correlations. With the method, it is straightforward to identify the importance of the independence and correlations of input variables in determining the model response. This allows one to decide whether or not the input correlations should be considered in practice. Numerical examples demonstrate the effectiveness and validity of the analytic method in the analysis of general models. A practical application of the method to the uncertainty and sensitivity analysis of a deterministic HIV model is also presented.
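
    For a linear response the analytic point is easy to reproduce: with input covariance Sigma and coefficients c, the output variance is c' Sigma c, and the gap to the diagonal-only (independence) value isolates the correlation contribution. A sketch with illustrative numbers:

    ```python
    import numpy as np

    c = np.array([1.0, 2.0, -1.5])            # model coefficients of y = c.x
    sd = np.array([0.5, 1.0, 0.8])            # input standard deviations
    R = np.array([[1.0, 0.6, 0.0],            # input correlation matrix
                  [0.6, 1.0, -0.3],
                  [0.0, -0.3, 1.0]])
    Sigma = np.outer(sd, sd) * R

    var_full = c @ Sigma @ c                  # exact, with correlations
    var_indep = c @ np.diag(sd**2) @ c        # independence assumption
    print(var_full, var_indep, var_full - var_indep)  # correlation contribution

    # Monte Carlo check with correlated samples
    x = np.random.default_rng(3).multivariate_normal(np.zeros(3), Sigma, 200_000)
    print((x @ c).var())
    ```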

  15. Including long-range dependence in integrate-and-fire models of the high interspike-interval variability of cortical neurons.

    PubMed

    Jackson, B Scott

    2004-10-01

    Many different types of integrate-and-fire models have been designed in order to explain how it is possible for a cortical neuron to integrate over many independent inputs while still producing highly variable spike trains. Within this context, the variability of spike trains has been almost exclusively measured using the coefficient of variation of interspike intervals. However, another important statistical property that has been found in cortical spike trains and is closely associated with their high firing variability is long-range dependence. We investigate the conditions, if any, under which such models produce output spike trains with both interspike-interval variability and long-range dependence similar to those that have previously been measured from actual cortical neurons. We first show analytically that a large class of high-variability integrate-and-fire models is incapable of producing such outputs based on the fact that their output spike trains are always mathematically equivalent to renewal processes. This class of models subsumes a majority of previously published models, including those that use excitation-inhibition balance, correlated inputs, partial reset, or nonlinear leakage to produce outputs with high variability. Next, we study integrate-and-fire models that have (non-Poissonian) renewal point process inputs instead of the Poisson point process inputs used in the preceding class of models. The confluence of our analytical and simulation results implies that the renewal-input model is capable of producing high variability and long-range dependence comparable to that seen in spike trains recorded from cortical neurons, but only if the interspike intervals of the inputs have infinite variance, a physiologically unrealistic condition. Finally, we suggest a new integrate-and-fire model that does not suffer any of the previously mentioned shortcomings. By analyzing simulation results for this model, we show that it is capable of producing output spike trains with interspike-interval variability and long-range dependence that match empirical data from cortical spike trains. This model is similar to the other models in this study, except that its inputs are fractional-Gaussian-noise-driven Poisson processes rather than renewal point processes. In addition to this model's success in producing realistic output spike trains, its inputs have long-range dependence similar to that found in most subcortical neurons in sensory pathways, including the inputs to cortex. Analysis of output spike trains from simulations of this model also shows that a tight balance between the amounts of excitation and inhibition at the inputs to cortical neurons is not necessary for high interspike-interval variability at their outputs. Furthermore, in our analysis of this model, we show that the superposition of many fractional-Gaussian-noise-driven Poisson processes does not approximate a Poisson process, which challenges the common assumption that the total effect of a large number of inputs on a neuron is well represented by a Poisson process.
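
    A minimal leaky integrate-and-fire sketch with Poisson inputs, computing the coefficient of variation (CV) of output interspike intervals, the variability statistic discussed above. All parameters are generic choices for illustration, not taken from the paper:

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    dt, T = 1e-4, 50.0                    # time step (s), simulated duration (s)
    tau, v_th, v_reset = 0.05, 1.0, 0.0   # membrane time constant, threshold, reset
    rate_e, rate_i, w = 8000.0, 6000.0, 0.01  # input rates (Hz), synaptic weight

    v, isis, t_last = 0.0, [], 0.0
    for i in range(int(T / dt)):
        ne = rng.poisson(rate_e * dt)     # excitatory input events this step
        ni = rng.poisson(rate_i * dt)     # inhibitory input events this step
        v += -v * dt / tau + w * (ne - ni)
        if v >= v_th:
            t = i * dt
            isis.append(t - t_last)       # record the interspike interval
            t_last, v = t, v_reset

    isi = np.array(isis[1:])              # drop the interval before the first spike
    print("CV of ISIs:", isi.std() / isi.mean())
    ```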

  16. A method for obtaining reduced-order control laws for high-order systems using optimization techniques

    NASA Technical Reports Server (NTRS)

    Mukhopadhyay, V.; Newsom, J. R.; Abel, I.

    1981-01-01

    A method of synthesizing reduced-order optimal feedback control laws for a high-order system is developed. A nonlinear programming algorithm is employed to search for the control law design variables that minimize a performance index defined by a weighted sum of mean-square steady-state responses and control inputs. An analogy with the linear quadratic Gaussian solution is utilized to select a set of design variables and their initial values. To improve the stability margins of the system, an input-noise adjustment procedure is used in the design algorithm. The method is applied to the synthesis of an active flutter-suppression control law for a wind tunnel model of an aeroelastic wing. The reduced-order controller is compared with the corresponding full-order controller and found to provide nearly optimal performance. The performance of the present method appeared to be superior to that of two other control law order-reduction methods. It is concluded that by using the present algorithm, nearly optimal low-order control laws with good stability margins can be synthesized.

  17. Heliocentric interplanetary low thrust trajectory optimization program, supplement 1, part 2

    NASA Technical Reports Server (NTRS)

    Mann, F. I.; Horsewood, J. L.

    1978-01-01

    The improvements made to the HILTOP electric propulsion trajectory computer program are described. A more realistic propulsion system model was implemented in which various thrust subsystem efficiencies and specific impulse are modeled as variable functions of power available to the propulsion system. The number of operating thrusters is staged, and the beam voltage is selected from a set of five (or fewer) constant voltages, based upon the application of variational calculus. The constant beam voltages may be optimized individually or collectively. The propulsion system logic is activated by a single program input key in such a manner as to preserve the HILTOP logic. An analysis describing these features, a complete description of program input quantities, and sample cases of computer output illustrating the program capabilities are presented.

  18. Influential input classification in probabilistic multimedia models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Maddalena, Randy L.; McKone, Thomas E.; Hsieh, Dennis P.H.

    1999-05-01

    Monte Carlo analysis is a statistical simulation method that is often used to assess and quantify the outcome variance in complex environmental fate and effects models. Total outcome variance of these models is a function of (1) the uncertainty and/or variability associated with each model input and (2) the sensitivity of the model outcome to changes in the inputs. To propagate variance through a model using Monte Carlo techniques, each variable must be assigned a probability distribution. The validity of these distributions directly influences the accuracy and reliability of the model outcome. To efficiently allocate resources for constructing distributions, one should first identify the most influential set of variables in the model. Although existing sensitivity and uncertainty analysis methods can provide a relative ranking of the importance of model inputs, they fail to identify the minimum set of stochastic inputs necessary to sufficiently characterize the outcome variance. In this paper, we describe and demonstrate a novel sensitivity/uncertainty analysis method for assessing the importance of each variable in a multimedia environmental fate model. Our analyses show that for a given scenario, a relatively small number of input variables influence the central tendency of the model and an even smaller set determines the shape of the outcome distribution. For each input, the level of influence depends on the scenario under consideration. This information is useful for developing site-specific models and improving our understanding of the processes that have the greatest influence on the variance in outcomes from multimedia models.
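
    One way to sketch this kind of influence screening: sample all inputs, run a stand-in model, and rank inputs by rank correlation with the outcome. The input names and distributions below are hypothetical, not the paper's multimedia model:

    ```python
    import numpy as np
    from scipy.stats import spearmanr

    rng = np.random.default_rng(5)
    n = 10_000
    inputs = {                                   # assumed input distributions
        "emission_rate": rng.lognormal(0.0, 0.5, n),
        "partition_coeff": rng.lognormal(2.0, 1.0, n),
        "degradation_rate": rng.uniform(0.01, 0.1, n),
        "soil_depth": rng.normal(0.2, 0.02, n),
    }
    X = np.column_stack(list(inputs.values()))
    # stand-in outcome: a concentration-like quantity dominated by two inputs
    y = X[:, 0] * X[:, 1] / (1.0 + 50.0 * X[:, 2]) + 0.01 * X[:, 3]

    for name, col in zip(inputs, X.T):
        rho, _ = spearmanr(col, y)               # rank correlation as influence score
        print(f"{name:>16s}  rho = {rho:+.2f}")
    ```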

  19. Network and neuronal membrane properties in hybrid networks reciprocally regulate selectivity to rapid thalamocortical inputs.

    PubMed

    Pesavento, Michael J; Pinto, David J

    2012-11-01

    Rapidly changing environments require rapid processing from sensory inputs. Varying deflection velocities of a rodent's primary facial vibrissa cause varying temporal neuronal activity profiles within the ventral posteromedial thalamic nucleus. Local neuron populations in a single somatosensory layer 4 barrel transform sparsely coded input into a spike count based on the input's temporal profile. We investigate this transformation by creating a barrel-like hybrid network with whole cell recordings of in vitro neurons from a cortical slice preparation, embedding the biological neuron in the simulated network by presenting virtual synaptic conductances via a conductance clamp. Utilizing the hybrid network, we examine how the reciprocal network properties (local excitatory and inhibitory synaptic convergence) and neuronal membrane properties (input resistance) alter the barrel population response to diverse thalamic input. In the presence of local network input, neurons are more selective to thalamic input timing; this arises from strong feedforward inhibition. Strongly inhibitory (damping) network regimes are more selective to timing and less selective to the magnitude of input but require stronger initial input. Input selectivity relies heavily on the different membrane properties of excitatory and inhibitory neurons. When inhibitory and excitatory neurons had identical membrane properties, the sensitivity of in vitro neurons to temporal vs. magnitude features of input was substantially reduced. Increasing the mean leak conductance of the inhibitory cells decreased the network's temporal sensitivity, whereas increasing excitatory leak conductance enhanced magnitude sensitivity. Local network synapses are essential in shaping thalamic input, and differing membrane properties of functional classes reciprocally modulate this effect.

  20. State observer for synchronous motors

    DOEpatents

    Lang, Jeffrey H.

    1994-03-22

    A state observer driven by measurements of phase voltages and currents for estimating the angular orientation of a rotor of a synchronous motor such as a variable reluctance motor (VRM). Phase voltages and currents are detected and serve as inputs to a state observer. The state observer includes a mathematical model of the electromechanical operation of the synchronous motor. The characteristics of the state observer are selected so that the observer estimates converge to the actual rotor angular orientation and velocity, winding phase flux linkages or currents.
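
    The observer concept can be illustrated with a generic discrete-time Luenberger observer on a toy angle/speed model; this is not the patent's VRM electromechanical model, just the converging-estimate idea:

    ```python
    import numpy as np

    dt = 1e-3
    A = np.array([[1.0, dt],           # discrete-time state: [angle, speed]
                  [0.0, 1.0]])
    C = np.array([[1.0, 0.0]])         # measured output (stand-in for phase quantities)
    L = np.array([[0.4], [8.0]])       # observer gain, chosen so A - L C is stable

    x = np.array([0.0, 5.0])           # true initial state: rotor spinning at 5 rad/s
    x_hat = np.zeros(2)                # observer starts with no knowledge
    for _ in range(2000):
        y = C @ x                                       # measurement of the true system
        x_hat = A @ x_hat + L @ (y - C @ x_hat)         # predict, then correct
        x = A @ x                                       # true system evolves

    print("true:", x, "estimate:", x_hat)  # estimates converge to the actual state
    ```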

  1. Warpage analysis on thin shell part using glowworm swarm optimisation (GSO)

    NASA Astrophysics Data System (ADS)

    Zulhasif, Z.; Shayfull, Z.; Nasir, S. M.; Fathullah, M.; Hazwan, M. H. M.

    2017-09-01

    The Autodesk Moldflow Insight (AMI) software was used in this study to analyse the plastic injection moulding process, relating the input parameters to the output parameter. Acrylonitrile Butadiene Styrene (ABS) was used as the moulding material to produce the plastic part. MATLAB software was used to find the best parameter settings. The variables selected in this study were melt temperature, packing pressure, coolant temperature and cooling time.

  2. Rotorcraft Flight Simulation Computer Program C81 with DATAMAP interface. Volume I. User’s Manual

    DTIC Science & Technology

    1981-10-01

    [OCR-garbled excerpt. Recoverable fragments: when any one of the RWAS tables is used to simulate the defined effect of an input, care must be exercised to ensure that the table is based on the correct data; an improved maneuver autopilot has been installed; a new listing of the contents of the analytical data base will be generated; and the program (Reference 1) has been improved by providing the capability to generate Postprocessing Data Blocks containing selected variables.]

  3. Peer Educators and Close Friends as Predictors of Male College Students' Willingness to Prevent Rape

    ERIC Educational Resources Information Center

    Stein, Jerrold L.

    2007-01-01

    Astin's (1977, 1991, 1993) input-environment-outcome (I-E-O) model provided a conceptual framework for this study which measured 156 male college students' willingness to prevent rape (outcome variable). Predictor variables included personal attitudes (input variable), perceptions of close friends' attitudes toward rape and rape prevention…

  4. The Effects of a Change in the Variability of Irrigation Water

    NASA Astrophysics Data System (ADS)

    Lyon, Kenneth S.

    1983-10-01

    This paper examines the short-run effects upon several variables of an increase in the variability of an input. The measure of an increase in the variability is the "mean preserving spread" suggested by Rothschild and Stiglitz (1970). The variables examined are real income (utility), expected profits, expected output, the quantity used of the controllable input, and the shadow price of the stochastic input. Four striking features of the results follow: (1) The concepts that have been useful in summarizing deterministic comparative static results are nearly absent when an input is stochastic. (2) Most of the signs of the partial derivatives depend upon more than concavity of the utility and production functions. (3) If the utility function is not "too" risk averse, then the risk-neutral results hold for the risk-aversion case. (4) If the production function is Cobb-Douglas, then definite results are achieved if the utility function is linear or if the "degree of risk-aversion" is "small."

  5. Optimal allocation of testing resources for statistical simulations

    NASA Astrophysics Data System (ADS)

    Quintana, Carolina; Millwater, Harry R.; Singh, Gulshan; Golden, Patrick

    2015-07-01

    Statistical estimates from simulation involve uncertainty caused by the variability in the input random variables due to limited data. Allocating resources to obtain more experimental data of the input variables to better characterize their probability distributions can reduce the variance of statistical estimates. The methodology proposed determines the optimal number of additional experiments required to minimize the variance of the output moments given single or multiple constraints. The method uses the multivariate t-distribution and the Wishart distribution to generate realizations of the population mean and covariance of the input variables, respectively, given the amount of available data. This method handles independent and correlated random variables. A particle swarm method is used for the optimization. The optimal number of additional experiments per variable depends on the number and variance of the initial data, the influence of the variable on the output function and the cost of each additional experiment. The methodology is demonstrated using a fretting fatigue example.
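
    A sketch of the resampling step, assuming standard noninformative-prior results for the multivariate normal (inverse-Wishart draws for the covariance, multivariate-t draws for the mean); the sample data and the toy output function are invented, not the paper's fretting-fatigue model:

    ```python
    import numpy as np
    from scipy.stats import invwishart, multivariate_t

    rng = np.random.default_rng(6)
    data = rng.multivariate_normal([1.0, 2.0], [[0.04, 0.01], [0.01, 0.09]], size=25)
    n, d = data.shape
    xbar, S = data.mean(axis=0), np.cov(data.T) * (n - 1)   # mean and scatter matrix

    out_var = []
    for _ in range(2000):
        Sigma = invwishart.rvs(df=n - 1, scale=S, random_state=rng)
        mu = multivariate_t.rvs(loc=xbar, shape=S / (n * (n - d)), df=n - d,
                                random_state=rng)
        # toy output g(x) = x1 * x2: first-order variance propagation at mu
        grad = np.array([mu[1], mu[0]])
        out_var.append(grad @ Sigma @ grad)

    # spread reflects how limited data inflate uncertainty in the output variance
    print(np.percentile(out_var, [5, 50, 95]))
    ```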

  6. The Assessment of Climatological Impacts on Agricultural Production and Residential Energy Demand

    NASA Astrophysics Data System (ADS)

    Cooter, Ellen Jean

    The assessment of climatological impacts on selected economic activities is presented as a multi-step, interdisciplinary problem. The assessment process which is addressed explicitly in this report focuses on (1) user identification, (2) direct impact model selection, (3) methodological development, (4) product development and (5) product communication. Two user groups of major economic importance were selected for study: agriculture and gas utilities. The broad agricultural sector is further defined as U.S.A. corn production. The general category of utilities is narrowed to Oklahoma residential gas heating demand. The CERES physiological growth model was selected as the process model for corn production. The statistical analysis for corn production suggests that (1) although this is a statistically complex model, it can yield useful impact information, (2) as a result of output distributional biases, traditional statistical techniques are not adequate analytical tools, (3) the model yield distribution as a whole is probably non-Gaussian, particularly in the tails and (4) there appear to be identifiable weekly patterns of forecasted yields throughout the growing season. Agricultural quantities developed include point yield impact estimates and distributional characteristics, geographic corn weather distributions, return period estimates, decision-making criteria (confidence limits) and time series of indices. These products were communicated in economic terms through the use of a Bayesian decision example and an econometric model. The NBSLD energy load model was selected to represent residential gas heating consumption. A cursory statistical analysis suggests relationships among weather variables across the Oklahoma study sites. No linear trend was detected in "technology-free" modeled energy demand or input weather variables that would correspond to the trend contained in observed state-level residential energy use. It is suggested that this trend is largely the result of non-weather factors such as population and home usage patterns rather than regional climate change. Year-to-year changes in modeled residential heating demand on the order of 10^6 Btu per household were determined and later related to state-level components of the Oklahoma economy. Products developed include the definition of regional forecast areas, likelihood estimates of extreme seasonal conditions and an energy/climate index. This information is communicated in economic terms through an input/output model which is used to estimate changes in Gross State Product and household income attributable to weather variability.

  7. From Input to Intake: Towards a Brain-Based Perspective of Selective Attention.

    ERIC Educational Resources Information Center

    Sato, Edynn; Jacobs, Bob

    1992-01-01

    Addresses, from a neurobiological perspective, the input-intake distinction commonly made in applied linguistics and the role of selective attention in transforming input to intake. The study places primary emphasis upon a neural structure (the nucleus reticularis thalami) that appears to be essential for selective attention. (79 references)…

  8. Multiobjective Optimization of Atmospheric Plasma Spray Process Parameters to Deposit Yttria-Stabilized Zirconia Coatings Using Response Surface Methodology

    NASA Astrophysics Data System (ADS)

    Ramachandran, C. S.; Balasubramanian, V.; Ananthapadmanabhan, P. V.

    2011-03-01

    Atmospheric plasma spraying is used extensively to make thermal barrier coatings from 7-8% yttria-stabilized zirconia powders. The main problem faced in the manufacture of yttria-stabilized zirconia coatings by the atmospheric plasma spraying process is the selection of the optimum combination of input variables for achieving the required coating qualities. This problem can be solved by developing empirical relationships between the process parameters (input power, primary gas flow rate, stand-off distance, powder feed rate, and carrier gas flow rate) and the coating quality characteristics (deposition efficiency, tensile bond strength, lap shear bond strength, porosity, and hardness) through effective and strategic planning and execution of experiments by response surface methodology. This article highlights the use of response surface methodology, designing a five-factor, five-level central composite rotatable design matrix with full replication for the planning, conduct and execution of the experiments and the development of empirical relationships. Further, response surface methodology was used to select the optimum process parameters to achieve the desired quality of yttria-stabilized zirconia coating deposits.
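
    The response-surface step can be sketched as a full quadratic fit in coded factors; three factors and a synthetic deposition-efficiency trend stand in here for the paper's five-factor design:

    ```python
    import numpy as np
    from sklearn.preprocessing import PolynomialFeatures
    from sklearn.linear_model import LinearRegression
    from sklearn.pipeline import make_pipeline

    rng = np.random.default_rng(7)
    # coded factors: e.g. input power, stand-off distance, powder feed rate
    X = rng.uniform(-2, 2, size=(50, 3))
    y = (80 + 5*X[:, 0] - 3*X[:, 1] - 2*X[:, 0]**2 + 1.5*X[:, 0]*X[:, 2]
         + rng.normal(0, 1, 50))          # assumed deposition-efficiency trend

    # degree-2 polynomial = main effects + interactions + squared terms
    rsm = make_pipeline(PolynomialFeatures(degree=2), LinearRegression())
    rsm.fit(X, y)
    print(rsm.predict([[0.5, -1.0, 0.0]]))  # predicted response at a parameter setting
    ```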

  9. Desktop Application Program to Simulate Cargo-Air-Drop Tests

    NASA Technical Reports Server (NTRS)

    Cuthbert, Peter

    2009-01-01

    The DSS Application is a computer program comprising a Windows version of the UNIX-based Decelerator System Simulation (DSS) coupled with an Excel front end. The DSS is an executable code that simulates the dynamics of airdropped cargo from first motion in an aircraft through landing. The bare DSS is difficult to use; the front end makes it easy to use. All inputs to the DSS, control of execution of the DSS, and postprocessing and plotting of outputs are handled in the front end. The front end is graphics-intensive. The Excel software provides the graphical elements without need for additional programming. Categories of input parameters are divided into separate tabbed windows. Pop-up comments describe each parameter. An error-checking software component evaluates combinations of parameters and alerts the user if an error results. Case files can be created from inputs, making it possible to build cases from previous ones. Simulation output is plotted in 16 charts displayed on a separate worksheet, enabling plotting of multiple DSS cases with flight-test data. Variables assigned to each plot can be changed. Selected input parameters can be edited from the plot sheet for quick sensitivity studies.

  10. Use of Gene Expression Programming in regionalization of flow duration curve

    NASA Astrophysics Data System (ADS)

    Hashmi, Muhammad Z.; Shamseldin, Asaad Y.

    2014-06-01

    In this paper, a recently introduced artificial intelligence technique known as Gene Expression Programming (GEP) has been employed to perform symbolic regression for developing a parametric scheme of flow duration curve (FDC) regionalization, to relate selected FDC characteristics to catchment characteristics. Stream flow records of selected catchments located in the Auckland Region of New Zealand were used. FDCs of the selected catchments were normalised by dividing the ordinates by their median value. Input for the symbolic regression analysis using GEP was (a) selected characteristics of normalised FDCs; and (b) 26 catchment characteristics related to climate, morphology, soil properties and land cover properties obtained using the observed data and GIS analysis. Our study showed that application of this artificial intelligence technique expedites the selection of a set of the most relevant independent variables out of a large set, because these are automatically selected through the GEP process. Values of the FDC characteristics obtained from the developed relationships have high correlations with the observed values.

  11. IDEA: Interactive Display for Evolutionary Analyses.

    PubMed

    Egan, Amy; Mahurkar, Anup; Crabtree, Jonathan; Badger, Jonathan H; Carlton, Jane M; Silva, Joana C

    2008-12-08

    The availability of complete genomic sequences for hundreds of organisms promises to make obtaining genome-wide estimates of substitution rates, selective constraints and other molecular evolution variables of interest an increasingly important approach to addressing broad evolutionary questions. Two of the programs most widely used for this purpose are codeml and baseml, parts of the PAML (Phylogenetic Analysis by Maximum Likelihood) suite. A significant drawback of these programs is their lack of a graphical user interface, which can limit their user base and considerably reduce their efficiency. We have developed IDEA (Interactive Display for Evolutionary Analyses), an intuitive graphical input and output interface which interacts with PHYLIP for phylogeny reconstruction and with codeml and baseml for molecular evolution analyses. IDEA's graphical input and visualization interfaces eliminate the need to edit and parse text input and output files, reducing the likelihood of errors and improving processing time. Further, its interactive output display gives the user immediate access to results. Finally, IDEA can process data in parallel on a local machine or computing grid, allowing genome-wide analyses to be completed quickly. IDEA provides a graphical user interface that allows the user to follow a codeml or baseml analysis from parameter input through to the exploration of results. Novel options streamline the analysis process, and post-analysis visualization of phylogenies, evolutionary rates and selective constraint along protein sequences simplifies the interpretation of results. The integration of these functions into a single tool eliminates the need for lengthy data handling and parsing, significantly expediting access to global patterns in the data.

  12. IDEA: Interactive Display for Evolutionary Analyses

    PubMed Central

    Egan, Amy; Mahurkar, Anup; Crabtree, Jonathan; Badger, Jonathan H; Carlton, Jane M; Silva, Joana C

    2008-01-01

    Background The availability of complete genomic sequences for hundreds of organisms promises to make obtaining genome-wide estimates of substitution rates, selective constraints and other molecular evolution variables of interest an increasingly important approach to addressing broad evolutionary questions. Two of the programs most widely used for this purpose are codeml and baseml, parts of the PAML (Phylogenetic Analysis by Maximum Likelihood) suite. A significant drawback of these programs is their lack of a graphical user interface, which can limit their user base and considerably reduce their efficiency. Results We have developed IDEA (Interactive Display for Evolutionary Analyses), an intuitive graphical input and output interface which interacts with PHYLIP for phylogeny reconstruction and with codeml and baseml for molecular evolution analyses. IDEA's graphical input and visualization interfaces eliminate the need to edit and parse text input and output files, reducing the likelihood of errors and improving processing time. Further, its interactive output display gives the user immediate access to results. Finally, IDEA can process data in parallel on a local machine or computing grid, allowing genome-wide analyses to be completed quickly. Conclusion IDEA provides a graphical user interface that allows the user to follow a codeml or baseml analysis from parameter input through to the exploration of results. Novel options streamline the analysis process, and post-analysis visualization of phylogenies, evolutionary rates and selective constraint along protein sequences simplifies the interpretation of results. The integration of these functions into a single tool eliminates the need for lengthy data handling and parsing, significantly expediting access to global patterns in the data. PMID:19061522

  13. Power selective optical filter devices and optical systems using same

    DOEpatents

    Koplow, Jeffrey P

    2014-10-07

    In an embodiment, a power selective optical filter device includes an input polarizer for selectively transmitting an input signal. The device includes a wave-plate structure positioned to receive the input signal, which includes at least one substantially zero-order, zero-wave plate. The zero-order, zero-wave plate is configured to alter the polarization state of the input signal passing through it in a manner that depends on the power of the input signal. The zero-order, zero-wave plate includes an entry and an exit wave plate, each having a fast axis, with the fast axes oriented substantially perpendicular to each other. Each entry wave plate is oriented relative to a transmission axis of the input polarizer at a respective angle. An output polarizer is positioned to receive the signal output from the wave-plate structure and selectively transmits it based on its polarization state.

  14. Correction of I/Q channel errors without calibration

    DOEpatents

    Doerry, Armin W.; Tise, Bertice L.

    2002-01-01

    A method of providing a balanced demodulator output for a signal, such as a Doppler radar signal having an analog pulsed input, includes adding a variable phase shift as a function of time to the input signal, applying the phase-shifted input signal to a demodulator, and generating a baseband signal from the input signal. The baseband signal is low-pass filtered and converted to a digital output signal. By removing the variable phase shift from the digital output signal, a complex data output is formed that is representative of the output of a balanced demodulator.
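
    A numerical sketch of the idea: impose a known time-varying phase ramp ahead of an (imbalanced) I/Q demodulator and remove it digitally afterwards, so fixed channel errors are swept away from the signal's image frequency. The imbalance values and ramp rate below are illustrative assumptions:

    ```python
    import numpy as np

    n = 4096
    t = np.arange(n)
    f0 = 52 / n                                  # test-tone frequency (cycles/sample)
    s = np.exp(2j * np.pi * f0 * t)              # ideal complex baseband signal
    phi = 2 * np.pi * (287 / n) * t              # known, removable phase ramp

    def demod(x):
        """I/Q demodulator with fixed gain, phase and offset errors between channels."""
        i_ch = 1.05 * x.real + 0.02                        # I gain error + DC offset
        q_ch = 0.95 * (x * np.exp(-1j * 0.1)).imag + 0.01  # Q gain + phase error
        return i_ch + 1j * q_ch

    plain = demod(s)                                             # conventional path
    corrected = demod(s * np.exp(1j * phi)) * np.exp(-1j * phi)  # phase-shift method

    img = np.argmin(np.abs(np.fft.fftfreq(n) + f0))   # image-frequency bin of the tone
    print(np.abs(np.fft.fft(plain))[img])             # large image component
    print(np.abs(np.fft.fft(corrected))[img])         # image swept away by the method
    ```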

  15. High dynamic range charge measurements

    DOEpatents

    De Geronimo, Gianluigi

    2012-09-04

    A charge amplifier for use in radiation sensing includes an amplifier, at least one switch, and at least one capacitor. The switch selectively couples the input of the switch to one of at least two voltages. The capacitor is electrically coupled in series between the input of the amplifier and the input of the switch. The capacitor is electrically coupled to the input of the amplifier without a switch coupled therebetween. A method of measuring charge in radiation sensing includes selectively diverting charge from an input of an amplifier to an input of at least one capacitor by selectively coupling an output of the at least one capacitor to one of at least two voltages. The input of the at least one capacitor is operatively coupled to the input of the amplifier without a switch coupled therebetween. The method also includes calculating a total charge based on a sum of the amplified charge and the diverted charge.

  16. Back propagation artificial neural network for community Alzheimer's disease screening in China.

    PubMed

    Tang, Jun; Wu, Lei; Huang, Helang; Feng, Jiang; Yuan, Yefeng; Zhou, Yueping; Huang, Peng; Xu, Yan; Yu, Chao

    2013-01-25

    Alzheimer's disease patients diagnosed with the Chinese Classification of Mental Disorders diagnostic criteria were selected from the community through on-site sampling. Levels of macro and trace elements were measured in blood samples using an atomic absorption method, and neurotransmitters were measured using a radioimmunoassay method. SPSS 13.0 was used to establish a database, and a back propagation artificial neural network for Alzheimer's disease prediction was simulated using Clementine 12.0 software. With scores of activities of daily living, creatinine, 5-hydroxytryptamine, age, dopamine and aluminum as input variables, the results revealed that the area under the curve in our back propagation artificial neural network was 0.929 (95% confidence interval: 0.868-0.968), sensitivity was 90.00%, specificity was 95.00%, and accuracy was 92.50%. The findings indicated that the results of back propagation artificial neural network established based on the above six variables were satisfactory for screening and diagnosis of Alzheimer's disease in patients selected from the community.
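
    A sketch of such a screening network with the paper's six inputs, on synthetic stand-in data; the reported AUC, sensitivity and specificity come from the study's community sample, not from this toy model:

    ```python
    # Back-propagation network: six predictors -> case/control screening.
    import numpy as np
    from sklearn.neural_network import MLPClassifier
    from sklearn.preprocessing import StandardScaler
    from sklearn.pipeline import make_pipeline
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(9)
    n = 400
    # stand-ins for: ADL score, creatinine, 5-HT, age, dopamine, aluminum
    X = rng.normal(size=(n, 6))
    logit = 1.5 * X[:, 0] - 0.8 * X[:, 1] + 0.6 * X[:, 3] + 0.5 * X[:, 5]
    y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)  # assumed labels

    clf = make_pipeline(StandardScaler(),
                        MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000,
                                      random_state=0))
    print(cross_val_score(clf, X, y, cv=5, scoring="roc_auc").mean())
    ```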

  17. Back propagation artificial neural network for community Alzheimer's disease screening in China★

    PubMed Central

    Tang, Jun; Wu, Lei; Huang, Helang; Feng, Jiang; Yuan, Yefeng; Zhou, Yueping; Huang, Peng; Xu, Yan; Yu, Chao

    2013-01-01

    Alzheimer's disease patients diagnosed with the Chinese Classification of Mental Disorders diagnostic criteria were selected from the community through on-site sampling. Levels of macro and trace elements were measured in blood samples using an atomic absorption method, and neurotransmitters were measured using a radioimmunoassay method. SPSS 13.0 was used to establish a database, and a back propagation artificial neural network for Alzheimer's disease prediction was simulated using Clementine 12.0 software. With scores of activities of daily living, creatinine, 5-hydroxytryptamine, age, dopamine and aluminum as input variables, the results revealed that the area under the curve in our back propagation artificial neural network was 0.929 (95% confidence interval: 0.868–0.968), sensitivity was 90.00%, specificity was 95.00%, and accuracy was 92.50%. The findings indicated that the results of back propagation artificial neural network established based on the above six variables were satisfactory for screening and diagnosis of Alzheimer's disease in patients selected from the community. PMID:25206598

  18. Wood phenology, not carbon input, controls the interannual variability of wood growth in a temperate oak forest.

    PubMed

    Delpierre, Nicolas; Berveiller, Daniel; Granda, Elena; Dufrêne, Eric

    2016-04-01

    Although the analysis of flux data has increased our understanding of the interannual variability of carbon inputs into forest ecosystems, we still know little about the determinants of wood growth. Here, we aimed to identify which drivers control the interannual variability of wood growth in a mesic temperate deciduous forest. We analysed a 9-yr time series of carbon fluxes and aboveground wood growth (AWG), reconstructed at a weekly time-scale through the combination of dendrometer and wood density data. Carbon inputs and AWG anomalies appeared to be uncorrelated from the seasonal to interannual scales. More than 90% of the interannual variability of AWG was explained by a combination of the growth intensity during a first 'critical period' of the wood growing season, occurring close to the seasonal maximum, and the timing of the first summer growth halt. Both atmospheric and soil water stress exerted a strong control on the interannual variability of AWG at the study site, despite its mesic conditions, whilst not affecting carbon inputs. Carbon sink activity, not carbon inputs, determined the interannual variations in wood growth at the study site. Our results provide a functional understanding of the dependence of radial growth on precipitation observed in dendrological studies. © 2015 The Authors. New Phytologist © 2015 New Phytologist Trust.

  19. Multidimensional density shaping by sigmoids.

    PubMed

    Roth, Z; Baram, Y

    1996-01-01

    An estimate of the probability density function of a random vector is obtained by maximizing the output entropy of a feedforward network of sigmoidal units with respect to the input weights. Classification problems can be solved by selecting the class associated with the maximal estimated density. Newton's optimization method, applied to the estimated density, yields a recursive estimator for a random variable or a random sequence. A constrained connectivity structure yields a linear estimator, which is particularly suitable for "real time" prediction. A Gaussian nonlinearity yields a closed-form solution for the network's parameters, which may also be used for initializing the optimization algorithm when other nonlinearities are employed. A triangular connectivity between the neurons and the input, which is naturally suggested by the statistical setting, reduces the number of parameters. Applications to classification and forecasting problems are demonstrated.
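
    In one dimension with a single sigmoid unit the scheme reduces to a few lines: maximize the output entropy, here via the equivalent objective E[log dy/dx] for y = sigmoid(w*x + b), and read the density estimate off as p(x) = w*y*(1-y). A toy gradient-ascent sketch, with the gradients derived from log dy/dx = log w + log y + log(1-y):

    ```python
    import numpy as np

    rng = np.random.default_rng(10)
    x = rng.logistic(loc=2.0, scale=0.7, size=5000)  # data whose density we estimate
    x = (x - x.mean()) / x.std()                     # standardize for stable ascent

    w, b, lr = 1.0, 0.0, 0.05
    for _ in range(3000):
        y = 1.0 / (1.0 + np.exp(-(w * x + b)))              # sigmoid unit output
        w += lr * (1.0 / w + np.mean(x * (1.0 - 2.0 * y)))  # ascend mean log dy/dx
        b += lr * np.mean(1.0 - 2.0 * y)

    p = w * y * (1.0 - y)   # density estimate at the sample points
    print(w, b)             # for logistic data this recovers the true (standardized) CDF
    ```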

  20. Evaluation of globally available precipitation data products as input for water balance models

    NASA Astrophysics Data System (ADS)

    Lebrenz, H.; Bárdossy, A.

    2009-04-01

    The subject of this study is the evaluation of globally available precipitation data products intended for use as input variables for water balance models in ungauged basins. The selected data sources are (a) the Global Precipitation Climatology Centre (GPCC), (b) the Global Precipitation Climatology Project (GPCP) and (c) the Climatic Research Unit (CRU), yielding twelve globally available data products. The data products draw on different underlying databases, different derivation routines and varying resolutions in time and space. For validation purposes, the ground data from South Africa were screened for homogeneity and consistency by various tests, and outlier detection using multi-linear regression was performed. External drift kriging was subsequently applied to the ground data, and the resulting precipitation arrays were compared with the different products with respect to quantity and variance.

  1. Modeling Hurricane Katrina's merchantable timber and wood damage in south Mississippi using remotely sensed and field-measured data

    NASA Astrophysics Data System (ADS)

    Collins, Curtis Andrew

    Ordinary and weighted least squares multiple linear regression techniques were used to derive 720 models predicting Katrina-induced storm damage in cubic foot volume (outside bark) and green weight tons (outside bark). The large number of models was dictated by the use of three damage classes, three product types, and four forest type model strata. These 36 models were then fit and reported across 10 variable sets and variable set combinations for volume and ton units. Along with large model counts, potential independent variables were created using power transforms and interactions. The basis of these variables was field-measured plot data, satellite (Landsat TM and ETM+) imagery, and NOAA HWIND wind data variable types. As part of the modeling process, lone variable types as well as two-type and three-type combinations were examined. By deriving models with these varying inputs, model utility is flexible, since not all independent-variable data are needed in future applications. The large number of potential variables led to the use of forward, sequential, and exhaustive independent variable selection techniques. After variable selection, weighted least squares techniques were often employed using weights of one over the square root of the pre-storm volume or weight of interest. This was generally successful in improving residual variance homogeneity. Finished model fits, as represented by coefficient of determination (R2), surpassed 0.5 in numerous models with values over 0.6 noted in a few cases. Given these models, an analyst is provided with a toolset to aid in risk assessment and disaster recovery should Katrina-like weather events recur.
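
    The weighting step described above can be sketched with statsmodels (whose WLS weights are interpreted as proportional to inverse variance); the predictors and the damage response below are synthetic placeholders:

    ```python
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(11)
    pre_storm = rng.uniform(50, 2000, 300)   # pre-storm volume (cu ft), synthetic
    wind = rng.uniform(20, 60, 300)          # a stand-in HWIND wind predictor
    # heteroscedastic response: error scale grows with pre-storm volume
    damage = (0.02 * pre_storm * (wind / 40) ** 2
              + rng.normal(0, 1, 300) * np.sqrt(pre_storm))

    X = sm.add_constant(np.column_stack([pre_storm, wind, pre_storm * wind]))
    fit = sm.WLS(damage, X, weights=1.0 / np.sqrt(pre_storm)).fit()
    print(fit.rsquared)
    ```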

  2. Input Variability Facilitates Unguided Subcategory Learning in Adults

    PubMed Central

    Eidsvåg, Sunniva Sørhus; Austad, Margit; Asbjørnsen, Arve E.

    2015-01-01

    Purpose This experiment investigated whether input variability would affect initial learning of noun gender subcategories in an unfamiliar, natural language (Russian), as it is known to assist learning of other grammatical forms. Method Forty adults (20 men, 20 women) were familiarized with examples of masculine and feminine Russian words. Half of the participants were familiarized with 32 different root words in a high-variability condition. The other half were familiarized with 16 different root words, each repeated twice for a total of 32 presentations in a high-repetition condition. Participants were tested on untrained members of the category to assess generalization. Familiarization and testing was completed 2 additional times. Results Only participants in the high-variability group showed evidence of learning after an initial period of familiarization. Participants in the high-repetition group were able to learn after additional input. Both groups benefited when words included 2 cues to gender compared to a single cue. Conclusions The results demonstrate that the degree of input variability can influence learners' ability to generalize a grammatical subcategory (noun gender) from a natural language. In addition, the presence of multiple cues to linguistic subcategory facilitated learning independent of variability condition. PMID:25680081

  3. Input Variability Facilitates Unguided Subcategory Learning in Adults.

    PubMed

    Eidsvåg, Sunniva Sørhus; Austad, Margit; Plante, Elena; Asbjørnsen, Arve E

    2015-06-01

    This experiment investigated whether input variability would affect initial learning of noun gender subcategories in an unfamiliar, natural language (Russian), as it is known to assist learning of other grammatical forms. Forty adults (20 men, 20 women) were familiarized with examples of masculine and feminine Russian words. Half of the participants were familiarized with 32 different root words in a high-variability condition. The other half were familiarized with 16 different root words, each repeated twice for a total of 32 presentations in a high-repetition condition. Participants were tested on untrained members of the category to assess generalization. Familiarization and testing was completed 2 additional times. Only participants in the high-variability group showed evidence of learning after an initial period of familiarization. Participants in the high-repetition group were able to learn after additional input. Both groups benefited when words included 2 cues to gender compared to a single cue. The results demonstrate that the degree of input variability can influence learners' ability to generalize a grammatical subcategory (noun gender) from a natural language. In addition, the presence of multiple cues to linguistic subcategory facilitated learning independent of variability condition.

  4. Diagnosable structured logic array

    NASA Technical Reports Server (NTRS)

    Whitaker, Sterling (Inventor); Miles, Lowell (Inventor); Gambles, Jody (Inventor); Maki, Gary K. (Inventor)

    2009-01-01

    A diagnosable structured logic array and associated process is provided. A base cell structure is provided comprising a logic unit comprising a plurality of input nodes, a plurality of selection nodes, and an output node; a plurality of switches coupled to the selection nodes, where each switch comprises a plurality of input lines, a selection line and an output line; a memory cell coupled to the output node; and a test address bus and a program control bus coupled to the plurality of input lines and the selection line of the plurality of switches. A state on each of the plurality of input nodes is verifiably loaded and read from the memory cell. A trusted memory block is provided. The associated process is provided for testing and verifying a plurality of truth table inputs of the logic unit.

  5. Variation in active and passive resource inputs to experimental pools: mechanisms and possible consequences for food webs

    USGS Publications Warehouse

    Kraus, Johanna M.; Pletcher, Leanna T.; Vonesh, James R.

    2010-01-01

    1. Cross-ecosystem movements of resources, including detritus, nutrients and living prey, can strongly influence food web dynamics in recipient habitats. Variation in resource inputs is thought to be driven by factors external to the recipient habitat (e.g. donor habitat productivity and boundary conditions). However, inputs of or by ‘active’ living resources may be strongly influenced by recipient habitat quality when organisms exhibit behavioural habitat selection when crossing ecosystem boundaries. 2. To examine whether behavioural responses to recipient habitat quality alter the relative inputs of ‘active’ living and ‘passive’ detrital resources to recipient food webs, we manipulated the presence of caged predatory fish and measured biomass, energy and organic content of inputs to outdoor experimental pools of adult aquatic insects, frog eggs, terrestrial plant matter and terrestrial arthropods. 3. Caged fish reduced the biomass, energy and organic matter donated to pools by tree frog eggs by ∼70%, but did not alter insect colonisation or passive allochthonous inputs of terrestrial arthropods and plant material. Terrestrial plant matter and adult aquatic insects provided the most energy and organic matter inputs to the pools (40–50%), while terrestrial arthropods provided the least (7%). Inputs of frog egg were relatively small but varied considerably among pools and over time (3%, range = 0–20%). Absolute and proportional amounts varied by input type. 4. Aquatic predators can strongly affect the magnitude of active, but not passive, inputs, and the effect of recipient habitat quality on active inputs is variable. Furthermore, some active inputs (i.e. aquatic insect colonists) can provide similar amounts of energy and organic matter as passive inputs of terrestrial plant matter, which are well known to be important. Because inputs differ in quality and the trophic level they subsidise, proportional changes in input type could have strong effects on recipient food webs. 5. Cross-ecosystem resource inputs have previously been characterised as donor-controlled. However, control by the recipient food web could lead to greater feedback between resource flow and consumer dynamics than has been appreciated so far.

  6. Speed control system for an access gate

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bzorgi, Fariborz M

    2012-03-20

    An access control apparatus for an access gate. The access gate typically has a rotator that is configured to rotate around a rotator axis at a first variable speed in a forward direction. The access control apparatus may include a transmission that typically has an input element that is operatively connected to the rotator. The input element is generally configured to rotate at an input speed that is proportional to the first variable speed. The transmission typically also has an output element that has an output speed that is higher than the input speed. The input element and the output element may rotate around a common transmission axis. A retardation mechanism may be employed. The retardation mechanism is typically configured to rotate around a retardation mechanism axis. Generally the retardation mechanism is operatively connected to the output element of the transmission and is configured to retard motion of the access gate in the forward direction when the first variable speed is above a control-limit speed. In many embodiments the transmission axis and the retardation mechanism axis are substantially co-axial. Some embodiments include a freewheel/catch mechanism that has an input connection that is operatively connected to the rotator. The input connection may be configured to engage an output connection when the rotator is rotated at the first variable speed in a forward direction and configured for substantially unrestricted rotation when the rotator is rotated in a reverse direction opposite the forward direction. The input element of the transmission is typically operatively connected to the output connection of the freewheel/catch mechanism.

  7. User's Guide to Handlens - A Computer Program that Calculates the Chemistry of Minerals in Mixtures

    USGS Publications Warehouse

    Eberl, D.D.

    2008-01-01

    HandLens is a computer program, written in Excel macro language, that calculates the chemistry of minerals in mineral mixtures (for example, in rocks, soils and sediments) for related samples from inputs of quantitative mineralogy and chemistry. For best results, the related samples should contain minerals having the same chemical compositions; that is, the samples should differ only in the proportions of minerals present. This manual describes how to use the program, discusses the theory behind its operation, and presents test results of the program's accuracy. Required input for HandLens includes quantitative mineralogical data, obtained, for example, by RockJock analysis of X-ray diffraction (XRD) patterns, and quantitative chemical data, obtained, for example, by X-ray fluorescence (XRF) analysis of the same samples. Other quantitative data, such as sample depth, temperature, and surface area, can also be entered. The minerals present in the samples are selected from a list, and the program is started. The results of the calculation include: (1) a table of linear coefficients of determination (r²) that relate pairs of input data (for example, Si versus quartz weight percents); (2) a utility for plotting all input data, either as pairs of variables or as sums of up to eight variables; (3) a table that presents the calculated chemical formulae for minerals in the samples; (4) a table that lists the calculated concentrations of major, minor, and trace elements in the various minerals; and (5) a table that presents chemical formulae for the minerals that have been corrected for possible systematic errors in the mineralogical and/or chemical analyses. In addition, the program contains a method for testing the assumption of constant chemistry of the minerals within a sample set.
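
    The manual's algebra is not reproduced above, but the core calculation it describes is a linear mixing problem: if related samples differ only in mineral proportions, each element's bulk concentration is a mineral-weighted sum of fixed per-mineral concentrations. A minimal sketch of that idea, with hypothetical mineral and element data, might look like this:

```python
import numpy as np

# Hypothetical inputs: 5 samples, 3 minerals, 2 elements.
# M[i, j] = weight fraction of mineral j in sample i (e.g. from RockJock/XRD).
M = np.array([[0.70, 0.20, 0.10],
              [0.50, 0.30, 0.20],
              [0.60, 0.10, 0.30],
              [0.40, 0.40, 0.20],
              [0.55, 0.25, 0.20]])
# True (unknown) element content of each mineral, used here to fake XRF data:
C_true = np.array([[46.7, 0.0],    # quartz:  wt% Si, wt% Ca
                   [0.0, 40.0],    # calcite
                   [25.0, 0.5]])   # illite
E = M @ C_true + np.random.default_rng(0).normal(0, 0.1, (5, 2))  # 'XRF' data

# If the samples differ only in mineral proportions, E ≈ M @ C, so the
# per-mineral element concentrations C follow from least squares:
C_est, *_ = np.linalg.lstsq(M, E, rcond=None)
print(np.round(C_est, 1))          # should recover C_true within the noise
```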

  8. Anthropomorphic teleoperation: Controlling remote manipulators with the DataGlove

    NASA Technical Reports Server (NTRS)

    Hale, J. P., II

    1992-01-01

    A two phase effort was conducted to assess the capabilities and limitations of the DataGlove, a lightweight glove input device that can output signals in real-time based on hand shape, orientation, and movement. The first phase was a period for system integration, checkout, and familiarization in a virtual environment. The second phase was a formal experiment using the DataGlove as input device to control the protoflight manipulator arm (PFMA) - a large telerobotic arm with an 8-ft reach. The first phase was used to explore and understand how the DataGlove functions in a virtual environment, build a virtual PFMA, and consider and select a reasonable teleoperation control methodology. Twelve volunteers (six males and six females) participated in a 2 x 3 (x 2) full-factorial formal experiment using the DataGlove to control the PFMA in a simple retraction, slewing, and insertion task. Two within-subjects variables, time delay (0, 1, and 2 seconds) and PFMA wrist flexibility (rigid/flexible), were manipulated. Gender served as a blocking variable. A main effect of time delay was found for slewing and total task times. Correlations among questionnaire responses, and between questionnaire responses and session mean scores and gender were computed. The experimental data were also compared with data collected in another study that used a six degree-of-freedom handcontroller to control the PFMA in the same task. It was concluded that the DataGlove is a legitimate teleoperations input device that provides a natural, intuitive user interface. From an operational point of view, it compares favorably with other 'standard' telerobotic input devices and should be considered in future trades in teleoperation systems' designs.

  9. Spatial and temporal variability of trace element concentrations in an urban subtropical watershed, Honolulu, Hawaii

    USGS Publications Warehouse

    Heinen, De Carlo E.; Anthony, S.S.

    2002-01-01

    Trace metal concentrations in soils and in stream and estuarine sediments from a subtropical urban watershed in Hawaii are presented. The results are placed in the context of historical studies of environmental quality (water, soils, and sediment) in Hawaii to elucidate sources of trace elements and the processes responsible for their distribution. This work builds on earlier studies on sediments of Ala Wai Canal of urban Honolulu by examining spatial and temporal variations in the trace elements throughout the watershed. Natural processes and anthropogenic activity in urban Honolulu contribute to spatial and temporal variations of trace element concentrations throughout the watershed. Enrichment of trace elements in watershed soils results, in some cases, from contributions attributed to the weathering of volcanic rocks, as well as from a more variable anthropogenic input that reflects changes in land use in Honolulu. Varying concentrations of As, Cd, Cu, Pb and Zn in sediments reflect about 60 years of anthropogenic activity in Honolulu. Land use has a strong impact on the spatial distribution and abundance of selected trace elements in soils and stream sediments. As noted in continental US settings, the phasing out of Pb-alkyl fuel additives has decreased Pb inputs to recently deposited estuarine sediments. Yet a substantial historical anthropogenic Pb inventory remains in soils of the watershed, and erosion of surface soils continues to contribute to its enrichment in estuarine sediments. Concentrations of other elements (e.g., Cu, Zn, Cd), however, have not decreased with time, suggesting continued active inputs. Concentrations of Ba, Co, Cr, Ni, V and U, although elevated in some cases, typically reflect greater proportions attributed to natural sources rather than anthropogenic input. © 2002 Elsevier Science Ltd. All rights reserved.

  10. Hasse diagram as a green analytical metrics tool: ranking of methods for benzo[a]pyrene determination in sediments.

    PubMed

    Bigus, Paulina; Tsakovski, Stefan; Simeonov, Vasil; Namieśnik, Jacek; Tobiszewski, Marek

    2016-05-01

    This study presents an application of the Hasse diagram technique (HDT) as the assessment tool to select the most appropriate analytical procedures according to their greenness or the best analytical performance. The dataset consists of analytical procedures for benzo[a]pyrene determination in sediment samples, which were described by 11 variables concerning their greenness and analytical performance. Two analyses with the HDT were performed-the first one with metrological variables and the second one with "green" variables as input data. Both HDT analyses ranked different analytical procedures as the most valuable, suggesting that green analytical chemistry is not in accordance with metrology when benzo[a]pyrene in sediment samples is determined. The HDT can be used as a good decision support tool to choose the proper analytical procedure concerning green analytical chemistry principles and analytical performance merits.

  11. Has competition increased hospital technical efficiency?

    PubMed

    Lee, Keon-Hyung; Park, Jungwon; Lim, Seunghoo; Park, Sang-Chul

    2015-01-01

    Hospital competition and managed care have affected the hospital industry in various ways, including technical efficiency. Hospital efficiency has become an important topic, and it is important to measure hospital efficiency properly in order to evaluate the impact of policies on the hospital industry. The primary independent variable is hospital competition. Using the 2001-2004 inpatient discharge data from Florida, we calculate the degree of hospital competition in Florida for 4 years. Hospital efficiency scores are developed using Data Envelopment Analysis (DEA), with input and output variables selected from the American Hospital Association's Annual Survey of Hospitals for acute care general hospitals in Florida. Using the hospital efficiency score as a dependent variable, we analyze the effects of hospital competition on hospital efficiency from 2001 to 2004 and find that when a hospital was located in a less competitive market in 2003, its technical efficiency score was lower than that of hospitals in more competitive markets.
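
    The abstract does not give its DEA specification; a common choice is the input-oriented CCR envelopment model, which scores each unit by the smallest uniform input contraction that a convex combination of peer units could still match. A sketch with scipy and toy hospital data (inputs and outputs chosen here purely for illustration):

```python
import numpy as np
from scipy.optimize import linprog

def dea_ccr_input(X, Y, j0):
    """Input-oriented CCR efficiency of unit j0.
    X: (m_inputs, n_units), Y: (s_outputs, n_units)."""
    m, n = X.shape
    s = Y.shape[0]
    # Decision variables: [theta, lambda_1, ..., lambda_n]
    c = np.zeros(1 + n)
    c[0] = 1.0                       # minimize theta
    A_ub = np.zeros((m + s, 1 + n))
    b_ub = np.zeros(m + s)
    A_ub[:m, 0] = -X[:, j0]          # sum_j lam_j * x_ij <= theta * x_i,j0
    A_ub[:m, 1:] = X
    A_ub[m:, 1:] = -Y                # sum_j lam_j * y_rj >= y_r,j0
    b_ub[m:] = -Y[:, j0]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(0, None)] * (1 + n), method="highs")
    return res.x[0]

# Toy data: 2 inputs (beds, staff), 1 output (discharges), 4 hospitals.
X = np.array([[200.0, 300, 250, 400],
              [500.0, 800, 600, 900]])
Y = np.array([[1000.0, 1200, 1100, 1300]])
print([round(dea_ccr_input(X, Y, j), 3) for j in range(X.shape[1])])
```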

  12. A Monte Carlo investigation of thrust imbalance of solid rocket motor pairs

    NASA Technical Reports Server (NTRS)

    Sforzini, R. H.; Foster, W. A., Jr.; Johnson, J. S., Jr.

    1974-01-01

    A technique is described for theoretical, statistical evaluation of the thrust imbalance of pairs of solid-propellant rocket motors (SRMs) firing in parallel. Sets of the significant variables, determined as a part of the research, are selected using a random sampling technique and the imbalance calculated for a large number of motor pairs. The performance model is upgraded to include the effects of statistical variations in the ovality and alignment of the motor case and mandrel. Effects of cross-correlations of variables are minimized by selecting, for the most part, completely independent input variables, over forty in number. The imbalance is evaluated in terms of six time-varying parameters as well as eleven single-valued ones, which themselves are subject to statistical analysis. A sample study of the thrust imbalance of 50 pairs of 146 in. dia. SRMs of the type to be used on the space shuttle is presented. The FORTRAN IV computer program of the analysis and complete instructions for its use are included. Performance computation time for one pair of SRMs is approximately 35 seconds on the IBM 370/155 using the FORTRAN H compiler.
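
    In outline, the Monte Carlo procedure is: draw each independent input variable from its tolerance distribution for each motor, run the performance model for both motors of a pair, and accumulate statistics of the difference. A toy sketch follows, with three variables and a placeholder thrust expression standing in for the report's forty-plus variables and full internal-ballistics model:

```python
import numpy as np

rng = np.random.default_rng(42)

def motor_thrust():
    # Three illustrative independent input variables with assumed tolerances;
    # the thrust expression is a placeholder, not the report's model.
    burn_rate = rng.normal(10.0, 0.05)       # mm/s
    throat_d = rng.normal(1.00, 0.002)       # m
    density = rng.normal(1750.0, 5.0)        # kg/m^3
    return burn_rate * density * (np.pi / 4.0) * throat_d ** 2

pairs = 5000
imbalance = np.abs([motor_thrust() - motor_thrust() for _ in range(pairs)])
print(f"mean |imbalance|: {imbalance.mean():.1f}   "
      f"99th percentile: {np.percentile(imbalance, 99):.1f}")
```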

  13. Impact of clinical input variable uncertainties on ten-year atherosclerotic cardiovascular disease risk using new pooled cohort equations.

    PubMed

    Gupta, Himanshu; Schiros, Chun G; Sharifov, Oleg F; Jain, Apurva; Denney, Thomas S

    2016-08-31

    The recently released American College of Cardiology/American Heart Association (ACC/AHA) guideline recommends the Pooled Cohort equations for evaluating the atherosclerotic cardiovascular risk of individuals. The impact of clinical input variable uncertainties on estimates of ten-year cardiovascular risk based on the ACC/AHA guidelines is not known. Using the publicly available National Health and Nutrition Examination Survey dataset (2005-2010), we computed maximum and minimum ten-year cardiovascular risks by assuming clinically relevant input uncertainties of 0-1 year in age and ±10 % in total cholesterol, high-density lipoprotein cholesterol, and systolic blood pressure, with each variable's variation uniformly distributed. We analyzed the changes in risk category compared to the actual inputs at the 5 % and 7.5 % risk limits, as these limits define the thresholds for consideration of drug therapy in the new guidelines. The Pooled Cohort equations for risk estimation were implemented in a custom software package. Based on our input variances, changes in risk category were possible in up to 24 % of the population cohort at both the 5 % and 7.5 % risk boundary limits. This trend was noted consistently across all subgroups except African American males, where most of the cohort had ≥7.5 % baseline risk regardless of the variation in the variables. Uncertainties in the input variables can alter the risk categorization. The impact of these variances on the ten-year risk needs to be incorporated into the patient/clinician discussion and clinical decision making. Incorporating good clinical practices for the measurement of critical clinical variables and robust standardization of laboratory parameters to more stringent reference standards is extremely important for successful implementation of the new guidelines. Furthermore, the ability to customize the risk calculator inputs to better represent unique clinical circumstances specific to individual needs would be highly desirable in future versions of the risk calculator.
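
    The paper's custom implementation is not shown, but the min/max computation it describes is straightforward to sketch: sample each input uniformly over its assumed uncertainty range, evaluate the risk equation, and check whether the resulting risk interval straddles a treatment threshold. The risk() function below is a toy monotone stand-in, not the published Pooled Cohort equations:

```python
import numpy as np

def risk(age, total_chol, hdl, sbp):
    # Toy monotone stand-in for the Pooled Cohort equations - NOT the
    # published sex/race-specific model, whose coefficients are omitted here.
    x = 0.08 * age + 0.01 * total_chol - 0.03 * hdl + 0.02 * sbp
    return 1.0 / (1.0 + np.exp(6.55 - 0.5 * x))

def risk_range(age, tc, hdl, sbp, n=10_000, seed=0):
    rng = np.random.default_rng(seed)
    # Uniformly distributed uncertainties, as assumed in the study:
    # 0-1 year in age and +/-10% in TC, HDL-C and SBP.
    r = risk(age + rng.uniform(0, 1, n),
             tc * rng.uniform(0.9, 1.1, n),
             hdl * rng.uniform(0.9, 1.1, n),
             sbp * rng.uniform(0.9, 1.1, n))
    return r.min(), r.max()

lo, hi = risk_range(55, 210, 50, 130)   # hypothetical patient
for threshold in (0.05, 0.075):         # guideline decision thresholds
    print(f"{threshold:.1%}:", "category may change" if lo < threshold <= hi
          else "category stable")
```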

  14. A stacking ensemble learning framework for annual river ice breakup dates

    NASA Astrophysics Data System (ADS)

    Sun, Wei; Trevor, Bernard

    2018-06-01

    River ice breakup dates (BDs) are not merely a proxy indicator of climate variability and change, but a direct concern in the management of local ice-caused flooding. A stacking ensemble learning framework for annual river ice BDs was developed, comprising two levels of components: member models and combining models. The member models described the relations between BDs and their affecting indicators; the combining models linked the BDs predicted by each member model with the observed BDs. Specifically, Bayesian regularization back-propagation artificial neural networks (BRANN) and adaptive neuro-fuzzy inference systems (ANFIS) were employed as both member and combining models. The candidate combining models also included the simple average method (SAM). The input variables for the member models were selected by a hybrid filter and wrapper method. The performances of these models were examined using leave-one-out cross validation. As the largest unregulated river in Alberta, Canada, with ice jams frequently occurring in the vicinity of Fort McMurray, the Athabasca River at Fort McMurray was selected as the study area. Breakup dates and candidate affecting indicators for 1980-2015 were collected. The results showed that the BRANN member models generally outperformed the ANFIS member models, achieving better performance with simpler structures. The difference between the R and MI rankings of inputs in the optimal member models may imply that a linear-correlation-based filter method is feasible for generating a range of candidate inputs for further screening by other wrapper or embedded input variable selection (IVS) methods. The SAM and BRANN combining models generally outperformed all member models. The optimal SAM combining model combined two BRANN member models and reduced their average squared errors by 14.6% and 18.1%, respectively. In this study, stacking ensemble learning was applied to the forecasting of river ice breakup dates for the first time, and the approach appears promising for other river ice forecasting problems.
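
    In outline, the two-level scheme works as follows: member models are trained on the indicator data, their cross-validated predictions become the inputs of the combining level, and the combiner is either a simple average or a trained model. The sketch below substitutes generic scikit-learn MLP regressors for BRANN/ANFIS and uses synthetic data, so it illustrates the framework rather than the paper's exact models:

```python
import numpy as np
from sklearn.model_selection import LeaveOneOut, cross_val_predict
from sklearn.neural_network import MLPRegressor
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(36, 4))                              # affecting indicators
y = X @ [2.0, -1.0, 0.5, 0.0] + rng.normal(0, 0.3, 36)    # breakup-date proxy

members = [MLPRegressor(hidden_layer_sizes=(5,), max_iter=5000, random_state=0),
           MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=1)]

# Level 0: leave-one-out predictions from each member model.
Z = np.column_stack([cross_val_predict(m, X, y, cv=LeaveOneOut())
                     for m in members])

# Level 1a: simple average combiner (SAM).
sam = Z.mean(axis=1)
# Level 1b: a trained combiner linking member predictions to observations.
combiner = LinearRegression().fit(Z, y)

mse = lambda pred: float(np.mean((pred - y) ** 2))
print({"members": [mse(Z[:, i]) for i in range(2)],
       "SAM": mse(sam), "trained combiner": mse(combiner.predict(Z))})
```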

  15. [Characteristic wavelengths selection of soluble solids content of pear based on NIR spectral and LS-SVM].

    PubMed

    Fan, Shu-xiang; Huang, Wen-qian; Li, Jiang-bo; Zhao, Chun-jiang; Zhang, Bao-hua

    2014-08-01

    To improve the precision and robustness of the NIR model of soluble solids content (SSC) in pear, a total of 160 pears was used, 120 for calibration and 40 for prediction. Different spectral pretreatment methods, including standard normal variate (SNV) and multiplicative scatter correction (MSC), were applied before further analysis. A combination of genetic algorithm (GA) and successive projections algorithm (SPA) was proposed to select the most effective wavelengths after uninformative variable elimination (UVE) from the original spectra, the SNV-pretreated spectra and the MSC-pretreated spectra, respectively. The selected variables were used as the inputs of a least squares-support vector machine (LS-SVM) model for determining the SSC of pear. The results indicated that the LS-SVM model built using SNV-UVE-GA-SPA on 30 characteristic wavelengths, selected from the full spectrum of 3112 wavelengths, achieved the optimal performance. The correlation coefficient (Rp) and root mean square error of prediction (RMSEP) for the prediction set were 0.956 and 0.271 for SSC, respectively. The model is reliable and its predictions are effective. The method can meet the requirement of quick SSC measurement for pear and might be important for the development of portable instruments and online monitoring.

  16. Exploring the Impact of Different Input Data Types on Soil Variable Estimation Using the ICRAF-ISRIC Global Soil Spectral Database.

    PubMed

    Aitkenhead, Matt J; Black, Helaina I J

    2018-02-01

    Using the International Centre for Research in Agroforestry-International Soil Reference and Information Centre (ICRAF-ISRIC) global soil spectroscopy database, models were developed to estimate a number of soil variables using different input data types. These input types included: (1) site data only; (2) visible-near-infrared (Vis-NIR) diffuse reflectance spectroscopy only; (3) combined site and Vis-NIR data; (4) red-green-blue (RGB) color data only; and (5) combined site and RGB color data. The models produced variable estimation accuracy, with RGB only being generally worst and spectroscopy plus site being best. However, we showed that for certain variables, estimation accuracy levels achieved with the "site plus RGB input data" were sufficiently good to provide useful estimates (r² > 0.7). These included major elements (Ca, Si, Al, Fe), organic carbon, and cation exchange capacity. Estimates for bulk density, carbon-to-nitrogen ratio (C/N), and P were moderately good, but K was not well estimated using this model type. For the "spectra plus site" model, many more variables were well estimated, including many that are important indicators for agricultural productivity and soil health. Sum of cations, electrical conductivity, Si, Ca and Al oxides, and C/N ratio were estimated using this approach with r² values > 0.9. This work provides a mechanism for identifying the cost-effectiveness of using different model input data, with associated costs, for estimating soil variables to required levels of accuracy.

  17. Prediction of Film Cooling Effectiveness on a Gas Turbine Blade Leading Edge Using ANN and CFD

    NASA Astrophysics Data System (ADS)

    Dávalos, J. O.; García, J. C.; Urquiza, G.; Huicochea, A.; De Santiago, O.

    2018-05-01

    In this work, the area-averaged film cooling effectiveness (AAFCE) on a gas turbine blade leading edge was predicted by employing an artificial neural network (ANN) with hole diameter, injection angle, blowing ratio, hole pitch and column pitch as input variables. The database used to train the network was built using computational fluid dynamics (CFD) based on a two-level full factorial design of experiments. The CFD numerical model was validated with an experimental rig, in which a first-stage blade of a gas turbine was represented by a cylindrical specimen. The ANN architecture was composed of three layers with four neurons in the hidden layer, and Levenberg-Marquardt was selected as the ANN optimization algorithm. The AAFCE was successfully predicted by the ANN, with a regression coefficient R² > 0.99 and a root mean square error RMSE = 0.0038. The ANN weight coefficients were used to estimate the relative importance of the input parameters. Blowing ratio was the most influential parameter, with a relative importance of 40.36 %, followed by hole diameter. Additionally, the relationship between the input parameters was analyzed by using the ANN model.
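
    The abstract notes that relative importance was estimated from the network's weight coefficients; Garson's algorithm is one standard way to do this for a single-hidden-layer network, though whether it is the authors' exact method is not stated. A sketch with hypothetical weights:

```python
import numpy as np

def garson_importance(W_in, w_out):
    """Garson-style relative importance from the weights of a single-
    hidden-layer network. W_in: (n_inputs, n_hidden), w_out: (n_hidden,)."""
    contrib = np.abs(W_in) * np.abs(w_out)          # input->hidden->output paths
    contrib /= contrib.sum(axis=0, keepdims=True)   # share within each neuron
    importance = contrib.sum(axis=1)                # sum over hidden neurons
    return importance / importance.sum()

# Hypothetical trained weights: 5 inputs (hole diameter, injection angle,
# blowing ratio, hole pitch, column pitch) -> 4 hidden neurons -> 1 output.
rng = np.random.default_rng(3)
W_in, w_out = rng.normal(size=(5, 4)), rng.normal(size=4)
print(np.round(garson_importance(W_in, w_out) * 100, 1))  # percent importance
```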

  18. Pharmaceutical manufacturing facility discharges can substantially increase the pharmaceutical load to U.S. wastewaters

    USGS Publications Warehouse

    Scott, Tia-Marie; Phillips, Patrick J.; Kolpin, Dana W.; Colella, Kaitlyn M.; Furlong, Edward T.; Foreman, William T.; Gray, James L.

    2018-01-01

    Discharges from pharmaceutical manufacturing facilities (PMFs) previously have been identified as important sources of pharmaceuticals to the environment. Yet few studies are available to establish the influence of PMFs on the pharmaceutical source contribution to wastewater treatment plants (WWTPs) and waterways at the national scale. Consequently, a national network of 13 WWTPs receiving PMF discharges, six WWTPs with no PMF input, and one WWTP that transitioned through a PMF closure was selected from across the United States to assess the influence of PMF inputs on pharmaceutical loading to WWTPs. Effluent samples were analyzed for 120 pharmaceuticals and pharmaceutical degradates. Of these, 33 pharmaceuticals had concentrations substantially higher in PMF-influenced effluent (maximum 555,000 ng/L) than in effluent from control sites (maximum 175 ng/L). Concentrations in WWTPs receiving PMF input are variable, as discharges from PMFs are episodic, indicating that production activities can vary substantially over relatively short periods (several months) and have the potential to transition rapidly to other pharmaceutical products. Results show that PMFs are an important, national-scale source of pharmaceuticals to the environment.

  19. Electronic stethoscope with frequency shaping and infrasonic recording capabilities.

    PubMed

    Gordon, E S; Lagerwerff, J M

    1976-03-01

    A small electronic stethoscope with variable frequency response characteristics has been developed for aerospace and research applications. The system includes a specially designed piezoelectric pickup and amplifier with an overall frequency response from 0.7 to 5,000 Hz (-3 dB points) and selective bass and treble boost or cut of up to 15 dB. A steep-slope, high-pass filter can be switched in for ordinary clinical auscultation without overload distortion from strong infrasonic signal inputs. A commercial stethoscope-type headset, selected for best overall response, is used, which can adequately handle up to 100 mW of audio power delivered from the amplifier. The active components of the amplifier consist of only four opamp-type integrated circuits.

  20. NLEdit: A generic graphical user interface for Fortran programs

    NASA Technical Reports Server (NTRS)

    Curlett, Brian P.

    1994-01-01

    NLEdit is a generic graphical user interface for the preprocessing of Fortran namelist input files. The interface consists of a menu system, a message window, a help system, and data entry forms. A form is generated for each namelist. The form has an input field for each namelist variable along with a one-line description of that variable. Detailed help information, default values, and minimum and maximum allowable values can all be displayed via menu picks. Inputs are processed through a scientific calculator program that allows complex equations to be used instead of simple numeric inputs. A custom user interface is generated simply by entering information about the namelist input variables into an ASCII file. There is no need to learn a new graphics system or programming language. NLEdit can be used as a stand-alone program or as part of a larger graphical user interface. Although NLEdit is intended for files using namelist format, it can be easily modified to handle other file formats.
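
    For readers unfamiliar with the format NLEdit targets, a Fortran namelist groups "name = value" pairs between an "&group" header and a closing "/". The hypothetical example below shows such a file and a minimal Python parse of the same structure (NLEdit itself reads a separate ASCII description file; this is only to illustrate the namelist format):

```python
import re

# A hypothetical Fortran namelist input file of the kind NLEdit preprocesses.
namelist_text = """
&flow_inputs
  mach   = 0.85      ! freestream Mach number
  alpha  = 2.0       ! angle of attack, degrees
  niter  = 500       ! maximum iterations
/
"""

# Minimal parse: strip comments, collect name = value pairs per group.
groups = {}
for m in re.finditer(r"&(\w+)(.*?)^/", namelist_text, re.S | re.M):
    body = re.sub(r"!.*", "", m.group(2))
    groups[m.group(1)] = dict(re.findall(r"(\w+)\s*=\s*([^\s,]+)", body))
print(groups)  # {'flow_inputs': {'mach': '0.85', 'alpha': '2.0', 'niter': '500'}}
```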

  1. Orientation selectivity and the functional clustering of synaptic inputs in primary visual cortex

    PubMed Central

    Wilson, Daniel E.; Whitney, David E.; Scholl, Benjamin; Fitzpatrick, David

    2016-01-01

    The majority of neurons in primary visual cortex are tuned for stimulus orientation, but the factors that account for the range of orientation selectivities exhibited by cortical neurons remain unclear. To address this issue, we used in vivo 2-photon calcium imaging to characterize the orientation tuning and spatial arrangement of synaptic inputs to the dendritic spines of individual pyramidal neurons in layer 2/3 of ferret visual cortex. The summed synaptic input to individual neurons reliably predicted the neuron's orientation preference, but did not account for differences in orientation selectivity among neurons. These differences reflected a robust input-output nonlinearity that could not be explained by spike threshold alone, and was strongly correlated with the spatial clustering of co-tuned synaptic inputs within the dendritic field. Dendritic branches with more co-tuned synaptic clusters exhibited greater rates of local dendritic calcium events, supporting a prominent role for functional clustering of synaptic inputs in the dendritic nonlinearities that shape orientation selectivity. PMID:27294510

  2. Computing Shapes Of Cascade Diffuser Blades

    NASA Technical Reports Server (NTRS)

    Tran, Ken; Prueger, George H.

    1993-01-01

    Computer program generates sizes and shapes of cascade-type blades for use in axial or radial turbomachine diffusers. Generates shapes of blades rapidly, incorporating extensive cascade data to determine optimum incidence and deviation angles for blade design, based on the 65-series database of the National Advisory Committee for Aeronautics (NACA). Allows great variability in blade profile through input variables. Also provides for design of three-dimensional blades by allowing variable blade stacking. Enables designer to obtain computed blade-geometry data in various forms: as input for blade-loading analysis; as input for quasi-three-dimensional analysis of flow; or as points for transfer to computer-aided design.

  3. A neuromorphic VLSI device for implementing 2-D selective attention systems.

    PubMed

    Indiveri, G

    2001-01-01

    Selective attention is a mechanism used to sequentially select and process salient subregions of the input space, while suppressing inputs arriving from nonsalient regions. By processing small amounts of sensory information in a serial fashion, rather than attempting to process all the sensory data in parallel, this mechanism overcomes the problem of flooding systems of limited processing capacity with sensory inputs. It is found in many biological systems and can be a useful engineering tool for developing artificial systems that need to process sensory data in real time. In this paper we present a neuromorphic hardware model of a selective attention mechanism implemented on a very large scale integration (VLSI) chip, using analog circuits. The chip makes use of a spike-based representation for receiving input signals, transmitting output signals and for shifting the selection of the attended input stimulus over time. It can be interfaced to neuromorphic sensors and actuators, for implementing multichip selective attention systems. We describe the characteristics of the circuits used in the architecture and present experimental data measured from the system.

  4. Fast Solution in Sparse LDA for Binary Classification

    NASA Technical Reports Server (NTRS)

    Moghaddam, Baback

    2010-01-01

    An algorithm that performs sparse linear discriminant analysis (Sparse-LDA) finds near-optimal solutions in far less time than the prior art when specialized to binary classification (two classes). Sparse-LDA is a type of feature- or variable-selection problem with numerous applications in statistics, machine learning, computer vision, computational finance, operations research, and bio-informatics. Because of their combinatorial nature, feature- or variable-selection problems are NP-hard or computationally intractable in cases involving more than 30 variables or features. Therefore, one typically seeks approximate solutions by means of greedy search algorithms. The prior Sparse-LDA algorithm was a greedy algorithm that considered the best variable or feature to add to, or delete from, its subsets in order to maximally discriminate between multiple classes of data. The present algorithm is designed for the special but prevalent case of 2-class or binary classification (e.g. 1 vs. 0, functioning vs. malfunctioning, or change versus no change). The present algorithm provides near-optimal solutions on large real-world datasets having hundreds or even thousands of variables or features (e.g. selecting the fewest wavelength bands in a hyperspectral sensor to do terrain classification) and does so in typical computation times of minutes, as compared to days or weeks taken by the prior art. Sparse-LDA requires solving generalized eigenvalue problems for a large number of variable subsets (represented by the submatrices of the input within-class and between-class covariance matrices). In the general (full-rank) case, the amount of computation scales at least cubically with the number of variables, and thus the size of the problems that can be solved is limited accordingly. However, in binary classification, the principal eigenvalues can be found using a special analytic formula, without resorting to costly iterative techniques. The present algorithm exploits this analytic form along with the inherent sequential nature of greedy search itself. Together this enables the use of highly efficient partitioned-matrix-inverse techniques that result in large speedups of computation in both the forward-selection and backward-elimination stages of greedy algorithms in general.
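
    To make the greedy scheme concrete, the sketch below runs plain forward selection on the two-class Fisher criterion J(S) = d_S' W_S^{-1} d_S (d the class-mean difference, W the pooled within-class covariance), which in the binary case plays the role of the principal generalized eigenvalue. It re-solves each subsystem directly rather than using the partitioned-inverse speedup the text describes, so it shows the selection logic, not the fast implementation:

```python
import numpy as np

def greedy_binary_lda(X0, X1, k):
    """Greedy forward selection maximizing the two-class Fisher criterion.
    A plain sketch; the fast version updates partitioned matrix inverses."""
    d = X1.mean(0) - X0.mean(0)                       # class-mean difference
    W = (np.cov(X0, rowvar=False) + np.cov(X1, rowvar=False)) / 2
    selected, remaining = [], list(range(X0.shape[1]))
    for _ in range(k):
        def score(j):
            S = selected + [j]
            return d[S] @ np.linalg.solve(W[np.ix_(S, S)], d[S])
        best = max(remaining, key=score)
        selected.append(best)
        remaining.remove(best)
    return selected

rng = np.random.default_rng(0)
X0 = rng.normal(0, 1, size=(100, 20))
X1 = rng.normal(0, 1, size=(100, 20))
X1[:, [3, 7]] += 1.5                      # two genuinely discriminative features
print(greedy_binary_lda(X0, X1, 3))       # expect 3 and 7 among the picks
```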

  5. Neural Network Machine Learning and Dimension Reduction for Data Visualization

    NASA Technical Reports Server (NTRS)

    Liles, Charles A.

    2014-01-01

    Neural network machine learning in computer science is a continuously developing field of study. Although neural network models have been developed which can accurately predict a numeric value or nominal classification, a general purpose method for constructing neural network architecture has yet to be developed. Computer scientists are often forced to rely on a trial-and-error process of developing and improving accurate neural network models. In many cases, models are constructed from a large number of input parameters. Understanding which input parameters have the greatest impact on the prediction of the model is often difficult to surmise, especially when the number of input variables is very high. This challenge is often labeled the "curse of dimensionality" in scientific fields. However, techniques exist for reducing the dimensionality of problems to just two dimensions. Once a problem's dimensions have been mapped to two dimensions, it can be easily plotted and understood by humans. The ability to visualize a multi-dimensional dataset can provide a means of identifying which input variables have the highest effect on determining a nominal or numeric output. Identifying these variables can provide a better means of training neural network models; models can be more easily and quickly trained using only input variables which appear to affect the outcome variable. The purpose of this project is to explore varying means of training neural networks and to utilize dimensional reduction for visualizing and understanding complex datasets.
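
    As one concrete instance of the two-dimensional reduction described above (the summary does not name a specific technique), principal component analysis projects a high-dimensional input set onto two axes so it can be plotted and inspected by eye:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 30))            # 200 samples, 30 input parameters
y = X[:, 0] + 0.5 * X[:, 1]               # outcome driven by two of the inputs

# Project the 30-dimensional inputs onto two principal components; the
# resulting (200, 2) array is ready for a scatter plot colored by y.
X2 = PCA(n_components=2).fit_transform(X)
print(X2.shape)
```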

  6. Intercomparison of spectral irradiance measurements and provision of alternative radiation scheme for CCMs of middle atmosphere

    NASA Astrophysics Data System (ADS)

    Pagaran, Joseph; Weber, Mark; Burrows, John P.

    The Sun's radiative output (total solar irradiance or TSI) determines the thermal structure of the Earth's atmosphere. Its variability is a strong function of wavelength and drives the photochemistry and general circulation. Contributions to TSI variability from UV wavelengths below 400 nm, e.g. over the 27-day solar rotation, are estimated to be in the 40-60% range, based on three decades of UV and about a decade of vis-IR observations. Significant progress in the UV/vis-IR regions has been achieved with daily monitoring from SCIAMACHY aboard Envisat (ESA) since 2002 and by SIM aboard SORCE (NASA) from about a year later. In this contribution, we intercompare SSI measurements from SCIAMACHY, SIM and the RGB filters of SPM/VIRGO on SoHO: (a) same-day spectra and (b) several 27-day time series of spectral measurements, both as irradiance and as irradiance integrated over selected wavelength intervals. Finally, we show how SSI measurements from GOME and SOLSTICE, in addition to SCIAMACHY and SIM, can be modeled together with the solar proxies F10.7 cm, Mg II and the photometric sunspot index (PSI) to derive daily SSI variability for the period 1947-2008. The derived variabilities are currently being used as solar input to Bremen's 3D-CTM and are recommended as an extended alternative to Berlin's FUBRaD radiation scheme. This proxy-based radiation scheme is compared with the SATIRE, NRLSSI (Lean et al.), SUSIM, SSAI (DeLand et al.) and SIP (Solar2000) models. The use of realistic, spectrally resolved solar input to CCMs serves to better understand the effects of solar variability on chemistry and temperature in the middle atmosphere over several decades.

  7. Measurand transient signal suppressor

    NASA Technical Reports Server (NTRS)

    Bozeman, Richard J., Jr. (Inventor)

    1994-01-01

    A transient signal suppressor for use in a control system which is adapted to respond to a change in a physical parameter whenever it crosses a predetermined threshold value in a selected direction of increasing or decreasing values with respect to the threshold value and is sustained for a selected discrete time interval is presented. The suppressor includes a sensor transducer for sensing the physical parameter and generating an electrical input signal whenever the sensed physical parameter crosses the threshold level in the selected direction. A manually operated switch is provided for adapting the suppressor to produce an output drive signal whenever the physical parameter crosses the threshold value in the selected direction of increasing or decreasing values. A time delay circuit is selectively adjustable for suppressing the transducer input signal for a preselected one of a plurality of available discrete suppression times and producing an output signal only if the input signal is sustained for a time greater than the selected suppression time. An electronic gate is coupled to receive the transducer input signal and the timer output signal and produce an output drive signal for energizing a control relay whenever the transducer input is a non-transient signal which is sustained beyond the selected time interval.
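
    Stripped of the analog circuitry, the core logic is a hold-time debounce: assert the output only when the threshold crossing persists for the selected suppression interval. A software sketch of that behaviour (parameter names are illustrative, not from the patent):

```python
def suppressor(samples, threshold, hold_count, rising=True):
    """Emit True only when the signal stays beyond `threshold` (in the
    selected direction) for `hold_count` consecutive samples - a software
    analogue of the patent's selectable suppression time."""
    run = 0
    for s in samples:
        crossed = s > threshold if rising else s < threshold
        run = run + 1 if crossed else 0
        yield run >= hold_count

signal = [0, 9, 0, 9, 9, 9, 9, 0]   # a transient spike, then a sustained one
print(list(suppressor(signal, threshold=5, hold_count=3)))
# [False, False, False, False, False, True, True, False]
```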

  8. Olfactory Bulb Deep Short-Axon Cells Mediate Widespread Inhibition of Tufted Cell Apical Dendrites

    PubMed Central

    LaRocca, Greg

    2017-01-01

    In the main olfactory bulb (MOB), the first station of sensory processing in the olfactory system, GABAergic interneuron signaling shapes principal neuron activity to regulate olfaction. However, a lack of known selective markers for MOB interneurons has strongly impeded cell-type-selective investigation of interneuron function. Here, we identify the first selective marker of glomerular layer-projecting deep short-axon cells (GL-dSACs) and investigate systematically the structure, abundance, intrinsic physiology, feedforward sensory input, neuromodulation, synaptic output, and functional role of GL-dSACs in the mouse MOB circuit. GL-dSACs are located in the internal plexiform layer, where they integrate centrifugal cholinergic input with highly convergent feedforward sensory input. GL-dSAC axons arborize extensively across the glomerular layer to provide highly divergent yet selective output onto interneurons and principal tufted cells. GL-dSACs are thus capable of shifting the balance of principal tufted versus mitral cell activity across large expanses of the MOB in response to diverse sensory and top-down neuromodulatory input. SIGNIFICANCE STATEMENT The identification of cell-type-selective molecular markers has fostered tremendous insight into how distinct interneurons shape sensory processing and behavior. In the main olfactory bulb (MOB), inhibitory circuits regulate the activity of principal cells precisely to drive olfactory-guided behavior. However, selective markers for MOB interneurons remain largely unknown, limiting mechanistic understanding of olfaction. Here, we identify the first selective marker of a novel population of deep short-axon cell interneurons with superficial axonal projections to the sensory input layer of the MOB. Using this marker, together with immunohistochemistry, acute slice electrophysiology, and optogenetic circuit mapping, we reveal that this novel interneuron population integrates centrifugal cholinergic input with broadly tuned feedforward sensory input to modulate principal cell activity selectively. PMID:28003347

  9. Projection of climatic suitability for Aedes albopictus Skuse (Culicidae) in Europe under climate change conditions

    NASA Astrophysics Data System (ADS)

    Fischer, Dominik; Thomas, Stephanie Margarete; Niemitz, Franziska; Reineking, Björn; Beierkuhnlein, Carl

    2011-07-01

    During the last decades the disease vector Aedes albopictus (Ae. albopictus) has rapidly spread around the globe. The spread of this species raises serious public health concerns. Here, we model the present distribution and the future climatic suitability of Europe for this vector in the face of climate change. In order to achieve the most realistic current prediction and future projection, we compare the performance of four different modelling approaches, differentiated by the selection of climate variables (based on expert knowledge vs. statistical criteria) and by the geographical range of presence records (native range vs. global range). First, models of the native and global range were built with MaxEnt and were based either on (1) statistically selected climatic input variables or (2) input variables selected with expert knowledge from the literature. Native models show high model performance (AUC: 0.91-0.94) for the native range, but do not predict the European distribution well (AUC: 0.70-0.72). Models based on the global distribution of the species, however, were able to identify all regions where Ae. albopictus is currently established, including Europe (AUC: 0.89-0.91). In a second step, the modelled bioclimatic envelope of the global range was projected to future climatic conditions in Europe using two emission scenarios implemented in the regional climate model COSMO-CLM for three time periods: 2011-2040, 2041-2070, and 2071-2100. For both global-driven models, the results indicate that climatically suitable areas for the establishment of Ae. albopictus will increase in western and central Europe already in 2011-2040 and, with a temporal delay, in eastern Europe. On the other hand, a decline in climatically suitable areas in southern Europe is pronounced in the expert-knowledge-based model. Our projections appear unaffected by non-analogue climate, as this is not detected by Multivariate Environmental Similarity Surface analysis. The generated risk maps can aid in identifying suitable habitats for Ae. albopictus and hence support monitoring and control activities to avoid disease vector establishment.

  10. Atlas-based automatic measurements of the morphology of the tibiofemoral joint

    NASA Astrophysics Data System (ADS)

    Brehler, M.; Thawait, G.; Shyr, W.; Ramsay, J.; Siewerdsen, J. H.; Zbijewski, W.

    2017-03-01

    Purpose: Anatomical metrics of the tibiofemoral joint support assessment of joint stability and surgical planning. We propose an automated, atlas-based algorithm to streamline the measurements in 3D images of the joint and reduce userdependence of the metrics arising from manual identification of the anatomical landmarks. Methods: The method is initialized with coarse registrations of a set of atlas images to the fixed input image. The initial registrations are then refined separately for the tibia and femur and the best matching atlas is selected. Finally, the anatomical landmarks of the best matching atlas are transformed onto the input image by deforming a surface model of the atlas to fit the shape of the tibial plateau in the input image (a mesh-to-volume registration). We apply the method to weight-bearing volumetric images of the knee obtained from 23 subjects using an extremity cone-beam CT system. Results of the automated algorithm were compared to an expert radiologist for measurements of Static Alignment (SA), Medial Tibial Slope (MTS) and Lateral Tibial Slope (LTS). Results: Intra-reader variability as high as 10% for LTS and 7% for MTS (ratio of standard deviation to the mean in repeated measurements) was found for expert radiologist, illustrating the potential benefits of an automated approach in improving the precision of the metrics. The proposed method achieved excellent registration of the atlas mesh to the input volumes. The resulting automated measurements yielded high correlations with expert radiologist, as indicated by correlation coefficients of 0.72 for MTS, 0.8 for LTS, and 0.89 for SA. Conclusions: The automated method for measurement of anatomical metrics of the tibiofemoral joint achieves high correlation with expert radiologist without the need for time consuming and error prone manual selection of landmarks.

  11. Atlas-based automatic measurements of the morphology of the tibiofemoral joint.

    PubMed

    Brehler, M; Thawait, G; Shyr, W; Ramsay, J; Siewerdsen, J H; Zbijewski, W

    2017-02-11

    Anatomical metrics of the tibiofemoral joint support assessment of joint stability and surgical planning. We propose an automated, atlas-based algorithm to streamline the measurements in 3D images of the joint and reduce user-dependence of the metrics arising from manual identification of the anatomical landmarks. The method is initialized with coarse registrations of a set of atlas images to the fixed input image. The initial registrations are then refined separately for the tibia and femur and the best matching atlas is selected. Finally, the anatomical landmarks of the best matching atlas are transformed onto the input image by deforming a surface model of the atlas to fit the shape of the tibial plateau in the input image (a mesh-to-volume registration). We apply the method to weight-bearing volumetric images of the knee obtained from 23 subjects using an extremity cone-beam CT system. Results of the automated algorithm were compared to an expert radiologist for measurements of Static Alignment (SA), Medial Tibial Slope (MTS) and Lateral Tibial Slope (LTS). Intra-reader variability as high as ~10% for LTS and 7% for MTS (ratio of standard deviation to the mean in repeated measurements) was found for expert radiologist, illustrating the potential benefits of an automated approach in improving the precision of the metrics. The proposed method achieved excellent registration of the atlas mesh to the input volumes. The resulting automated measurements yielded high correlations with expert radiologist, as indicated by correlation coefficients of 0.72 for MTS, 0.8 for LTS, and 0.89 for SA. The automated method for measurement of anatomical metrics of the tibiofemoral joint achieves high correlation with expert radiologist without the need for time consuming and error prone manual selection of landmarks.

  12. Data driven model generation based on computational intelligence

    NASA Astrophysics Data System (ADS)

    Gemmar, Peter; Gronz, Oliver; Faust, Christophe; Casper, Markus

    2010-05-01

    The simulation of discharges at a local gauge or the modeling of large-scale river catchments is effectively involved in estimation and decision tasks of hydrological research and practical applications like flood prediction or water resource management. However, modeling such processes using analytical or conceptual approaches is made difficult by both the complexity of process relations and the heterogeneity of processes. It has been shown many times that unknown or assumed process relations can in principle be described by computational methods, and that system models can be derived automatically from observed behavior or measured process data. This study describes the development of hydrological process models using computational methods, including Fuzzy logic and artificial neural networks (ANN), in a comprehensive and automated manner. Methods: We consider a closed concept for data-driven development of hydrological models based on measured (experimental) data. The concept is centered on a Fuzzy system using rules of Takagi-Sugeno-Kang type, which formulate the input-output relation in a generic structure like R_i: IF q(t) is low AND ... THEN q(t+Δt) = a_i0 + a_i1·q(t) + a_i2·p(t−Δt_i1) + a_i3·p(t+Δt_i2) + .... The rule's premise part (IF) describes process states involving available process information, e.g. the actual outlet q(t) is low, where low is one of several Fuzzy sets defined over the variable q(t). The rule's conclusion (THEN) estimates the expected outlet q(t+Δt) by a linear function over selected system variables, e.g. the actual outlet q(t) and previous and/or forecasted precipitation p(t±Δt_ik). In the case of river catchment modeling we use head gauges, tributary and upriver gauges in the conclusion part as well. In addition, we consider temperature and temporal (season) information in the premise part. By creating a set of rules R = {R_i | i = 1,...,N}, the space of process states can be covered as concisely as necessary. Model adaptation is achieved by finding an optimal set A = (a_ij) of conclusion parameters with respect to a defined rating function and the experimental data. To find A, we use, for example, a linear equation solver and an RMSE rating function. In practical process models, the number of Fuzzy sets and the corresponding number of rules is fairly low. Nevertheless, creating the optimal model requires some experience. Therefore, we improved this development step by methods for the automatic generation of Fuzzy sets, rules, and conclusions. Basically, the model achievement depends to a great extent on the selection of the conclusion variables. The aim is that variables having the most influence on the system reaction are considered and superfluous ones are neglected. At first, we use Kohonen maps, a specialized ANN, to identify relevant input variables from the large set of available system variables. A greedy algorithm selects a comprehensive set of dominant and uncorrelated variables. Next, the premise variables are analyzed with clustering methods (e.g. Fuzzy C-means) and Fuzzy sets are then derived from cluster centers and outlines. The rule base is automatically constructed by permutation of the Fuzzy sets of the premise variables. Finally, the conclusion parameters are calculated, the total coverage of the input space is iteratively tested with experimental data, rarely firing rules are combined, and coarse coverage of sensitive process states results in refined Fuzzy sets and rules. Results: The described methods were implemented and integrated in a development system for process models.
A series of models has already been built, e.g. for rainfall-runoff modeling or for flood prediction (up to 72 hours) in river catchments. The models required significantly less development effort and showed improved simulation results compared to conventional models. The models can be used operationally; a simulation takes only a few minutes on a standard PC, e.g. for a gauge forecast (up to 72 hours) for the whole Mosel (Germany) river catchment.
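
    A compact way to see the identification step is a first-order Takagi-Sugeno-Kang system with Gaussian memberships over q(t) and linear conclusions whose parameters are fitted by one global least-squares solve, as the text describes. The sketch below uses synthetic discharge/precipitation data and only two fuzzy sets, so it illustrates the mechanics rather than the authors' system:

```python
import numpy as np

rng = np.random.default_rng(0)
T = 300
p = rng.gamma(2.0, 2.0, T)                # synthetic precipitation series
q = np.zeros(T)                           # synthetic discharge series
for t in range(1, T):
    q[t] = 0.8 * q[t - 1] + 0.5 * p[t - 1]

# Premise: two fuzzy sets ('low', 'high') over q(t), Gaussian memberships.
centers = np.array([q.mean() - q.std(), q.mean() + q.std()])
sigma = q.std()

def firing(qt):
    w = np.exp(-0.5 * ((qt - centers) / sigma) ** 2)
    return w / w.sum()                    # normalized rule firing strengths

# Conclusions: q(t+1) = a_i0 + a_i1*q(t) + a_i2*p(t) for each rule i.
X = np.column_stack([np.ones(T - 1), q[:-1], p[:-1]])
W = np.array([firing(qt) for qt in q[:-1]])          # (T-1, 2) rule weights
Phi = np.hstack([W[:, [i]] * X for i in range(2)])   # weighted regressors
a, *_ = np.linalg.lstsq(Phi, q[1:], rcond=None)      # one linear solve for A
rmse = np.sqrt(np.mean((Phi @ a - q[1:]) ** 2))
print("conclusion parameters:", np.round(a, 3), " RMSE:", round(rmse, 4))
```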

  13. Input Variability Facilitates Unguided Subcategory Learning in Adults

    ERIC Educational Resources Information Center

    Eidsvåg, Sunniva Sørhus; Austad, Margit; Plante, Elena; Asbjørnsen, Arve E.

    2015-01-01

    Purpose: This experiment investigated whether input variability would affect initial learning of noun gender subcategories in an unfamiliar, natural language (Russian), as it is known to assist learning of other grammatical forms. Method: Forty adults (20 men, 20 women) were familiarized with examples of masculine and feminine Russian words. Half…

  14. Assessment of the relationship between rural non-point source pollution and economic development in the Three Gorges Reservoir Area.

    PubMed

    Zhang, Tong; Ni, Jiupai; Xie, Deti

    2016-04-01

    This study investigates the relationship between rural non-point source (NPS) pollution and economic development in the Three Gorges Reservoir Area (TGRA) by using the Environmental Kuznets Curve (EKC) hypothesis for the first time. Five types of pollution indicators, namely, fertilizer input density (FD), pesticide input density (PD), agricultural film input density (AD), grain residues impact (GI), and livestock manure impact (MI), were selected as rural NPS pollutant variables. Rural net income per capita was used as the indicator of economic development. Pollution load generated by agricultural inputs (consumption of fertilizer, pesticide, and agricultural film) varied with economic growth with inverted U-shaped features. The predicted turning points for FD, PD, and AD were at rural net income per capita levels of 6167.64, 6205.02, and 4955.29 CNY, respectively, all of which had already been surpassed. However, the relationships between agricultural waste outputs (grain residues and livestock manure) and economic growth were inconsistent with the EKC hypothesis, which reflected the current trends of agricultural economic structure in the TGRA. Given that several factors aside from economic development level could influence pollutant generation in rural NPS, a further examination with long-run data support should be performed to understand the relationship between rural NPS pollution and income level.
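
    The EKC test behind those turning points is a quadratic regression of pollutant density on income: an inverted U requires a negative coefficient on squared income, and the turning point falls at -b1/(2*b2). A toy sketch (synthetic data; the study's actual panel specification may differ):

```python
import numpy as np

# Toy series: pollutant input density vs. rural net income per capita (CNY).
rng = np.random.default_rng(0)
income = np.linspace(1000, 9000, 30)
pollution = -0.9e-6 * (income - 6000) ** 2 + 25 + rng.normal(0, 0.5, 30)

# EKC test: fit pollution = b0 + b1*income + b2*income^2; an inverted U
# requires b2 < 0, with the turning point at income* = -b1 / (2*b2).
b2, b1, b0 = np.polyfit(income, pollution, 2)
print("inverted U:", b2 < 0, " turning point (CNY):", round(-b1 / (2 * b2)))
```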

  15. Apparatus and method for operating internal combustion engines from variable mixtures of gaseous fuels

    DOEpatents

    Heffel, James W [Lake Matthews, CA; Scott, Paul B [Northridge, CA; Park, Chan Seung [Yorba Linda, CA

    2011-11-01

    An apparatus and method for utilizing any arbitrary mixture ratio of multiple fuel gases having differing combustion characteristics, such as natural gas and hydrogen gas, within an internal combustion engine. The gaseous fuel composition ratio is first sensed, such as by thermal conductivity, infrared signature, sound propagation speed, or equivalent mixture differentiation mechanisms and combinations thereof which are utilized as input(s) to a "multiple map" engine control module which modulates selected operating parameters of the engine, such as fuel injection and ignition timing, in response to the proportions of fuel gases available so that the engine operates correctly and at high efficiency irrespective of the gas mixture ratio being utilized. As a result, an engine configured according to the teachings of the present invention may be fueled from at least two different fuel sources without admixing constraints.

  16. Apparatus and method for operating internal combustion engines from variable mixtures of gaseous fuels

    DOEpatents

    Heffel, James W.; Scott, Paul B.

    2003-09-02

    An apparatus and method for utilizing any arbitrary mixture ratio of multiple fuel gases having differing combustion characteristics, such as natural gas and hydrogen gas, within an internal combustion engine. The gaseous fuel composition ratio is first sensed, such as by thermal conductivity, infrared signature, sound propagation speed, or equivalent mixture differentiation mechanisms and combinations thereof which are utilized as input(s) to a "multiple map" engine control module which modulates selected operating parameters of the engine, such as fuel injection and ignition timing, in response to the proportions of fuel gases available so that the engine operates correctly and at high efficiency irrespective of the gas mixture ratio being utilized. As a result, an engine configured according to the teachings of the present invention may be fueled from at least two different fuel sources without admixing constraints.

  17. Sparse distributed memory and related models

    NASA Technical Reports Server (NTRS)

    Kanerva, Pentti

    1992-01-01

    Described here is sparse distributed memory (SDM) as a neural-net associative memory. It is characterized by two weight matrices and by a large internal dimension - the number of hidden units is much larger than the number of input or output units. The first matrix, A, is fixed and possibly random, and the second matrix, C, is modifiable. The SDM is compared and contrasted to (1) computer memory, (2) correlation-matrix memory, (3) feed-forward artificial neural network, (4) cortex of the cerebellum, (5) Marr and Albus models of the cerebellum, and (6) Albus' cerebellar model arithmetic computer (CMAC). Several variations of the basic SDM design are discussed: the selected-coordinate and hyperplane designs of Jaeckel, the pseudorandom associative neural memory of Hassoun, and SDM with real-valued input variables by Prager and Fallside. SDM research conducted mainly at the Research Institute for Advanced Computer Science (RIACS) in 1986-1991 is highlighted.
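
    A minimal numpy rendering of the basic design (fixed random address matrix A, modifiable counter matrix C, Hamming-radius activation) shows the read/write mechanics; the sizes and activation radius here are illustrative, not Kanerva's recommended values:

```python
import numpy as np

rng = np.random.default_rng(0)
N, M, D = 256, 2000, 256           # address bits, hard locations, data bits
A = rng.integers(0, 2, (M, N))     # fixed, random address matrix
C = np.zeros((M, D), dtype=int)    # modifiable content (counter) matrix
R = 112                            # activation radius (Hamming distance)

def active(addr):
    return np.count_nonzero(A != addr, axis=1) <= R

def write(addr, data):
    C[active(addr)] += 2 * data - 1          # accumulate +/-1 counters

def read(addr):
    return (C[active(addr)].sum(axis=0) > 0).astype(int)  # majority vote

word = rng.integers(0, 2, D)
addr = rng.integers(0, 2, N)
write(addr, word)
noisy = addr.copy()                          # corrupt 20 of 256 address bits
flip = rng.choice(N, size=20, replace=False)
noisy[flip] ^= 1
print("recovered from noisy address:", np.array_equal(read(noisy), word))
```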

  18. Preposition accuracy on a sentence repetition task in school age Spanish-English bilinguals.

    PubMed

    Taliancich-Klinger, Casey L; Bedore, Lisa M; Peña, Elizabeth D

    2018-01-01

    Preposition knowledge is important for academic success. The goal of this project was to examine how different variables such as English input and output, Spanish preposition score, mother education level, and age of English exposure (AoEE) may have played a role in children's preposition knowledge in English. 148 Spanish-English children between 7;0 and 9;11 produced prepositions in English and Spanish on a sentence repetition task from an experimental version of the Bilingual English Spanish Assessment Middle Extension (Peña, Bedore, Gutierrez-Clellen, Iglesias & Goldstein, in development). English input and output accounted for most of the variance in English preposition score. The importance of language-specific experiences in the development of prepositions is discussed. Competition for selection of appropriate prepositions in English and Spanish is discussed as potentially influencing low overall preposition scores in English and Spanish.

  19. Torque ripple reduction of brushless DC motor based on adaptive input-output feedback linearization.

    PubMed

    Shirvani Boroujeni, M; Markadeh, G R Arab; Soltani, J

    2017-09-01

    Torque ripple reduction of brushless DC motors (BLDCs) is an interesting subject in variable-speed AC drives. In this paper, a mathematical expression for the torque ripple harmonics is first obtained. Then, for a non-ideal BLDC motor with known harmonic content of the back-EMF, the calculation of the desired reference current amplitudes required to eliminate selected harmonics of the torque ripple is reviewed. In order to inject the reference harmonic currents into the motor windings, an Adaptive Input-Output Feedback Linearization (AIOFBL) control is proposed, which generates the reference voltages for a three-phase voltage source inverter in the stationary reference frame. Experimental results are presented to show the capability and validity of the proposed control method and are compared with the results of vector control in Multi-Reference Frame (MRF) and of the Pseudo-Vector Control (P-VC) method. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.

  20. Preposition accuracy on a sentence repetition task in school age Spanish–English bilinguals*

    PubMed Central

    TALIANCICH-KLINGER, CASEY L.; BEDORE, LISA M.; PEÑA, ELIZABETH D.

    2018-01-01

    Preposition knowledge is important for academic success. The goal of this project was to examine how different variables such as English input and output, Spanish preposition score, mother education level, and age of English exposure (AoEE) may have played a role in children’s preposition knowledge in English. 148 Spanish–English children between 7;0 and 9;11 produced prepositions in English and Spanish on a sentence repetition task from an experimental version of the Bilingual English Spanish Assessment Middle Extension (Peña, Bedore, Gutierrez-Clellen, Iglesias & Goldstein, in development). English input and output accounted for most of the variance in English preposition score. The importance of language-specific experiences in the development of prepositions is discussed. Competition for selection of appropriate prepositions in English and Spanish is discussed as potentially influencing low overall preposition scores in English and Spanish. PMID:28506324

  1. Molecular descriptor subset selection in theoretical peptide quantitative structure-retention relationship model development using nature-inspired optimization algorithms.

    PubMed

    Žuvela, Petar; Liu, J Jay; Macur, Katarzyna; Bączek, Tomasz

    2015-10-06

    In this work, the performance of five nature-inspired optimization algorithms, genetic algorithm (GA), particle swarm optimization (PSO), artificial bee colony (ABC), firefly algorithm (FA), and flower pollination algorithm (FPA), was compared in molecular descriptor selection for the development of quantitative structure-retention relationship (QSRR) models for 83 peptides that originate from eight model proteins. A matrix of 423 descriptors was used as input, and QSRR models based on the selected descriptors were built using partial least squares (PLS), with the root mean square error of prediction (RMSEP) used as the fitness function for descriptor selection. Three performance criteria, prediction accuracy, computational cost, and the number of selected descriptors, were used to evaluate the developed QSRR models. The results show that all five variable selection methods outperform interval PLS (iPLS), sparse PLS (sPLS), and the full PLS model, with GA superior owing to its lowest computational cost and higher accuracy (RMSEP of 5.534%) with a smaller number of variables (nine descriptors). The GA-QSRR model was validated initially through Y-randomization. In addition, it was successfully validated with an external testing set of 102 peptides originating from Bacillus subtilis proteomes (RMSEP of 22.030%). Its applicability domain was defined, from which it was evident that the developed GA-QSRR exhibited strong robustness. All sources of the model's error were identified, thus allowing for further application of the developed methodology in proteomics.
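
    The wrapper loop common to all five optimizers is: encode a descriptor subset as a bit mask, score it by the cross-validated RMSEP of a PLS model, and evolve the masks. A bare-bones GA version on synthetic data (population size, rates and the 2-component PLS are illustrative choices, not the paper's settings):

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)
n, p = 83, 40                            # peptides x (reduced) descriptor count
X = rng.normal(size=(n, p))
y = X[:, :5] @ rng.normal(size=5) + rng.normal(0, 0.5, n)   # retention proxy

def rmsep(mask):
    if mask.sum() < 2:
        return np.inf                    # PLS needs at least 2 descriptors here
    pred = cross_val_predict(PLSRegression(n_components=2), X[:, mask], y, cv=5)
    return float(np.sqrt(np.mean((pred.ravel() - y) ** 2)))

# Bare-bones GA: elitism, uniform crossover, bit-flip mutation.
pop = rng.random((30, p)) < 0.2
for gen in range(25):
    order = np.argsort([rmsep(m) for m in pop])
    elite = pop[order[:10]]
    children = []
    while len(children) < len(pop) - len(elite):
        a, b = elite[rng.integers(10, size=2)]
        child = np.where(rng.random(p) < 0.5, a, b)   # uniform crossover
        child ^= rng.random(p) < 0.02                 # bit-flip mutation
        children.append(child)
    pop = np.vstack([elite] + children)
best = pop[np.argmin([rmsep(m) for m in pop])]
print("selected descriptors:", np.flatnonzero(best),
      " RMSEP:", round(rmsep(best), 3))
```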

  2. Effects of input uncertainty on cross-scale crop modeling

    NASA Astrophysics Data System (ADS)

    Waha, Katharina; Huth, Neil; Carberry, Peter

    2014-05-01

    The quality of data on climate, soils and agricultural management in the tropics is in general low, or data are scarce, leading to uncertainty in process-based modeling of cropping systems. Process-based crop models are common tools for simulating crop yields and crop production in climate change impact studies, studies on mitigation and adaptation options or food security studies. Crop modelers are concerned about input data accuracy as this, together with an adequate representation of plant physiology processes and choice of model parameters, are the key factors for a reliable simulation. For example, assuming an error in measurements of air temperature, radiation and precipitation of ± 0.2°C, ± 2 % and ± 3 % respectively, Fodor & Kovacs (2005) estimate that this translates into an uncertainty of 5-7 % in yield and biomass simulations. In our study we seek to answer the following questions: (1) are there important uncertainties in the spatial variability of simulated crop yields on the grid-cell level displayed on maps, (2) are there important uncertainties in the temporal variability of simulated crop yields on the aggregated, national level displayed in time series, and (3) how does the accuracy of different soil, climate and management information influence the simulated crop yields in two crop models designed for use at different spatial scales? The study will help to determine whether more detailed information improves the simulations and to advise model users on the uncertainty related to input data. We analyse the performance of the point-scale crop model APSIM (Keating et al., 2003) and the global-scale crop model LPJmL (Bondeau et al., 2007) with different climate information (monthly and daily) and soil conditions (global soil map and African soil map) under different agricultural management (uniform and variable sowing dates) for the low-input maize-growing areas in Burkina Faso/West Africa. We test the models' response to different levels of input data from very little to very detailed information, and compare the models' abilities to represent the spatial and temporal variability in crop yields. We display the uncertainty in crop yield simulations from different input data and crop models in Taylor diagrams, which are a graphical summary of the similarity between simulations and observations (Taylor, 2001). The observed spatial variability can be represented well by both models (R=0.6-0.8), but APSIM predicts higher spatial variability than LPJmL due to its sensitivity to soil parameters. Simulations with the same crop model, climate and sowing dates have similar statistics and therefore similar skill in reproducing the observed spatial variability. Soil data are less important for the skill of a crop model in reproducing the observed spatial variability. However, the uncertainty in simulated spatial variability from the two crop models is larger than that from input data settings, and APSIM is more sensitive to input data than LPJmL. Even with a detailed, point-scale crop model and detailed input data it is difficult to capture the complexity and diversity in maize cropping systems.

  3. Do downscaled general circulation models reliably simulate historical climatic conditions?

    USGS Publications Warehouse

    Bock, Andrew R.; Hay, Lauren E.; McCabe, Gregory J.; Markstrom, Steven L.; Atkinson, R. Dwight

    2018-01-01

    The accuracy of statistically downscaled (SD) general circulation model (GCM) simulations of monthly surface climate for historical conditions (1950–2005) was assessed for the conterminous United States (CONUS). The SD monthly precipitation (PPT) and temperature (TAVE) from 95 GCMs from phases 3 and 5 of the Coupled Model Intercomparison Project (CMIP3 and CMIP5) were used as inputs to a monthly water balance model (MWBM). Distributions of MWBM input (PPT and TAVE) and output [runoff (RUN)] variables derived from gridded station data (GSD) and historical SD climate were compared using the Kolmogorov–Smirnov (KS) test. For all three variables considered, the KS test results showed that variables simulated using CMIP5 generally are more reliable than those derived from CMIP3, likely due to improvements in PPT simulations. At most locations across the CONUS, the largest differences between GSD and SD PPT and RUN occurred in the lowest part of the distributions (i.e., low-flow RUN and low-magnitude PPT). Results indicate that for the majority of the CONUS, there are downscaled GCMs that can reliably simulate historical climatic conditions. But, in some geographic locations, none of the SD GCMs replicated historical conditions for two of the three variables (PPT and RUN) based on the KS test, with a significance level of 0.05. In these locations, improved GCM simulations of PPT are needed to more reliably estimate components of the hydrologic cycle. Simple metrics and statistical tests, such as those described here, can provide an initial set of criteria to help simplify GCM selection.
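
    As a toy illustration of the screening step, the snippet below applies a two-sample Kolmogorov–Smirnov test to compare a synthetic downscaled series against a synthetic station-based series for one variable at one grid cell; the gamma-distributed series are invented stand-ins for real PPT data.

      # Sketch of the KS-based reliability screen with synthetic monthly precipitation.
      import numpy as np
      from scipy.stats import ks_2samp

      rng = np.random.default_rng(4)
      gsd_ppt = rng.gamma(shape=2.0, scale=40.0, size=672)   # synthetic 1950-2005 station-based PPT
      sd_ppt = rng.gamma(shape=2.1, scale=38.0, size=672)    # synthetic downscaled-GCM counterpart

      stat, p = ks_2samp(gsd_ppt, sd_ppt)
      # a GCM is retained when the two distributions are not significantly different at 0.05
      print("KS statistic:", round(stat, 3), "passes at 0.05:", p >= 0.05)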

  4. Relation between selected water-quality variables and lake level in Upper Klamath and Agency Lakes, Oregon

    USGS Publications Warehouse

    Wood, Tamara M.; Fuhrer, Gregory J.; Morace, Jennifer L.

    1996-01-01

    Based on the analysis of data that they have been collecting for several years, the Klamath Tribes recently recommended that the Bureau of Reclamation (Reclamation) modify the operating plan for the dam to make the minimum lake levels for the June-August period more closely resemble pre-dam conditions (Jacob Kann, written commun., 1995). The U.S. Geological Survey (USGS) was asked to analyze the available data for the lake and to assess whether the evidence exists to conclude that year-to-year differences in certain lake water-quality variables are related to year-to-year differences in lake level. The results of the analysis will be used as scientific input in the process of developing an operating plan for the Link River Dam.

  5. Interdicting an Adversary’s Economy Viewed As a Trade Sanction Inoperability Input Output Model

    DTIC Science & Technology

    2017-03-01

    set of sectors. The design of an economic sanction, in the context of this thesis, is the selection of the sector or set of sectors to sanction... We propose two optimization models. The first, the Trade Sanction Inoperability Input-output Model (TS-IIM), selects the sector or set of sectors that... Interdependency analysis: Extensions to demand reduction inoperability input-output modeling and portfolio selection. Unpublished doctoral dissertation

  6. Prediction of enzyme activity with neural network models based on electronic and geometrical features of substrates.

    PubMed

    Szaleniec, Maciej

    2012-01-01

    Artificial Neural Networks (ANNs) are introduced as robust and versatile tools in quantitative structure-activity relationship (QSAR) modeling. Their application to the modeling of enzyme reactivity is discussed, along with methodological issues. Methods of input variable selection, optimization of network internal structure, data set division and model validation are discussed. The application of ANNs in the modeling of enzyme activity over the last 20 years is briefly recounted. The discussed methodology is exemplified by the case of ethylbenzene dehydrogenase (EBDH). Intelligent Problem Solver and genetic algorithms are applied for input vector selection, whereas k-means clustering is used to partition the data into training and test cases. The obtained models exhibit high correlation between the predicted and experimental values (R(2) > 0.9). Sensitivity analyses and study of the response curves are used as tools for the physicochemical interpretation of the models in terms of the EBDH reaction mechanism. Neural networks are shown to be a versatile tool for the construction of robust QSAR models that can be applied to a range of aspects important in drug design and the prediction of biological activity.
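
    A minimal sketch of the data set division step mentioned above, using k-means clustering so that training and test cases both span the descriptor space; the data and cluster count are synthetic placeholders rather than the EBDH substrate set.

      # Sketch: k-means partitioning of cases into training and test sets.
      import numpy as np
      from sklearn.cluster import KMeans

      rng = np.random.default_rng(1)
      X = rng.normal(size=(60, 8))               # hypothetical substrate descriptor matrix
      km = KMeans(n_clusters=10, n_init=10, random_state=1).fit(X)

      train_idx, test_idx = [], []
      for c in range(km.n_clusters):
          members = np.flatnonzero(km.labels_ == c)
          d = np.linalg.norm(X[members] - km.cluster_centers_[c], axis=1)
          i_test = members[np.argmin(d)]         # sample nearest the centroid becomes a test case
          test_idx.append(i_test)
          train_idx.extend(members[members != i_test])

      print(len(train_idx), "training cases,", len(test_idx), "test cases")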

  7. Statistics of optimal information flow in ensembles of regulatory motifs

    NASA Astrophysics Data System (ADS)

    Crisanti, Andrea; De Martino, Andrea; Fiorentino, Jonathan

    2018-02-01

    Genetic regulatory circuits universally cope with different sources of noise that limit their ability to coordinate input and output signals. In many cases, optimal regulatory performance can be thought to correspond to configurations of variables and parameters that maximize the mutual information between inputs and outputs. Since the mid-2000s, such optima have been well characterized in several biologically relevant cases. Here we use methods of statistical field theory to calculate the statistics of the maximal mutual information (the "capacity") achievable by tuning the input variable only in an ensemble of regulatory motifs, such that a single controller regulates N targets. Assuming (i) sufficiently large N , (ii) quenched random kinetic parameters, and (iii) small noise affecting the input-output channels, we can accurately reproduce numerical simulations both for the mean capacity and for the whole distribution. Our results provide insight into the inherent variability in effectiveness occurring in regulatory systems with heterogeneous kinetic parameters.
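
    For a small discrete toy channel, the capacity referred to above can be computed numerically by tuning the input distribution with the Blahut-Arimoto iteration; this sketch is illustrative only and is unrelated to the field-theoretic calculation in the paper.

      # Sketch: Blahut-Arimoto iteration for the capacity of a two-state noisy channel.
      import numpy as np

      P = np.array([[0.9, 0.1],      # P[y|x]: rows are inputs, columns are outputs
                    [0.2, 0.8]])
      p = np.full(2, 0.5)            # initial input distribution

      for _ in range(200):
          q = p @ P                                   # output marginal
          # per-input exponentiated KL divergence D(P[x] || q)
          d = np.exp(np.sum(P * np.log(P / q), axis=1))
          p = p * d / np.sum(p * d)                   # multiplicative update of the input law

      q = p @ P
      capacity = np.sum(p[:, None] * P * np.log2(P / q))
      print("optimal input distribution:", p, "capacity (bits):", capacity)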

  8. Missing pulse detector for a variable frequency source

    DOEpatents

    Ingram, Charles B.; Lawhorn, John H.

    1979-01-01

    A missing pulse detector is provided which has the capability of monitoring a varying frequency pulse source to detect the loss of a single pulse or total loss of signal from the source. A frequency-to-current converter is used to program the output pulse width of a variable period retriggerable one-shot to maintain a pulse width slightly longer than one-half the present monitored pulse period. The retriggerable one-shot is triggered at twice the input pulse rate by employing a frequency doubler circuit connected between the one-shot input and the variable frequency source being monitored. The one-shot remains in the triggered or unstable state under normal conditions even though the source period is varying. A loss of an input pulse or single period of a fluctuating signal input will cause the one-shot to revert to its stable state, changing the output signal level to indicate a missing pulse or signal.

  9. Input and language development in bilingually developing children.

    PubMed

    Hoff, Erika; Core, Cynthia

    2013-11-01

    Language skills in young bilingual children are highly varied as a result of the variability in their language experiences, making it difficult for speech-language pathologists to differentiate language disorder from language difference in bilingual children. Understanding the sources of variability in bilingual contexts and the resulting variability in children's skills will help improve language assessment practices by speech-language pathologists. In this article, we review literature on bilingual first language development for children under 5 years of age. We describe the rate of development in single and total language growth, we describe effects of quantity of input and quality of input on growth, and we describe effects of family composition on language input and language growth in bilingual children. We provide recommendations for language assessment of young bilingual children and consider implications for optimizing children's dual language development.

  10. Effect of plasma arc welding variables on fusion zone grain size and hardness of AISI 321 austenitic stainless steel

    NASA Astrophysics Data System (ADS)

    Kondapalli, S. P.

    2017-12-01

    In the present work, pulsed-current microplasma arc welding is carried out on AISI 321 austenitic stainless steel of 0.3 mm thickness. Peak current, base current, pulse rate, and pulse width are chosen as the input variables, whereas grain size and hardness are considered as the output responses. A response surface method based on a Box-Behnken design is adopted, and 27 experiments are performed in total. An empirical relation between the input variables and each output response is developed using statistical software and checked for adequacy with analysis of variance (ANOVA) at the 95% confidence level. The main and interaction effects of the input variables on the output responses are also studied.
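
    The response-surface step can be sketched as fitting a full quadratic model to the coded factors; the runs and grain-size values below are simulated placeholders rather than the actual Box-Behnken array and measurements.

      # Sketch: quadratic response-surface fit over four coded welding inputs.
      import numpy as np
      from sklearn.linear_model import LinearRegression
      from sklearn.preprocessing import PolynomialFeatures

      rng = np.random.default_rng(2)
      # 27 hypothetical runs in coded units for peak current, base current, pulse rate, pulse width
      X = rng.choice([-1.0, 0.0, 1.0], size=(27, 4))
      grain = 40 + 3 * X[:, 0] - 2 * X[:, 1] + 1.5 * X[:, 0] * X[:, 2] \
              + rng.normal(scale=0.5, size=27)

      quad = PolynomialFeatures(degree=2, include_bias=False)
      model = LinearRegression().fit(quad.fit_transform(X), grain)
      for name, coef in zip(quad.get_feature_names_out(["Ip", "Ib", "rate", "width"]), model.coef_):
          print(f"{name:>12s}: {coef:+.2f}")   # main effects and interactions, as examined by ANOVA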

  11. Multifunction Imaging and Spectroscopic Instrument

    NASA Technical Reports Server (NTRS)

    Mouroulis, Pantazis

    2004-01-01

    A proposed optoelectronic instrument would perform several different spectroscopic and imaging functions that, heretofore, have been performed by separate instruments. The functions would be reflectance, fluorescence, and Raman spectroscopies; variable-color confocal imaging at two different resolutions; and wide-field color imaging. The instrument was conceived for use in examination of minerals on remote planets. It could also be used on Earth to characterize material specimens. The conceptual design of the instrument emphasizes compactness and economy, to be achieved largely through sharing of components among subsystems that perform different imaging and spectrometric functions. The input optics for the various functions would be mounted in a single optical head. With the exception of a targeting lens, the input optics would all be aimed at the same spot on a specimen, thereby both (1) eliminating the need to reposition the specimen to perform different imaging and/or spectroscopic observations and (2) ensuring that data from such observations can be correlated with respect to known positions on the specimen. The figure schematically depicts the principal components and subsystems of the instrument. The targeting lens would collect light into a multimode optical fiber, which would guide the light through a fiber-selection switch to a reflection/fluorescence spectrometer. The switch would have four positions, enabling selection of spectrometer input from the targeting lens, from either of one or two multimode optical fibers coming from a reflectance/fluorescence-microspectrometer optical head, or from a dark calibration position (no fiber). The switch would be the only moving part within the instrument.

  12. Ventricular repolarization variability for hypoglycemia detection.

    PubMed

    Ling, Steve; Nguyen, H T

    2011-01-01

    Hypoglycemia is the most acute and common complication of Type 1 diabetes and is a limiting factor in a glycemic management of diabetes. In this paper, two main contributions are presented; firstly, ventricular repolarization variabilities are introduced for hypoglycemia detection, and secondly, a swarm-based support vector machine (SVM) algorithm with the inputs of the repolarization variabilities is developed to detect hypoglycemia. By using the algorithm and including several repolarization variabilities as inputs, the best hypoglycemia detection performance is found with sensitivity and specificity of 82.14% and 60.19%, respectively.

  13. The Role of Learner and Input Variables in Learning Inflectional Morphology

    ERIC Educational Resources Information Center

    Brooks, Patricia J.; Kempe, Vera; Sionov, Ariel

    2006-01-01

    To examine effects of input and learner characteristics on morphology acquisition, 60 adult English speakers learned to inflect masculine and feminine Russian nouns in nominative, dative, and genitive cases. By varying training vocabulary size (i.e., type variability), holding constant the number of learning trials, we tested whether learners…

  14. Wideband low-noise variable-gain BiCMOS transimpedance amplifier

    NASA Astrophysics Data System (ADS)

    Meyer, Robert G.; Mack, William D.

    1994-06-01

    A new monolithic variable-gain transimpedance amplifier is described. The circuit is realized in BiCMOS technology and has a measured gain of 98 kΩ, a bandwidth of 128 MHz, an input noise current spectral density of 1.17 pA/√Hz, and an input signal-current handling capability of 3 mA.

  15. Integrated controls design optimization

    DOEpatents

    Lou, Xinsheng; Neuschaefer, Carl H.

    2015-09-01

    A control system (207) for optimizing a chemical looping process of a power plant includes an optimizer (420), an income algorithm (230), a cost algorithm (225), and chemical looping process models. The process models are used to predict the process outputs from the process input variables. Some of the process input and output variables are related to the income of the plant, while others are related to the cost of plant operations. The income algorithm (230) provides an income input to the optimizer (420) based on a plurality of input parameters (215) of the power plant. The cost algorithm (225) provides a cost input to the optimizer (420) based on a plurality of output parameters (220) of the power plant. The optimizer (420) determines an optimized operating parameter solution based on at least one of the income input and the cost input, and supplies the optimized operating parameter solution to the power plant.
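
    A toy sketch of the optimizer's role: maximize net value, income minus cost, over a single operating variable. The income and cost curves are invented stand-ins for the patented process models and algorithms.

      # Sketch: one-dimensional income-minus-cost optimization of an operating point.
      import numpy as np
      from scipy.optimize import minimize_scalar

      def income(load):          # hypothetical: revenue rises with load but saturates
          return 120.0 * (1.0 - np.exp(-2.0 * load))

      def cost(load):            # hypothetical: operating cost grows quadratically with load
          return 30.0 * load + 45.0 * load ** 2

      res = minimize_scalar(lambda x: -(income(x) - cost(x)), bounds=(0.0, 1.0), method="bounded")
      print("optimized operating point:", round(res.x, 3), "net value:", round(-res.fun, 2))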

  16. Retinal Origin of Direction Selectivity in the Superior Colliculus

    PubMed Central

    Shi, Xuefeng; Barchini, Jad; Ledesma, Hector Acaron; Koren, David; Jin, Yanjiao; Liu, Xiaorong; Wei, Wei; Cang, Jianhua

    2017-01-01

    Detecting visual features in the environment such as motion direction is crucial for survival. The circuit mechanisms that give rise to direction selectivity in a major visual center, the superior colliculus (SC), are entirely unknown. Here, we optogenetically isolate the retinal inputs that individual direction-selective SC neurons receive and find that they are already selective as a result of precisely converging inputs from similarly-tuned retinal ganglion cells. The direction selective retinal input is linearly amplified by the intracollicular circuits without changing its preferred direction or level of selectivity. Finally, using 2-photon calcium imaging, we show that SC direction selectivity is dramatically reduced in transgenic mice that have decreased retinal selectivity. Together, our studies demonstrate a retinal origin of direction selectivity in the SC, and reveal a central visual deficit as a consequence of altered feature selectivity in the retina. PMID:28192394

  17. Use of neural networks to model complex immunogenetic associations of disease: human leukocyte antigen impact on the progression of human immunodeficiency virus infection.

    PubMed

    Ioannidis, J P; McQueen, P G; Goedert, J J; Kaslow, R A

    1998-03-01

    Complex immunogenetic associations of disease involving a large number of gene products are difficult to evaluate with traditional statistical methods and may require complex modeling. The authors evaluated the performance of feed-forward backpropagation neural networks in predicting rapid progression to acquired immunodeficiency syndrome (AIDS) for patients with human immunodeficiency virus (HIV) infection on the basis of major histocompatibility complex variables. Networks were trained on data from patients from the Multicenter AIDS Cohort Study (n = 139) and then validated on patients from the DC Gay cohort (n = 102). The outcome of interest was rapid disease progression, defined as progression to AIDS in <6 years from seroconversion. Human leukocyte antigen (HLA) variables were selected as network inputs with multivariate regression and a previously described algorithm selecting markers with extreme point estimates for progression risk. Network performance was compared with that of logistic regression. Networks with 15 HLA inputs and a single hidden layer of five nodes achieved a sensitivity of 87.5% and specificity of 95.6% in the training set, vs. 77.0% and 76.9%, respectively, achieved by logistic regression. When validated on the DC Gay cohort, networks averaged a sensitivity of 59.1% and specificity of 74.3%, vs. 53.1% and 61.4%, respectively, for logistic regression. Neural networks offer further support to the notion that HIV disease progression may be dependent on complex interactions between different class I and class II alleles and transporters associated with antigen processing variants. The effect in the current models is of moderate magnitude, and more data as well as other host and pathogen variables may need to be considered to improve the performance of the models. Artificial intelligence methods may complement linear statistical methods for evaluating immunogenetic associations of disease.
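
    The comparison can be sketched schematically as below: a feed-forward network with one hidden layer of five nodes versus logistic regression on 15 binary marker inputs, scored by sensitivity and specificity. The data are simulated with interaction effects; no real cohort data are reproduced here.

      # Sketch: small MLP vs. logistic regression on synthetic binary marker data.
      import numpy as np
      from sklearn.neural_network import MLPClassifier
      from sklearn.linear_model import LogisticRegression
      from sklearn.model_selection import train_test_split
      from sklearn.metrics import confusion_matrix

      rng = np.random.default_rng(5)
      X = rng.integers(0, 2, size=(241, 15)).astype(float)   # hypothetical 15 HLA marker indicators
      logit = X[:, 0] * X[:, 1] - X[:, 2] * X[:, 3] + 0.5 * X[:, 4]   # interaction-driven risk
      y = (logit + rng.normal(scale=0.5, size=241) > 0.5).astype(int) # 1 = rapid progression

      X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.4, random_state=5)

      def sens_spec(y_true, y_pred):
          tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
          return tp / (tp + fn), tn / (tn + fp)

      mlp = MLPClassifier(hidden_layer_sizes=(5,), max_iter=3000, random_state=5).fit(X_tr, y_tr)
      lr = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
      print("network sens/spec:", sens_spec(y_te, mlp.predict(X_te)))
      print("logistic sens/spec:", sens_spec(y_te, lr.predict(X_te)))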

  18. A Framework for Orbital Performance Evaluation in Distributed Space Missions for Earth Observation

    NASA Technical Reports Server (NTRS)

    Nag, Sreeja; LeMoigne-Stewart, Jacqueline; Miller, David W.; de Weck, Olivier

    2015-01-01

    Distributed Space Missions (DSMs) are gaining momentum in their application to earth science missions owing to their unique ability to increase observation sampling in spatial, spectral and temporal dimensions simultaneously. DSM architectures have a large number of design variables and since they are expected to increase mission flexibility, scalability, evolvability and robustness, their design is a complex problem with many variables and objectives affecting performance. There are very few open-access tools available to explore the tradespace of variables which allow performance assessment and are easy to plug into science goals, and therefore select the most optimal design. This paper presents a software tool developed on the MATLAB engine interfacing with STK, for DSM orbit design and selection. It is capable of generating thousands of homogeneous constellation or formation flight architectures based on pre-defined design variable ranges and sizing those architectures in terms of predefined performance metrics. The metrics can be input into observing system simulation experiments, as available from the science teams, allowing dynamic coupling of science and engineering designs. Design variables include but are not restricted to constellation type, formation flight type, FOV of instrument, altitude and inclination of chief orbits, differential orbital elements, leader satellites, latitudes or regions of interest, planes and satellite numbers. Intermediate performance metrics include angular coverage, number of accesses, revisit coverage, access deterioration over time at every point of the Earth's grid. The orbit design process can be streamlined, with variables progressively bounded along the way, owing to the availability of models ranging from low-fidelity, low-complexity corrected HCW equations up to high-precision STK models with J2 and drag. The tool can thus help any scientist or program manager select pre-Phase A, Pareto optimal DSM designs for a variety of science goals without having to delve into the details of the engineering design process.

  19. Estimating stand structure using discrete-return lidar: an example from low density, fire prone ponderosa pine forests

    USGS Publications Warehouse

    Hall, S. A.; Burke, I.C.; Box, D. O.; Kaufmann, M. R.; Stoker, Jason M.

    2005-01-01

    The ponderosa pine forests of the Colorado Front Range, USA, have historically been subjected to wildfires. Recent large burns have increased public interest in fire behavior and effects, and scientific interest in the carbon consequences of wildfires. Remote sensing techniques can provide spatially explicit estimates of stand structural characteristics. Some of these characteristics can be used as inputs to fire behavior models, increasing our understanding of the effect of fuels on fire behavior. Others provide estimates of carbon stocks, allowing us to quantify the carbon consequences of fire. Our objective was to use discrete-return lidar to estimate such variables, including stand height, total aboveground biomass, foliage biomass, basal area, tree density, canopy base height and canopy bulk density. We developed 39 metrics from the lidar data, and used them in limited combinations in regression models, which we fit to field estimates of the stand structural variables. We used an information–theoretic approach to select the best model for each variable, and to select the subset of lidar metrics with most predictive potential. Observed versus predicted values of stand structure variables were highly correlated, with r2 ranging from 57% to 87%. The most parsimonious linear models for the biomass structure variables, based on a restricted dataset, explained between 35% and 58% of the observed variability. Our results provide us with useful estimates of stand height, total aboveground biomass, foliage biomass and basal area. There is promise for using this sensor to estimate tree density, canopy base height and canopy bulk density, though more research is needed to generate robust relationships. We selected 14 lidar metrics that showed the most potential as predictors of stand structure. We suggest that the focus of future lidar studies should broaden to include low density forests, particularly systems where the vertical structure of the canopy is important, such as fire prone forests.
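
    The information-theoretic selection step can be condensed to the following sketch: exhaustively score small subsets of candidate metrics by AIC and keep the most parsimonious regression. Six synthetic metrics stand in for the 39 lidar metrics.

      # Sketch: AIC-based subset selection over candidate regression predictors.
      import itertools
      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(6)
      metrics = rng.normal(size=(80, 6))                    # stand-in for the lidar metrics
      biomass = 10 + 4 * metrics[:, 0] + 2 * metrics[:, 3] + rng.normal(scale=1.0, size=80)

      best = (np.inf, None)
      for k in (1, 2, 3):                                   # consider subsets of up to three metrics
          for cols in itertools.combinations(range(metrics.shape[1]), k):
              model = sm.OLS(biomass, sm.add_constant(metrics[:, cols])).fit()
              if model.aic < best[0]:
                  best = (model.aic, cols)
      print("lowest-AIC metric subset:", best[1], "AIC:", round(best[0], 1))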

  20. [Prediction of regional soil quality based on mutual information theory integrated with decision tree algorithm].

    PubMed

    Lin, Fen-Fang; Wang, Ke; Yang, Ning; Yan, Shi-Guang; Zheng, Xin-Yu

    2012-02-01

    In this paper, to precisely characterize the spatial distribution of regional soil quality, main factors affecting soil quality such as soil type, land use pattern, lithology type, topography, road, and industry type were considered; mutual information theory was adopted to select the main environmental factors, and the decision tree algorithm See5.0 was applied to predict the grade of regional soil quality. The main factors affecting regional soil quality were soil type, land use, lithology type, distance to town, distance to water area, altitude, distance to road, and distance to industrial land. The prediction accuracy of the decision tree model with the variables selected by mutual information was obviously higher than that of the model with all variables, and, for the former model, whether of decision tree or of decision rule, its prediction accuracy was higher than 80%. Based on the continuous and categorical data, the method of mutual information theory integrated with decision tree could not only reduce the number of input parameters for the decision tree algorithm, but also predict and assess regional soil quality effectively.
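
    A brief sketch of the two-stage scheme on synthetic data: rank candidate environmental factors by mutual information with the soil-quality grade, then train a decision tree on the top-ranked factors (scikit-learn's CART stands in here for See5.0).

      # Sketch: mutual-information feature ranking feeding a decision tree classifier.
      import numpy as np
      from sklearn.feature_selection import mutual_info_classif
      from sklearn.tree import DecisionTreeClassifier
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(7)
      X = rng.normal(size=(500, 8))           # e.g. altitude, distances to road/town/water, ...
      grade = (X[:, 0] + X[:, 2] > 0).astype(int) + (X[:, 4] > 1)   # synthetic 3-grade quality

      mi = mutual_info_classif(X, grade, random_state=7)
      top = np.argsort(mi)[::-1][:4]          # keep the four most informative factors
      tree = DecisionTreeClassifier(max_depth=5, random_state=7)
      print("selected factors:", top)
      print("CV accuracy, selected factors:", cross_val_score(tree, X[:, top], grade, cv=5).mean())
      print("CV accuracy, all factors:", cross_val_score(tree, X, grade, cv=5).mean())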

  1. Method and apparatus for manufacturing gas tags

    DOEpatents

    Gross, K.C.; Laug, M.T.

    1996-12-17

    For use in the manufacture of gas tags employed in a gas tagging failure detection system for a nuclear reactor, a plurality of commercial feed gases each having a respective noble gas isotopic composition are blended under computer control to provide various tag gas mixtures having selected isotopic ratios which are optimized for specified defined conditions such as cost. Using a new approach employing a discrete variable structure rather than the known continuous-variable optimization problem, the computer controlled gas tag manufacturing process employs an analytical formalism from condensed matter physics known as stochastic relaxation, which is a special case of simulated annealing, for input feed gas selection. For a tag blending process involving M tag isotopes with N distinct feed gas mixtures commercially available from an enriched gas supplier, the manufacturing process calculates the cost difference between multiple combinations and specifies gas mixtures which approach the optimum defined conditions. The manufacturing process is then used to control tag blending apparatus incorporating tag gas canisters connected by stainless-steel tubing with computer controlled valves, with the canisters automatically filled with metered quantities of the required feed gases. 4 figs.

  2. Method and apparatus for manufacturing gas tags

    DOEpatents

    Gross, Kenny C.; Laug, Matthew T.

    1996-01-01

    For use in the manufacture of gas tags employed in a gas tagging failure detection system for a nuclear reactor, a plurality of commercial feed gases each having a respective noble gas isotopic composition are blended under computer control to provide various tag gas mixtures having selected isotopic ratios which are optimized for specified defined conditions such as cost. Using a new approach employing a discrete variable structure rather than the known continuous-variable optimization problem, the computer controlled gas tag manufacturing process employs an analytical formalism from condensed matter physics known as stochastic relaxation, which is a special case of simulated annealing, for input feed gas selection. For a tag blending process involving M tag isotopes with N distinct feed gas mixtures commercially available from an enriched gas supplier, the manufacturing process calculates the cost difference between multiple combinations and specifies gas mixtures which approach the optimum defined conditions. The manufacturing process is then used to control tag blending apparatus incorporating tag gas canisters connected by stainless-steel tubing with computer controlled valves, with the canisters automatically filled with metered quantities of the required feed gases.

  3. Partial Granger causality--eliminating exogenous inputs and latent variables.

    PubMed

    Guo, Shuixia; Seth, Anil K; Kendrick, Keith M; Zhou, Cong; Feng, Jianfeng

    2008-07-15

    Attempts to identify causal interactions in multivariable biological time series (e.g., gene data, protein data, physiological data) can be undermined by the confounding influence of environmental (exogenous) inputs. Compounding this problem, we are commonly only able to record a subset of all related variables in a system. These recorded variables are likely to be influenced by unrecorded (latent) variables. To address this problem, we introduce a novel variant of a widely used statistical measure of causality--Granger causality--that is inspired by the definition of partial correlation. Our 'partial Granger causality' measure is extensively tested with toy models, both linear and nonlinear, and is applied to experimental data: in vivo multielectrode array (MEA) local field potentials (LFPs) recorded from the inferotemporal cortex of sheep. Our results demonstrate that partial Granger causality can reveal the underlying interactions among elements in a network in the presence of exogenous inputs and latent variables in many cases where the existing conditional Granger causality fails.
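
    As a baseline sketch, the snippet below runs ordinary Granger-causality F-tests with statsmodels on a simulated pair in which x drives y at lag 1. Note that the partial variant proposed in the paper, which additionally factors out exogenous and latent influences, is not available in statsmodels out of the box.

      # Sketch: standard Granger causality test on a simulated driver-response pair.
      import numpy as np
      import pandas as pd
      from statsmodels.tsa.stattools import grangercausalitytests

      rng = np.random.default_rng(9)
      n = 500
      x = np.zeros(n)
      y = np.zeros(n)
      for t in range(1, n):
          x[t] = 0.5 * x[t - 1] + rng.normal(scale=1.0)
          y[t] = 0.4 * y[t - 1] + 0.6 * x[t - 1] + rng.normal(scale=1.0)  # x drives y at lag 1

      data = pd.DataFrame({"y": y, "x": x})
      res = grangercausalitytests(data[["y", "x"]], maxlag=2, verbose=False)
      print("lag-1 F-test p-value (x -> y):", res[1][0]["ssr_ftest"][1])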

  4. Development of a Robust Identifier for NPPs Transients Combining ARIMA Model and EBP Algorithm

    NASA Astrophysics Data System (ADS)

    Moshkbar-Bakhshayesh, Khalil; Ghofrani, Mohammad B.

    2014-08-01

    This study introduces a novel identification method for the recognition of nuclear power plant (NPP) transients that combines the autoregressive integrated moving-average (ARIMA) model and a neural network with the error backpropagation (EBP) learning algorithm. The proposed method consists of three steps. First, an EBP-based identifier is adopted to distinguish the plant normal states from the faulty ones. In the second step, ARIMA models use the integrated (I) process to convert non-stationary data of the selected variables into stationary ones. Subsequently, ARIMA processes, including autoregressive (AR), moving-average (MA), or autoregressive moving-average (ARMA), are used to forecast the time series of the selected plant variables. In the third step, to identify the type of transient, the forecasted time series are fed to a modular identifier developed using the latest advances in the EBP learning algorithm. Bushehr nuclear power plant (BNPP) transients are probed to analyze the ability of the proposed identifier. Recognition of a transient is based on the similarity of its statistical properties to the reference one, rather than on the values of the input patterns. Greater robustness against noisy data and an improved balance between memorization and generalization are salient advantages of the proposed identifier. Reduction of false identification, sole dependency of identification on the sign of each output signal, selection of the plant variables for transient training independently of each other, and extendibility to the identification of more transients without unfavorable effects are other merits of the proposed identifier.
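
    The second step can be illustrated in miniature: fit an ARIMA model whose integrated part differences a drifting, non-stationary signal, then forecast it forward. The series and the (2, 1, 1) order below are hypothetical.

      # Sketch: ARIMA differencing and forecasting of a non-stationary plant variable.
      import numpy as np
      from statsmodels.tsa.arima.model import ARIMA

      rng = np.random.default_rng(10)
      trend = np.cumsum(rng.normal(loc=0.05, scale=0.2, size=300))   # drifting, non-stationary signal
      series = trend + 0.5 * np.sin(np.arange(300) / 10.0)

      model = ARIMA(series, order=(2, 1, 1)).fit()   # AR(2), first-order differencing, MA(1)
      forecast = model.forecast(steps=10)            # forecasts like these would feed the identifier
      print(np.round(forecast, 3))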

  5. Variability, drivers, and effects of atmospheric nitrogen inputs across an urban area: Emerging patterns among human activities, the atmosphere, and soils.

    PubMed

    Decina, Stephen M; Templer, Pamela H; Hutyra, Lucy R; Gately, Conor K; Rao, Preeti

    2017-12-31

    Atmospheric deposition of nitrogen (N) is a major input of N to the biosphere and is elevated beyond preindustrial levels throughout many ecosystems. Deposition monitoring networks in the United States generally avoid urban areas in order to capture regional patterns of N deposition, and studies measuring N deposition in cities usually include only one or two urban sites in an urban-rural comparison or as an anchor along an urban-to-rural gradient. Describing patterns and drivers of atmospheric N inputs is crucial for understanding the effects of N deposition; however, little is known about the variability and drivers of atmospheric N inputs or their effects on soil biogeochemistry within urban ecosystems. We measured rates of canopy throughfall N as a measure of atmospheric N inputs, as well as soil net N mineralization and nitrification, soil solution N, and soil respiration at 15 sites across the greater Boston, Massachusetts area. Rates of throughfall N are 8.70 ± 0.68 kg N ha⁻¹ yr⁻¹, vary 3.5-fold across sites, and are positively correlated with rates of local vehicle N emissions. Ammonium (NH₄⁺) composes 69.9 ± 2.2% of inorganic throughfall N inputs and is highest in late spring, suggesting a contribution from local fertilizer inputs. Soil solution NO₃⁻ is positively correlated with throughfall NO₃⁻ inputs. In contrast, soil solution NH₄⁺, net N mineralization, nitrification, and soil respiration are not correlated with rates of throughfall N inputs. Rather, these processes are correlated with soil properties such as soil organic matter. Our results demonstrate high variability in rates of urban throughfall N inputs, correlation of throughfall N inputs with local vehicle N emissions, and a decoupling of urban soil biogeochemistry and throughfall N inputs. Copyright © 2017 Elsevier B.V. All rights reserved.

  6. Dynamic modal estimation using instrumental variables

    NASA Technical Reports Server (NTRS)

    Salzwedel, H.

    1980-01-01

    A method to determine the modes of dynamical systems is described. The inputs and outputs of a system are Fourier transformed and averaged to reduce the error level. An instrumental variable method that estimates modal parameters from multiple correlations between responses of single input, multiple output systems is applied to estimate aircraft, spacecraft, and off-shore platform modal parameters.

  7. Urban vs. Rural CLIL: An Analysis of Input-Related Variables, Motivation and Language Attainment

    ERIC Educational Resources Information Center

    Alejo, Rafael; Piquer-Píriz, Ana

    2016-01-01

    The present article carries out an in-depth analysis of the differences in motivation, input-related variables and linguistic attainment of the students at two content and language integrated learning (CLIL) schools operating within the same institutional and educational context, the Spanish region of Extremadura, and differing only in terms of…

  8. Variable Input and the Acquisition of Plural Morphology

    ERIC Educational Resources Information Center

    Miller, Karen L.; Schmitt, Cristina

    2012-01-01

    The present article examines the effect of variable input on the acquisition of plural morphology in two varieties of Spanish: Chilean Spanish, where the plural marker is sometimes omitted due to a phonological process of syllable final /s/ lenition, and Mexican Spanish (of Mexico City), with no such lenition process. The goal of the study is to…

  9. CalSimHydro Tool - A Web-based interactive tool for the CalSim 3.0 Hydrology Preprocessor

    NASA Astrophysics Data System (ADS)

    Li, P.; Stough, T.; Vu, Q.; Granger, S. L.; Jones, D. J.; Ferreira, I.; Chen, Z.

    2011-12-01

    CalSimHydro, the CalSim 3.0 Hydrology Preprocessor, is an application designed to automate the various steps in the computation of hydrologic inputs for CalSim 3.0, a water resources planning model developed jointly by the California State Department of Water Resources and the United States Bureau of Reclamation, Mid-Pacific Region. CalSimHydro consists of a five-step FORTRAN-based program that runs the individual models in succession, passing information from one model to the next and aggregating data as required by each model. The final product of CalSimHydro is an updated CalSim 3.0 state variable (SV) DSS input file. CalSimHydro consists of (1) a Rainfall-Runoff Model to compute monthly infiltration, (2) a soil moisture and demand calculator (IDC) that estimates surface runoff, deep percolation, and water demands for natural vegetation cover and various crops other than rice, (3) a Rice Water Use Model to compute the water demands, deep percolation, irrigation return flow, and runoff from precipitation for the rice fields, (4) a Refuge Water Use Model that simulates the ponding operations for managed wetlands, and (5) a Data Aggregation and Transfer Module to aggregate the outputs from the above modules and transfer them to the CalSim SV input file. In this presentation, we describe a web-based user interface for CalSimHydro using the Google Earth Plug-In. The CalSimHydro tool allows users to (1) interact with geo-referenced layers of the Water Budget Areas (WBA) and Demand Units (DU) displayed over the Sacramento Valley, (2) view the input parameters of the hydrology preprocessor for a selected WBA or DU in a time-series plot or tabular form, (3) edit the values of the input parameters in the table or by downloading a spreadsheet of the selected parameter over a selected time range, (4) run the CalSimHydro modules on the backend server and notify the user when the job is done, (5) visualize the model output and compare it with a base-run result, and (6) download the output SV file to be used to run CalSim 3.0. The CalSimHydro tool streamlines the complicated steps of configuring and running the hydrology preprocessor by providing a user-friendly visual interface and back-end services to validate user inputs and manage the model execution. It is a powerful addition to the new CalSim 3.0 system.

  10. Precision digital pulse phase generator

    DOEpatents

    McEwan, T.E.

    1996-10-08

    A timing generator comprises a crystal oscillator connected to provide an output reference pulse. A resistor-capacitor combination is connected to provide a variable-delay output pulse from an input connected to the crystal oscillator. A phase monitor is connected to provide duty-cycle representations of the reference and variable-delay output pulse phase. An operational amplifier drives a control voltage to the resistor-capacitor combination according to currents integrated from the phase monitor and injected into summing junctions. A digital-to-analog converter injects a control current into the summing junctions according to an input digital control code. A servo equilibrium results that provides a phase delay of the variable-delay output pulse to the output reference pulse that linearly depends on the input digital control code. 2 figs.

  11. Precision digital pulse phase generator

    DOEpatents

    McEwan, Thomas E.

    1996-01-01

    A timing generator comprises a crystal oscillator connected to provide an output reference pulse. A resistor-capacitor combination is connected to provide a variable-delay output pulse from an input connected to the crystal oscillator. A phase monitor is connected to provide duty-cycle representations of the reference and variable-delay output pulse phase. An operational amplifier drives a control voltage to the resistor-capacitor combination according to currents integrated from the phase monitor and injected into summing junctions. A digital-to-analog converter injects a control current into the summing junctions according to an input digital control code. A servo equilibrium results that provides a phase delay of the variable-delay output pulse to the output reference pulse that linearly depends on the input digital control code.

  12. Olfactory Bulb Deep Short-Axon Cells Mediate Widespread Inhibition of Tufted Cell Apical Dendrites.

    PubMed

    Burton, Shawn D; LaRocca, Greg; Liu, Annie; Cheetham, Claire E J; Urban, Nathaniel N

    2017-02-01

    In the main olfactory bulb (MOB), the first station of sensory processing in the olfactory system, GABAergic interneuron signaling shapes principal neuron activity to regulate olfaction. However, a lack of known selective markers for MOB interneurons has strongly impeded cell-type-selective investigation of interneuron function. Here, we identify the first selective marker of glomerular layer-projecting deep short-axon cells (GL-dSACs) and investigate systematically the structure, abundance, intrinsic physiology, feedforward sensory input, neuromodulation, synaptic output, and functional role of GL-dSACs in the mouse MOB circuit. GL-dSACs are located in the internal plexiform layer, where they integrate centrifugal cholinergic input with highly convergent feedforward sensory input. GL-dSAC axons arborize extensively across the glomerular layer to provide highly divergent yet selective output onto interneurons and principal tufted cells. GL-dSACs are thus capable of shifting the balance of principal tufted versus mitral cell activity across large expanses of the MOB in response to diverse sensory and top-down neuromodulatory input. The identification of cell-type-selective molecular markers has fostered tremendous insight into how distinct interneurons shape sensory processing and behavior. In the main olfactory bulb (MOB), inhibitory circuits regulate the activity of principal cells precisely to drive olfactory-guided behavior. However, selective markers for MOB interneurons remain largely unknown, limiting mechanistic understanding of olfaction. Here, we identify the first selective marker of a novel population of deep short-axon cell interneurons with superficial axonal projections to the sensory input layer of the MOB. Using this marker, together with immunohistochemistry, acute slice electrophysiology, and optogenetic circuit mapping, we reveal that this novel interneuron population integrates centrifugal cholinergic input with broadly tuned feedforward sensory input to modulate principal cell activity selectively. Copyright © 2017 the authors 0270-6474/17/371117-22$15.00/0.

  13. Calibration of hydrological models using flow-duration curves

    NASA Astrophysics Data System (ADS)

    Westerberg, I. K.; Guerrero, J.-L.; Younger, P. M.; Beven, K. J.; Seibert, J.; Halldin, S.; Freer, J. E.; Xu, C.-Y.

    2011-07-01

    The degree of belief we have in predictions from hydrologic models will normally depend on how well they can reproduce observations. Calibrations with traditional performance measures, such as the Nash-Sutcliffe model efficiency, are challenged by problems including: (1) uncertain discharge data, (2) variable sensitivity of different performance measures to different flow magnitudes, (3) influence of unknown input/output errors and (4) inability to evaluate model performance when observation time periods for discharge and model input data do not overlap. This paper explores a calibration method using flow-duration curves (FDCs) to address these problems. The method focuses on reproducing the observed discharge frequency distribution rather than the exact hydrograph. It consists of applying limits of acceptability for selected evaluation points (EPs) on the observed uncertain FDC in the extended GLUE approach. Two ways of selecting the EPs were tested - based on equal intervals of discharge and of volume of water. The method was tested and compared to a calibration using the traditional model efficiency for the daily four-parameter WASMOD model in the Paso La Ceiba catchment in Honduras and for Dynamic TOPMODEL evaluated at an hourly time scale for the Brue catchment in Great Britain. The volume method of selecting EPs gave the best results in both catchments with better calibrated slow flow, recession and evaporation than the other criteria. Observed and simulated time series of uncertain discharges agreed better for this method both in calibration and prediction in both catchments. An advantage with the method is that the rejection criterion is based on an estimation of the uncertainty in discharge data and that the EPs of the FDC can be chosen to reflect the aims of the modelling application, e.g. using more/less EPs at high/low flows. While the method appears less sensitive to epistemic input/output errors than previous use of limits of acceptability applied directly to the time series of discharge, it still requires a reasonable representation of the distribution of inputs. Additional constraints might therefore be required in catchments subject to snow and where peak-flow timing at sub-daily time scales is of high importance. The results suggest that the calibration method can be useful when observation time periods for discharge and model input data do not overlap. The method could also be suitable for calibration to regional FDCs while taking uncertainties in the hydrological model and data into account.
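
    A compact sketch of the evaluation idea on synthetic flows: build the observed flow-duration curve, place evaluation points at equal increments of cumulative volume, and count how many simulated quantiles fall inside discharge-uncertainty limits. The flows, the behavioural run, and the ±20% limits are all invented for illustration.

      # Sketch: limits-of-acceptability check at equal-volume evaluation points of an FDC.
      import numpy as np

      rng = np.random.default_rng(11)
      obs = np.sort(rng.lognormal(mean=1.0, sigma=0.8, size=3650))[::-1]  # observed flows, descending
      sim = obs * rng.uniform(0.85, 1.15, size=obs.size)                  # one candidate model run

      cum_vol = np.cumsum(obs) / obs.sum()
      eps = [np.searchsorted(cum_vol, f) for f in np.linspace(0.1, 0.9, 9)]  # equal-volume EPs

      sim_fdc = np.sort(sim)[::-1]
      lower, upper = 0.8 * obs, 1.2 * obs        # stand-in for estimated discharge-data uncertainty
      within = [(lower[i] <= sim_fdc[i] <= upper[i]) for i in eps]
      print("EPs within limits:", sum(within), "of", len(eps))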

  14. Calibration of hydrological models using flow-duration curves

    NASA Astrophysics Data System (ADS)

    Westerberg, I. K.; Guerrero, J.-L.; Younger, P. M.; Beven, K. J.; Seibert, J.; Halldin, S.; Freer, J. E.; Xu, C.-Y.

    2010-12-01

    The degree of belief we have in predictions from hydrologic models depends on how well they can reproduce observations. Calibrations with traditional performance measures such as the Nash-Sutcliffe model efficiency are challenged by problems including: (1) uncertain discharge data, (2) variable sensitivity of different performance measures to different flow magnitudes, (3) influence of unknown input/output errors and (4) inability to evaluate model performance when observation time periods for discharge and model input data do not overlap. A new calibration method using flow-duration curves (FDCs) was developed which addresses these problems. The method focuses on reproducing the observed discharge frequency distribution rather than the exact hydrograph. It consists of applying limits of acceptability for selected evaluation points (EPs) of the observed uncertain FDC in the extended GLUE approach. Two ways of selecting the EPs were tested - based on equal intervals of discharge and of volume of water. The method was tested and compared to a calibration using the traditional model efficiency for the daily four-parameter WASMOD model in the Paso La Ceiba catchment in Honduras and for Dynamic TOPMODEL evaluated at an hourly time scale for the Brue catchment in Great Britain. The volume method of selecting EPs gave the best results in both catchments with better calibrated slow flow, recession and evaporation than the other criteria. Observed and simulated time series of uncertain discharges agreed better for this method both in calibration and prediction in both catchments without resulting in overpredicted simulated uncertainty. An advantage with the method is that the rejection criterion is based on an estimation of the uncertainty in discharge data and that the EPs of the FDC can be chosen to reflect the aims of the modelling application, e.g. using more/less EPs at high/low flows. While the new method is less sensitive to epistemic input/output errors than the normal use of limits of acceptability applied directly to the time series of discharge, it still requires a reasonable representation of the distribution of inputs. Additional constraints might therefore be required in catchments subject to snow. The results suggest that the new calibration method can be useful when observation time periods for discharge and model input data do not overlap. The new method could also be suitable for calibration to regional FDCs while taking uncertainties in the hydrological model and data into account.

  15. A method to estimate weight and dimensions of aircraft gas turbine engines. Volume 1: Method of analysis

    NASA Technical Reports Server (NTRS)

    Pera, R. J.; Onat, E.; Klees, G. W.; Tjonneland, E.

    1977-01-01

    Weight and envelope dimensions of aircraft gas turbine engines are estimated within plus or minus 5% to 10% using a computer method based on correlations of component weight and design features of 29 data base engines. Rotating components are estimated by a preliminary design procedure where blade geometry, operating conditions, material properties, shaft speed, hub-tip ratio, etc., are the primary independent variables used. The development and justification of the method selected, the various methods of analysis, the use of the program, and a description of the input/output data are discussed.

  16. Modeling of local sea level rise and its future projection under climate change using regional information through EOF analysis

    NASA Astrophysics Data System (ADS)

    Naren, A.; Maity, Rajib

    2017-12-01

    Sea level rise is one of the manifestations of climate change and may pose a threat to coastal regions. Estimates from global circulation models (GCMs) are either not available at coastal locations, due to their coarse spatial resolution, or not reliable, since (interpolated) GCM estimates at coastal locations differ significantly from actual observations over the historical period. We propose a semi-empirical framework to model local sea level rise (SLR) using the possibly existing relationship between local SLR and regional atmospheric/oceanic variables. The selection of input variables, based mostly on the literature, bears the signature of both atmospheric and oceanic variables that possibly have an effect on SLR. The proposed approach offers a method to extract the combined information hidden in the regional fields of atmospheric/oceanic variables for a specific target coastal location. The generality of the approach allows the inclusion of more variables in the input set depending on the geographical location of any coastal station. For demonstration, 14 coastal locations along the Indian coast and islands are considered together with a set of regional atmospheric and oceanic variables. After development and validation of the model at each coastal location with the historical data, the model is used for future projection of local SLR up to the year 2100 for three different future emission scenarios represented by representative concentration pathways (RCPs): RCP2.6, RCP4.5, and RCP8.5. The maximum projected SLR among the locations considered is found to vary from 260.65 to 393.16 mm (RCP8.5) by the end of 2100. The outcome of the proposed approach is expected to be useful in regional coastal management and in developing mitigation strategies in a changing climate.
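
    The core of the framework, extracting regional information through EOF analysis and relating it to the local series, can be sketched as follows. The regional field and the local SLR target are simulated here, and retaining three modes is an arbitrary choice for illustration.

      # Sketch: EOF extraction via SVD of a regional field, then regression to local sea level.
      import numpy as np

      rng = np.random.default_rng(12)
      field = rng.normal(size=(240, 50))          # 240 months x 50 regional grid points (synthetic)
      anom = field - field.mean(axis=0)           # remove the climatological mean at each point

      U, s, Vt = np.linalg.svd(anom, full_matrices=False)
      pcs = U[:, :3] * s[:3]                      # time series of the three leading EOF modes

      local_slr = 0.8 * pcs[:, 0] - 0.3 * pcs[:, 1] + rng.normal(scale=0.5, size=240)  # synthetic
      coef, *_ = np.linalg.lstsq(np.column_stack([pcs, np.ones(240)]), local_slr, rcond=None)
      print("regression coefficients on the leading PCs:", np.round(coef[:3], 3))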

  17. Automatic design of basin-specific drought indexes for highly regulated water systems

    NASA Astrophysics Data System (ADS)

    Zaniolo, Marta; Giuliani, Matteo; Castelletti, Andrea Francesco; Pulido-Velazquez, Manuel

    2018-04-01

    Socio-economic costs of drought are progressively increasing worldwide due to undergoing alterations of hydro-meteorological regimes induced by climate change. Although drought management is largely studied in the literature, traditional drought indexes often fail at detecting critical events in highly regulated systems, where natural water availability is conditioned by the operation of water infrastructures such as dams, diversions, and pumping wells. Here, ad hoc index formulations are usually adopted based on empirical combinations of several, supposed-to-be significant, hydro-meteorological variables. These customized formulations, however, while effective in the design basin, can hardly be generalized and transferred to different contexts. In this study, we contribute FRIDA (FRamework for Index-based Drought Analysis), a novel framework for the automatic design of basin-customized drought indexes. In contrast to ad hoc empirical approaches, FRIDA is fully automated, generalizable, and portable across different basins. FRIDA builds an index representing a surrogate of the drought conditions of the basin, computed by combining all the relevant available information about the water circulating in the system identified by means of a feature extraction algorithm. We used the Wrapper for Quasi-Equally Informative Subset Selection (W-QEISS), which features a multi-objective evolutionary algorithm to find Pareto-efficient subsets of variables by maximizing the wrapper accuracy, minimizing the number of selected variables, and optimizing relevance and redundancy of the subset. The preferred variable subset is selected among the efficient solutions and used to formulate the final index according to alternative model structures. We apply FRIDA to the case study of the Jucar river basin (Spain), a drought-prone and highly regulated Mediterranean water resource system, where an advanced drought management plan relying on the formulation of an ad hoc state index is used for triggering drought management measures. The state index was constructed empirically with a trial-and-error process begun in the 1980s and finalized in 2007, guided by the experts from the Confederación Hidrográfica del Júcar (CHJ). Our results show that the automated variable selection outcomes align with CHJ's 25-year-long empirical refinement. In addition, the resultant FRIDA index outperforms the official State Index in terms of accuracy in reproducing the target variable and cardinality of the selected inputs set.

  18. Regenerative braking device with rotationally mounted energy storage means

    DOEpatents

    Hoppie, Lyle O.

    1982-03-16

    A regenerative braking device for an automotive vehicle includes an energy storage assembly (12) having a plurality of rubber rollers (26, 28) mounted for rotation between an input shaft (30) and an output shaft (32), clutches (50, 56) and brakes (52, 58) associated with each shaft, and a continuously variable transmission (22) connectable to a vehicle drivetrain and to the input and output shafts by the respective clutches. In a second embodiment the clutches and brakes are dispensed with and the variable ratio transmission is connected directly across the input and output shafts. In both embodiments the rubber rollers are torsionally stressed to accumulate energy from the vehicle when the input shaft rotates faster or relative to the output shaft and are torsionally relaxed to deliver energy to the vehicle when the output shaft rotates faster or relative to the input shaft.

  19. A Secure and Reliable High-Performance Field Programmable Gate Array for Information Processing

    DTIC Science & Technology

    2012-03-01

    receives a data token from its control input (shown as a horizontal arrow above). The value of this data token is used to select an input port. The input... dual of a merge. It receives a data token from its control input (shown as a horizontal arrow above). The value of this data token is used to select... Transactions on Computer-Aided Design of Integrated Circuits and Systems, Vol. 26, No. 2, February 2007. [12] Cadence Design Systems, "Clock Domain

  20. The SYSGEN user package

    NASA Technical Reports Server (NTRS)

    Carlson, C. R.

    1981-01-01

    The user documentation of the SYSGEN model and its links with other simulations is described. SYSGEN is a production-costing and reliability model of electric utility systems. Hydroelectric, storage, and time-dependent generating units are modeled in addition to conventional generating plants. Input variables, modeling options, output variables, and report formats are explained. SYSGEN can also be run interactively by using a program called FEPS (Front End Program for SYSGEN). A format for SYSGEN input variables which is designed for use with FEPS is presented.

  1. Automatic differentiation evaluated as a tool for rotorcraft design and optimization

    NASA Technical Reports Server (NTRS)

    Walsh, Joanne L.; Young, Katherine C.

    1995-01-01

    This paper investigates the use of automatic differentiation (AD) as a means for generating sensitivity analyses in rotorcraft design and optimization. This technique transforms an existing computer program into a new program that performs sensitivity analysis in addition to the original analysis. Whereas the original FORTRAN program calculates a set of dependent (output) variables from a set of independent (input) variables, the new FORTRAN program also calculates the partial derivatives of the dependent variables with respect to the independent variables. The AD technique is a systematic implementation of the chain rule of differentiation; it produces derivatives to machine accuracy at a cost comparable with that of finite-differencing methods. For this study, an analysis code that consists of the Langley-developed hover analysis HOVT, the comprehensive rotor analysis CAMRAD/JA, and associated preprocessors is processed through the AD preprocessor ADIFOR 2.0. The resulting derivatives are compared with derivatives obtained from finite-differencing techniques. The derivatives obtained with ADIFOR 2.0 are exact within machine accuracy and, unlike derivatives obtained with finite-differencing techniques, do not depend on the selection of a step size.
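
    ADIFOR performs source-to-source transformation of FORTRAN, but the chain-rule mechanization it implements can be sketched compactly with forward-mode dual numbers; this Python toy is only an illustration of the principle, not of the ADIFOR tool itself.

      import math
      from dataclasses import dataclass

      @dataclass
      class Dual:
          """Dual number: val carries the value, der the exact derivative."""
          val: float
          der: float

          def __add__(self, o):
              o = o if isinstance(o, Dual) else Dual(o, 0.0)
              return Dual(self.val + o.val, self.der + o.der)
          __radd__ = __add__

          def __mul__(self, o):
              o = o if isinstance(o, Dual) else Dual(o, 0.0)
              # product rule, applied mechanically at every operation
              return Dual(self.val * o.val,
                          self.der * o.val + self.val * o.der)
          __rmul__ = __mul__

      def sin(x):
          # chain rule for an elementary function
          return Dual(math.sin(x.val), math.cos(x.val) * x.der)

      x = Dual(2.0, 1.0)        # seed: d(x)/dx = 1
      y = x * sin(x) + x
      print(y.val, y.der)       # exact derivative, no step size involved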

  2. iTOUGH2 Universal Optimization Using the PEST Protocol

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Finsterle, S.A.

    2010-07-01

    iTOUGH2 (http://www-esd.lbl.gov/iTOUGH2) is a computer program for parameter estimation, sensitivity analysis, and uncertainty propagation analysis [Finsterle, 2007a, b, c]. iTOUGH2 contains a number of local and global minimization algorithms for automatic calibration of a model against measured data, or for the solution of other, more general optimization problems (see, for example, Finsterle [2005]). A detailed residual and estimation uncertainty analysis is conducted to assess the inversion results. Moreover, iTOUGH2 can be used to perform a formal sensitivity analysis, or to conduct Monte Carlo simulations for the examination of prediction uncertainties. iTOUGH2's capabilities are continually enhanced. As the name implies, iTOUGH2 is developed for use in conjunction with the TOUGH2 forward simulator for nonisothermal multiphase flow in porous and fractured media [Pruess, 1991]. However, iTOUGH2 provides FORTRAN interfaces for the estimation of user-specified parameters (see subroutine USERPAR) based on user-specified observations (see subroutine USEROBS). These user interfaces can be invoked to add new parameter or observation types to the standard set provided in iTOUGH2. They can also be linked to non-TOUGH2 models, i.e., iTOUGH2 can be used as a universal optimization code, similar to other model-independent, nonlinear parameter estimation packages such as PEST [Doherty, 2008] or UCODE [Poeter and Hill, 1998]. However, to make iTOUGH2's optimization capabilities available for use with an external code, the user is required to write some FORTRAN code that provides the link between the iTOUGH2 parameter vector and the input parameters of the external code, and between the output variables of the external code and the iTOUGH2 observation vector. While allowing for maximum flexibility, the coding requirement of this approach limits its applicability to those users with FORTRAN coding knowledge. To make iTOUGH2 capabilities accessible to many application models, the PEST protocol [Doherty, 2007] has been implemented into iTOUGH2. This protocol enables communication between the application (which can be a single 'black-box' executable or a script or batch file that calls multiple codes) and iTOUGH2. The concept requires that for the application model: (1) input is provided on one or more ASCII text input files; (2) output is returned to one or more ASCII text output files; (3) the model is run using a system command (executable or script/batch file); and (4) the model runs to completion without any user intervention. For each forward run invoked by iTOUGH2, select parameters cited within the application model input files are then overwritten with values provided by iTOUGH2, and select variables cited within the output files are extracted and returned to iTOUGH2. It should be noted that the core of iTOUGH2, i.e., its optimization routines and related analysis tools, remains unchanged; it is only the communication format between input parameters, the application model, and output variables that is borrowed from PEST. The interface routines have been provided by Doherty [2007]. The iTOUGH2-PEST architecture is shown in Figure 1. This manual contains installation instructions for the iTOUGH2-PEST module, and describes the PEST protocol as well as the input formats needed in iTOUGH2. Examples are provided that demonstrate the use of model-independent optimization and analysis using iTOUGH2.
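
    The four requirements of the PEST protocol map naturally onto a thin wrapper around the application model. The sketch below assumes hypothetical file names and formats (model.inp, model.out, model.exe); only the structure (write parameters to an ASCII input file, run the model as a system command, read observations from an ASCII output file) reflects the protocol described above.

      import subprocess

      def forward_run(params):
          """One forward run following the PEST-style protocol."""
          # (1) parameters go into an ASCII input file (hypothetical format)
          with open("model.inp", "w") as f:
              for name, value in params.items():
                  f.write(f"{name} {value:.6e}\n")
          # (2)+(3) the model runs to completion via a system command
          subprocess.run(["./model.exe"], check=True)
          # (4) observations are read back from an ASCII output file
          with open("model.out") as f:
              return [float(line.split()[1]) for line in f]

      # An optimizer (iTOUGH2's role) calls forward_run repeatedly and
      # compares the returned observation vector against measured data.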

  3. 12 CFR Appendix A to Subpart B of... - Risk-Based Capital Test Methodology and Specifications

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    Table-of-contents fragment listing model inputs and outputs: mortgage amortization schedule inputs; loan group inputs for mortgage amortization; multifamily default and prepayment explanatory variables and inputs; loan group inputs for gross loss severity; interest rates outputs.

  4. 12 CFR Appendix A to Subpart B of... - Risk-Based Capital Test Methodology and Specifications

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    Table-of-contents fragment listing model inputs and outputs: mortgage amortization schedule inputs; loan group inputs for mortgage amortization; multifamily default and prepayment explanatory variables and inputs; loan group inputs for gross loss severity; interest rates outputs.

  5. 12 CFR Appendix A to Subpart B of... - Risk-Based Capital Test Methodology and Specifications

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    Table-of-contents fragment listing model inputs and outputs: mortgage amortization schedule inputs; loan group inputs for mortgage amortization; multifamily default and prepayment explanatory variables and inputs; loan group inputs for gross loss severity; interest rates outputs.

  6. 12 CFR Appendix A to Subpart B of... - Risk-Based Capital Test Methodology and Specifications

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    Table-of-contents fragment listing model inputs and outputs: mortgage amortization schedule inputs; loan group inputs for mortgage amortization; multifamily default and prepayment explanatory variables and inputs; loan group inputs for gross loss severity; interest rates outputs.

  7. Spatially Distributed Dendritic Resonance Selectively Filters Synaptic Input

    PubMed Central

    Segev, Idan; Shamma, Shihab

    2014-01-01

    An important task performed by a neuron is the selection of relevant inputs from among thousands of synapses impinging on the dendritic tree. Synaptic plasticity enables this by strengthening a subset of synapses that are, presumably, functionally relevant to the neuron. A different selection mechanism exploits the resonance of the dendritic membranes to preferentially filter synaptic inputs based on their temporal rates. A widely held view is that a neuron has one resonant frequency and thus can preferentially pass inputs arriving at one rate. Here we demonstrate through mathematical analyses and numerical simulations that dendritic resonance is inevitably a spatially distributed property; the resonance frequency therefore varies along the dendrites, endowing neurons with a powerful spatiotemporal selection mechanism that is sensitive both to the dendritic location and to the temporal structure of the incoming synaptic inputs. PMID:25144440

  8. Nowcasting of Low-Visibility Procedure States with Ordered Logistic Regression at Vienna International Airport

    NASA Astrophysics Data System (ADS)

    Kneringer, Philipp; Dietz, Sebastian; Mayr, Georg J.; Zeileis, Achim

    2017-04-01

    Low-visibility conditions have a large impact on aviation safety and on the economic efficiency of airports and airlines. To support decision makers, we develop a statistical probabilistic nowcasting tool for the occurrence of capacity-reducing operations related to low visibility. The probabilities of four different low-visibility classes are predicted with an ordered logistic regression model based on time series of meteorological point measurements. Potential predictor variables for the statistical models are visibility, humidity, temperature, and wind measurements at several measurement sites. A stepwise variable selection method indicates that visibility and humidity measurements are the most important model inputs. The forecasts are tested at 30-minute intervals up to two hours ahead, which is a sufficient time span for tactical planning at Vienna Airport. The ordered logistic regression models outperform persistence and are competitive with human forecasters.
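
    A minimal sketch of such an ordered (proportional-odds) logistic regression, assuming statsmodels' OrderedModel (available in recent versions) and synthetic stand-ins for the visibility and humidity measurements:

      import numpy as np
      import pandas as pd
      from statsmodels.miscmodels.ordinal_model import OrderedModel

      rng = np.random.default_rng(0)
      n = 500
      X = pd.DataFrame({"visibility_m": rng.uniform(100, 5000, n),
                        "rel_humidity": rng.uniform(50, 100, n)})
      # four ordered low-visibility classes, 0 (best) .. 3 (worst)
      latent = (-0.001 * X["visibility_m"] + 0.05 * X["rel_humidity"]
                + rng.logistic(size=n))
      y = pd.cut(latent, bins=4, labels=False)

      model = OrderedModel(y, X, distr="logit")   # proportional-odds model
      result = model.fit(method="bfgs", disp=False)
      print(result.predict(X.iloc[:5]))           # class probabilities per row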

  9. Predicting the effects of magnesium oxide nanoparticles and temperature on the thermal conductivity of water using artificial neural network and experimental data

    NASA Astrophysics Data System (ADS)

    Afrand, Masoud; Hemmat Esfe, Mohammad; Abedini, Ehsan; Teimouri, Hamid

    2017-03-01

    The current paper first presents an empirical correlation, based on experimental results, for estimating the thermal conductivity enhancement of MgO-water nanofluid using a curve-fitting method. Then, artificial neural networks (ANNs) with various numbers of neurons are assessed, with temperature and MgO volume fraction as the input variables and thermal conductivity enhancement as the output variable, to select the most appropriate and optimized network. Results indicated that the network with 7 neurons had the minimum error. Eventually, the output of the artificial neural network was compared with the results of the proposed empirical correlation and those of the experiments. Comparisons revealed that ANN modeling was more accurate than the curve-fitting method in predicting the thermal conductivity enhancement of the nanofluid.
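
    A minimal sketch of the described network, a single hidden layer of 7 neurons mapping (temperature, volume fraction) to conductivity enhancement, using scikit-learn; the data values below are invented placeholders, not the paper's measurements:

      import numpy as np
      from sklearn.neural_network import MLPRegressor
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler

      # placeholder data: temperature (deg C), MgO volume fraction (%)
      # -> thermal conductivity enhancement (%)
      X = np.array([[25, 0.1], [25, 0.4], [40, 0.1], [40, 0.4], [55, 0.1],
                    [55, 0.4], [25, 0.2], [40, 0.2], [55, 0.2], [35, 0.3]])
      y = np.array([2.1, 6.5, 3.0, 8.2, 4.1, 10.3, 3.9, 5.5, 7.2, 6.0])

      # one hidden layer of 7 neurons, matching the selected architecture
      model = make_pipeline(StandardScaler(),
                            MLPRegressor(hidden_layer_sizes=(7,),
                                         max_iter=5000, random_state=0))
      model.fit(X, y)
      print(model.predict([[45, 0.25]]))   # enhancement at new conditions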

  10. Simultaneous use of geological, geophysical, and LANDSAT digital data in uranium exploration. [Libya

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Missallati, A.; Prelat, A.E.; Lyon, R.J.P.

    1979-08-01

    The simultaneous use of geological, geophysical and Landsat data in uranium exploration in southern Libya is reported. The values of 43 geological, geophysical and digital data variables, including age and type of rock, geological contacts, aeroradiometric and aeromagnetic values and brightness ratios, were used as input into a geomathematical model. Stepwise discriminant analysis was used to select the grid cells most favorable for detailed mineral exploration and to evaluate the significance of each variable in discriminating between the anomalous (radioactive) and nonanomalous (nonradioactive) areas. It is found that the geological contact relationships, Landsat Band 6, and Band 7/4 ratio values were most useful in the discrimination. The procedure was found to be statistically and geologically reliable, and applicable to similar regions using only the most important geological and Landsat data.

  11. Egg buoyancy variability in local populations of Atlantic cod (Gadus morhua).

    PubMed

    Jung, Kyung-Mi; Folkvord, Arild; Kjesbu, Olav Sigurd; Agnalt, Ann Lisbeth; Thorsen, Anders; Sundby, Svein

    2012-01-01

    Previous studies have found strong evidence for Atlantic cod (Gadus morhua) egg retention in fjords, caused by the combination of vertical salinity structure, estuarine circulation, and egg specific gravity, supporting small-scale geographical differentiation of local populations. Here, we assess the variability in egg specific gravity for selected local populations of this species, that is, two fjord-spawning populations and one coastal-spawning population from Northern Norway (66-71°N/10-25°E). Eggs were naturally spawned by raised broodstocks (March to April 2009), and egg specific gravity was measured with a density-gradient column. The phenotype of egg specific gravity was similar among the three local populations. However, the associated variability was greater at the individual level than at the population level. The noted gradual decrease in specific gravity from gastrulation to hatching, with an increase just before hatching, could be a generic pattern in pelagic marine fish eggs. This study provides needed input to adequately understand and model fish egg dispersal.

  12. Response of winter and spring wheat grain yields to meteorological variation

    NASA Technical Reports Server (NTRS)

    Feyerherm, A. M.; Kanemasu, E. T.; Paulsen, G. M.

    1977-01-01

    Mathematical models which quantify the relation of wheat yield to selected weather-related variables are presented. Other sources of variation (amount of applied nitrogen, improved varieties, cultural practices) have been incorporated in the models to explain yield variation both singly and in combination with weather-related variables. Separate models were developed for fall-planted (winter) and spring-planted (spring) wheats. Meteorological variation is captured, basically, by daily measurements of minimum and maximum temperatures, precipitation, and tabled values of solar radiation at the edge of the atmosphere and daylength. Two different soil moisture budgets are suggested to compute simulated values of evapotranspiration; one uses the above-mentioned inputs, the other uses the measured temperatures and precipitation but replaces the tabled values (solar radiation and daylength) with measured solar radiation and satellite-derived multispectral scanner data to estimate leaf area index. Weather-related variables are defined by phenological stages, rather than calendar periods, to make the models more universally applicable.

  13. Large-area landslide susceptibility with optimized slope-units

    NASA Astrophysics Data System (ADS)

    Alvioli, Massimiliano; Marchesini, Ivan; Reichenbach, Paola; Rossi, Mauro; Ardizzone, Francesca; Fiorucci, Federica; Guzzetti, Fausto

    2017-04-01

    A Slope-Unit (SU) is a type of morphological terrain unit bounded by drainage and divide lines that maximize the within-unit homogeneity and the between-unit heterogeneity across distinct physical and geographical boundaries [1]. Compared to other terrain subdivisions, SU are morphological terrain units well related to the natural (i.e., geological, geomorphological, hydrological) processes that shape and characterize natural slopes. This makes SU easily recognizable in the field or in topographic base maps, and well suited for environmental and geomorphological analysis, in particular for landslide susceptibility (LS) modelling. An optimal subdivision of an area into a set of SU depends on multiple factors: size and complexity of the study area, quality and resolution of the available terrain elevation data, purpose of the terrain subdivision, and scale and resolution of the phenomena for which SU are delineated. We use the recently developed r.slopeunits software [2,3] for the automatic, parametric delineation of SU within the open-source GRASS GIS, based on terrain elevation data and a small number of user-defined parameters. The software produces subdivisions consisting of SU with different shapes and sizes, as a function of the input parameters. In this work, we describe a procedure for the optimal selection of the user parameters through the production of a large number of realizations of the LS model. We tested the software and the optimization procedure in a 2,000 km2 area in Umbria, Central Italy. For LS zonation we adopt a logistic regression model (LRM) implemented in a well-known software package [4,5], using about 50 independent variables. To select the optimal SU partition for LS zonation, we define a metric able to quantify simultaneously: (i) slope-unit internal homogeneity, (ii) slope-unit external heterogeneity, and (iii) landslide susceptibility model performance. To this end, we define a comprehensive objective function S as the product of three normalized objective functions dealing with points (i)-(iii) independently. We use an intra-segment variance function V, Moran's autocorrelation index I, and the AUCROC function R arising from the application of the logistic regression model. Maximization of the objective function S = f(I,V,R) as a function of the r.slopeunits input parameters provides an objective and reproducible way to select the optimal parameter combination for a proper SU subdivision for LS modelling. We further perform an analysis of the statistical significance of the LS models as a function of the r.slopeunits input parameters, focusing on the degree of coarseness of each subdivision. We find that the LRM, when applied to subdivisions with large average SU size, has a very poor statistical significance, resulting in only a few (5%, typically lithological) variables being used in the regression due to the large heterogeneity of all variables within each unit, while up to 35% of the variables are used when SU are very small. This behavior was largely expected and provides further evidence that an objective method to select SU size is highly desirable. [1] Guzzetti, F. et al., Geomorphology 31 (1999), 181-216. [2] Alvioli, M. et al., Geoscientific Model Development 9 (2016), 3975-3991. [3] http://geomorphology.irpi.cnr.it/tools/slope-units [4] Rossi, M. et al., Geomorphology 114 (2010), 129-142. [5] Rossi, M. and Reichenbach, P., Geoscientific Model Development 9 (2016), 3533-3543.
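
    The combined objective can be sketched as a product of normalized terms. The directionality assumed below (lower intra-segment variance and lower Moran's I are better, higher AUC-ROC is better) and the min-max normalization are the abstract's general description filled in with plausible assumptions:

      def normalize(x, lo, hi):
          """Min-max rescaling over the range explored by all realizations."""
          return (x - lo) / (hi - lo)

      def combined_objective(V, I, R, bounds):
          """S = f(I, V, R): product of three normalized terms.
          V: intra-segment variance (lower = more internal homogeneity);
          I: Moran's autocorrelation index (lower = more external
          heterogeneity); R: AUC-ROC of the susceptibility model (higher
          is better)."""
          v = 1.0 - normalize(V, *bounds["V"])
          i = 1.0 - normalize(I, *bounds["I"])
          r = normalize(R, *bounds["R"])
          return v * i * r

      # bounds collected over all r.slopeunits parameter combinations
      bounds = {"V": (0.02, 0.40), "I": (-0.10, 0.80), "R": (0.60, 0.85)}
      print(combined_objective(V=0.10, I=0.25, R=0.80, bounds=bounds))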

  14. Fabric filter model sensitivity analysis. Final report Jun 1978-Feb 1979

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dennis, R.; Klemm, H.A.; Battye, W.

    1979-04-01

    The report gives results of a series of sensitivity tests of a GCA fabric filter model, as a precursor to further laboratory and/or field tests. Preliminary tests had shown good agreement with field data. However, the apparent agreement between predicted and actual values was based on limited comparisons: validation was carried out without regard to optimization of the data inputs selected by the filter users or manufacturers. The sensitivity tests involved introducing into the model several hypothetical data inputs that reflect the expected ranges in the principal filter system variables. Such factors as air/cloth ratio, cleaning frequency, amount of cleaning, specific resistance coefficient K2, the number of compartments, and inlet concentration were examined in various permutations. A key objective of the tests was to determine the variables that require the greatest accuracy in estimation, based on their overall impact on model output. For K2 variations, the system resistance and emission properties showed little change, but the cleaning requirement changed drastically. On the other hand, considerable difference in outlet dust concentration was indicated when the degree of fabric cleaning was varied. To make the findings more useful to persons assessing the probable success of proposed or existing filter systems, much of the data output is presented in graphs or charts.

  15. Heuristic extraction of rules in pruned artificial neural networks models used for quantifying highly overlapping chromatographic peaks.

    PubMed

    Hervás, César; Silva, Manuel; Serrano, Juan Manuel; Orejuela, Eva

    2004-01-01

    The suitability of an approach for extracting heuristic rules from trained artificial neural networks (ANNs) pruned by a regularization method and with architectures designed by evolutionary computation for quantifying highly overlapping chromatographic peaks is demonstrated. The ANN input data are estimated by the Levenberg-Marquardt method in the form of a four-parameter Weibull curve associated with the profile of the chromatographic band. To test this approach, two N-methylcarbamate pesticides, carbofuran and propoxur, were quantified using a classic peroxyoxalate chemiluminescence reaction as a detection system for chromatographic analysis. Straightforward network topologies (one- and two-output models) allow the analytes to be quantified in concentration ratios ranging from 1:7 to 5:1 with an average standard error of prediction for the generalization test of 2.7 and 2.3% for carbofuran and propoxur, respectively. The reduced dimensions of the selected ANN architectures, especially those obtained after using heuristic rules, allowed simple quantification equations to be developed that transform the input variables into output variables. These equations can be easily interpreted from a chemical point of view to attain quantitative analytical information regarding the effect of both analytes on the characteristics of the chromatographic bands, namely profile, dispersion, peak height, and residence time. Copyright 2004 American Chemical Society
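
    The input-estimation step, fitting a four-parameter Weibull band profile by Levenberg-Marquardt, can be sketched with SciPy; the exact Weibull parameterization used in the paper is not given here, so the one below is a common choice and the data are synthetic:

      import numpy as np
      from scipy.optimize import curve_fit

      def weibull_peak(t, A, t0, b, c):
          """Four-parameter Weibull band: amplitude A, onset t0, scale b,
          shape c (one common parameterization)."""
          z = np.clip((t - t0) / b, 1e-12, None)
          return A * (c / b) * z ** (c - 1) * np.exp(-(z ** c))

      # synthetic chromatographic band standing in for detector readings
      t = np.linspace(0, 10, 200)
      rng = np.random.default_rng(1)
      signal = weibull_peak(t, 50.0, 1.0, 2.5, 1.8) + rng.normal(0, 0.3, t.size)

      # method="lm" is Levenberg-Marquardt, as used for the ANN inputs
      popt, _ = curve_fit(weibull_peak, t, signal,
                          p0=[40, 0.5, 2.0, 1.5], method="lm")
      print(popt)   # the fitted (A, t0, b, c) become the ANN input variables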

  16. The Role of Heart-Rate Variability Parameters in Activity Recognition and Energy-Expenditure Estimation Using Wearable Sensors.

    PubMed

    Park, Heesu; Dong, Suh-Yeon; Lee, Miran; Youn, Inchan

    2017-07-24

    Human-activity recognition (HAR) and energy-expenditure (EE) estimation are major functions in the mobile healthcare system. Both functions have been investigated for a long time; however, several challenges remain unsolved, such as the confusion between activities and the recognition of energy-consuming activities involving little or no movement. To solve these problems, we propose a novel approach using an accelerometer and electrocardiogram (ECG). First, we collected a database of six activities (sitting, standing, walking, ascending, resting and running) of 13 voluntary participants. We compared the HAR performances of three models with respect to the input data type (with none, all, or some of the heart-rate variability (HRV) parameters). The best recognition performance was 96.35%, which was obtained with some selected HRV parameters. EE was also estimated for different choices of the input data type (with or without HRV parameters) and the model type (single and activity-specific). The best estimation performance was found in the case of the activity-specific model with HRV parameters. Our findings indicate that the use of human physiological data, obtained by wearable sensors, has a significant impact on both HAR and EE estimation, which are crucial functions in the mobile healthcare system.

  17. Exploring the full natural variability of eruption sizes within probabilistic hazard assessment of tephra dispersal

    NASA Astrophysics Data System (ADS)

    Selva, Jacopo; Sandri, Laura; Costa, Antonio; Tonini, Roberto; Folch, Arnau; Macedonio, Giovanni

    2014-05-01

    The intrinsic uncertainty and variability associated with the size of the next eruption strongly affect short- to long-term tephra hazard assessment. Often, emergency plans are established accounting for the effects of one or a few representative scenarios (meant as specific combinations of eruptive size and vent position), selected with subjective criteria. Probabilistic hazard assessments (PHA), by contrast, consistently explore the natural variability of such scenarios. PHA for tephra dispersal needs the definition of eruptive scenarios (usually by grouping possible eruption sizes and vent positions into classes) with associated probabilities, a meteorological dataset covering a representative time period, and a tephra dispersal model. PHA results from combining simulations considering different volcanological and meteorological conditions through a weight given by their specific probability of occurrence. However, volcanological parameters, such as erupted mass, eruption column height and duration, bulk granulometry, and the fraction of aggregates, typically encompass a wide range of values. Because of such variability, single representative scenarios or size classes cannot be adequately defined using single values for the volcanological inputs. Here we propose a method that accounts for this within-size-class variability in the framework of event trees. The variability of each parameter is modeled with specific probability density functions, and meteorological and volcanological inputs are chosen by using a stratified sampling method. This procedure avoids the bias introduced by selecting single representative scenarios and thus neglecting most of the intrinsic eruptive variability. When considering within-size-class variability, attention must be paid to appropriately weighting events falling within the same size class. While assigning a uniform weight to all events belonging to a size class is the most straightforward idea, it implies a strong dependence on the thresholds dividing classes: under this choice, the largest event of a size class has a much larger weight than the smallest event of the subsequent size class. To overcome this problem, we propose an innovative solution that smoothly links the weight variability within each size class to the variability among the size classes through a common power law and, simultaneously, respects the probability of the different size classes conditional on the occurrence of an eruption. Embedding this procedure into the Bayesian Event Tree scheme enables tephra fall PHA, quantified through hazard curves and maps representing readable results applicable in planning risk mitigation actions, as well as the quantification of its epistemic uncertainties. As examples, we analyze long-term tephra fall PHA at Vesuvius and Campi Flegrei. We integrate two tephra dispersal models (the analytical HAZMAP and the numerical FALL3D) into BET_VH. The ECMWF reanalysis dataset is used for exploring different meteorological conditions. The results obtained clearly show that PHA accounting for the whole natural variability significantly differs from that based on representative scenarios, as in common volcanic hazard practice.
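
    The within-class weighting idea can be sketched as follows, under the assumption that the common power law acts on erupted mass and that weights are renormalized within each class to match its conditional probability; the exponent and class boundaries are illustrative:

      import numpy as np

      def event_weights(masses, class_edges, class_probs, b=1.0):
          """Weight sampled eruptive masses by a common power law w ~ m**-b,
          renormalized within each size class so the class's weights sum to
          its conditional probability."""
          masses = np.asarray(masses, dtype=float)
          w = masses ** -b
          labels = np.digitize(masses, class_edges)   # size-class membership
          for k, p in enumerate(class_probs):
              members = labels == k
              if members.any():
                  w[members] *= p / w[members].sum()
          return w

      # e.g. three size classes split at 1e10 and 1e11 kg of erupted mass
      w = event_weights([5e9, 2e10, 8e10, 3e11], class_edges=[1e10, 1e11],
                        class_probs=[0.70, 0.25, 0.05])
      print(w, w.sum())   # weights sum to 1 over all sampled events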

  18. Inverter ratio failure detector

    NASA Technical Reports Server (NTRS)

    Wagner, A. P.; Ebersole, T. J.; Andrews, R. E. (Inventor)

    1974-01-01

    A failure detector which detects the failure of a dc to ac inverter is disclosed. The inverter under failureless conditions is characterized by a known linear relationship between its input and output voltages and by a known linear relationship between its input and output currents. The detector includes circuitry which is responsive to the inverter's input and output voltages and which provides a failure-indicating signal only when the monitored output voltage is less, by a selected factor, than the expected output voltage for the monitored input voltage, based on the known voltage relationship. Similarly, the detector includes circuitry which is responsive to the input and output currents and provides a failure-indicating signal only when the input current exceeds, by a selected factor, the expected input current for the monitored output current, based on the known current relationship.

  19. Decision Support System for Evaluation of Gunnison River Flow Regimes With Respect To Resources of the Black Canyon of the Gunnison National Park

    USGS Publications Warehouse

    Auble, Gregor T.; Wondzell, Mark; Talbert, Colin

    2009-01-01

    This report describes and documents a decision support system for the Gunnison River in Black Canyon of the Gunnison National Park. It is a macro-embedded EXCEL program that calculates and displays indicators representing valued characteristics or processes in the Black Canyon based on daily flows of the Gunnison River. The program is designed to easily accept input from downloaded stream gage records or output from the RIVERWARE reservoir operations model being used for the upstream Aspinall Unit. The decision support system is structured to compare as many as eight alternative flow regimes, where each alternative is represented by a daily sequence of at least 20 calendar years of streamflow. Indicators include selected flow statistics, riparian plant community distribution, clearing of box elder by inundation and scour, several measures of sediment mobilization, trout fry habitat, and federal reserved water rights. Calculation of variables representing National Park Service federal reserved water rights requires additional secondary input files pertaining to forecast and actual basin inflows and storage levels in Blue Mesa reservoir. Example input files representing a range of situations including historical, reconstructed natural, and simulated alternative reservoir operations are provided with the software.

  20. A novel Gaussian process regression model for state-of-health estimation of lithium-ion battery using charging curve

    NASA Astrophysics Data System (ADS)

    Yang, Duo; Zhang, Xu; Pan, Rui; Wang, Yujie; Chen, Zonghai

    2018-04-01

    The state-of-health (SOH) estimation is always a crucial issue for lithium-ion batteries. In order to provide an accurate and reliable SOH estimation, a novel Gaussian process regression (GPR) model based on the charging curve is proposed in this paper. Unlike other studies, where SOH is commonly estimated from cycle life, in this work four specific parameters extracted from charging curves are used as inputs of the GPR model instead of cycle numbers. These parameters can reflect the battery aging phenomenon from different angles. The grey relational analysis method is applied to analyze the relational grade between the selected features and SOH. In addition, some adjustments are made in the proposed GPR model: the covariance function design and the similarity measurement of input variables are modified so as to improve the SOH estimation accuracy and adapt to the case of multidimensional input. Several aging datasets from the NASA data repository are used to demonstrate the estimation performance of the proposed method. Results show that the proposed method has high SOH estimation accuracy. Besides, a battery with a dynamic discharging profile is used to verify the robustness and reliability of this method.
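
    A minimal sketch of such a GPR with a per-dimension (ARD-style) covariance, standing in for the paper's modified covariance design, using scikit-learn and invented placeholder features:

      import numpy as np
      from sklearn.gaussian_process import GaussianProcessRegressor
      from sklearn.gaussian_process.kernels import RBF, WhiteKernel

      # placeholder features: four parameters extracted from each charging
      # curve (e.g. durations/slopes of charging phases), plus a fake SOH
      rng = np.random.default_rng(0)
      X = rng.uniform(0, 1, size=(60, 4))
      soh = 1.0 - 0.3 * X[:, 0] + 0.1 * X[:, 1] + rng.normal(0, 0.01, 60)

      # one length-scale per input dimension, so the similarity measure
      # adapts to multidimensional inputs; WhiteKernel models noise
      kernel = RBF(length_scale=np.ones(4)) + WhiteKernel(noise_level=1e-3)
      gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
      gpr.fit(X, soh)

      mean, std = gpr.predict(X[:3], return_std=True)
      print(mean, std)   # SOH estimates with predictive uncertainty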

  1. Wave-plate structures, power selective optical filter devices, and optical systems using same

    DOEpatents

    Koplow, Jeffrey P [San Ramon, CA

    2012-07-03

    In an embodiment, an optical filter device includes an input polarizer for selectively transmitting an input signal. The device includes a wave-plate structure positioned to receive the input signal, which includes first and second substantially zero-order, zero-wave plates arranged in series with and oriented at an angle relative to each other. The first and second zero-wave plates are configured to alter a polarization state of the input signal passing through them in a manner that depends on the power of the input signal. Each zero-wave plate includes an entry and an exit wave plate, each having a fast axis, with the fast axes oriented substantially perpendicular to each other. Each entry wave plate is oriented relative to a transmission axis of the input polarizer at a respective angle. An output polarizer is positioned to receive the signal output from the wave-plate structure and selectively transmits the signal based on its polarization state.

  2. The Effect of Visual Variability on the Learning of Academic Concepts.

    PubMed

    Bourgoyne, Ashley; Alt, Mary

    2017-06-10

    The purpose of this study was to identify effects of variability of visual input on development of conceptual representations of academic concepts for college-age students with normal language (NL) and those with language-learning disabilities (LLD). Students with NL (n = 11) and LLD (n = 11) participated in a computer-based training for introductory biology course concepts. Participants were trained on half the concepts under a low-variability condition and half under a high-variability condition. Participants completed a posttest in which they were asked to identify and rate the accuracy of novel and trained visual representations of the concepts. We performed separate repeated measures analyses of variance to examine the accuracy of identification and ratings. Participants were equally accurate on trained and novel items in the high-variability condition, but were less accurate on novel items only in the low-variability condition. The LLD group showed the same pattern as the NL group; they were just less accurate. Results indicated that high-variability visual input may facilitate the acquisition of academic concepts in college students with NL and LLD. High-variability visual input may be especially beneficial for generalization to novel representations of concepts. Implicit learning methods may be harnessed by college courses to provide students with basic conceptual knowledge when they are entering courses or beginning new units.

  3. Revealing unobserved factors underlying cortical activity with a rectified latent variable model applied to neural population recordings.

    PubMed

    Whiteway, Matthew R; Butts, Daniel A

    2017-03-01

    The activity of sensory cortical neurons is not only driven by external stimuli but also shaped by other sources of input to the cortex. Unlike external stimuli, these other sources of input are challenging to experimentally control, or even observe, and as a result contribute to variability of neural responses to sensory stimuli. However, such sources of input are likely not "noise" and may play an integral role in sensory cortex function. Here we introduce the rectified latent variable model (RLVM) in order to identify these sources of input using simultaneously recorded cortical neuron populations. The RLVM is novel in that it employs nonnegative (rectified) latent variables and is much less restrictive in the mathematical constraints on solutions because of the use of an autoencoder neural network to initialize model parameters. We show that the RLVM outperforms principal component analysis, factor analysis, and independent component analysis, using simulated data across a range of conditions. We then apply this model to two-photon imaging of hundreds of simultaneously recorded neurons in mouse primary somatosensory cortex during a tactile discrimination task. Across many experiments, the RLVM identifies latent variables related to both the tactile stimulation as well as nonstimulus aspects of the behavioral task, with a majority of activity explained by the latter. These results suggest that properly identifying such latent variables is necessary for a full understanding of sensory cortical function and demonstrate novel methods for leveraging large population recordings to this end. NEW & NOTEWORTHY The rapid development of neural recording technologies presents new opportunities for understanding patterns of activity across neural populations. Here we show how a latent variable model with appropriate nonlinear form can be used to identify sources of input to a neural population and infer their time courses. Furthermore, we demonstrate how these sources are related to behavioral contexts outside of direct experimental control. Copyright © 2017 the American Physiological Society.

  4. Not All Children Agree: Acquisition of Agreement when the Input Is Variable

    ERIC Educational Resources Information Center

    Miller, Karen

    2012-01-01

    In this paper we investigate the effect of variable input on the acquisition of grammar. More specifically, we examine the acquisition of the third person singular marker -s on the auxiliary "do" in comprehension and production in two groups of children who are exposed to similar varieties of English but that differ with respect to adult…

  5. Examining variation in treatment costs: a cost function for outpatient methadone treatment programs.

    PubMed

    Dunlap, Laura J; Zarkin, Gary A; Cowell, Alexander J

    2008-06-01

    To estimate a hybrid cost function of the relationship between total annual cost for outpatient methadone treatment and output (annual patient days and selected services), input prices (wages and building space costs), and selected program and patient case-mix characteristics. Data are from a multistate study of 159 methadone treatment programs that participated in the Center for Substance Abuse Treatment's Evaluation of the Methadone/LAAM Treatment Program Accreditation Project between 1998 and 2000. Using least squares regression for weighted data, we estimate the relationship between total annual costs and selected output measures, wages, building space costs, and selected program and patient case-mix characteristics. Findings indicate that total annual cost is positively associated with a program's annual patient days, with a 10 percent increase in patient days associated with an 8.2 percent increase in total cost. Total annual cost also increases with counselor wages (p<.01), but no significant association is found for nurse wages or monthly building costs. Surprisingly, program characteristics and patient case-mix variables do not appear to explain variations in methadone treatment costs. Similar results are found for a model with services as outputs. This study provides important new insights into the determinants of methadone treatment costs. Our findings concur with economic theory in that total annual cost is positively related to counselor wages. However, among our factor inputs, counselor wages are the only significant driver of these costs. Furthermore, our findings suggest that methadone programs may realize economies of scale; however, other important factors, such as patient access, should be considered.

  6. [Different wavelengths selection methods for identification of early blight on tomato leaves by using hyperspectral imaging technique].

    PubMed

    Cheng, Shu-Xi; Xie, Chuan-Qi; Wang, Qiao-Nan; He, Yong; Shao, Yong-Ni

    2014-05-01

    Identification of early blight on tomato leaves by using hyperspectral imaging technique based on different effective wavelength selection methods (successive projections algorithm, SPA; x-loading weights, x-LW; Gram-Schmidt orthogonalization, GSO) was studied in the present paper. Hyperspectral images of seventy healthy and seventy infected tomato leaves were obtained by a hyperspectral imaging system across the wavelength range of 380-1023 nm. The reflectance of all pixels in the region of interest (ROI) was extracted with ENVI 4.7 software. A least-squares support vector machine (LS-SVM) model was established based on the full spectral wavelengths. It obtained an excellent result, with the highest identification accuracy (100%) in both the calibration and prediction sets. Then, EW-LS-SVM and EW-LDA models were established based on the selected wavelengths suggested by SPA, x-LW and GSO, respectively. The results showed that all of the EW-LS-SVM and EW-LDA models performed well, with identification accuracies of 100% for the EW-LS-SVM models and of 100%, 100% and 97.83% for the EW-LDA models, respectively. Moreover, the numbers of input wavelengths of the SPA-LS-SVM, x-LW-LS-SVM and GSO-LS-SVM models were four (492, 550, 633 and 680 nm), three (631, 719 and 747 nm) and two (533 and 657 nm), respectively. Fewer input variables are beneficial for the development of identification instruments. It is demonstrated that it is feasible to identify early blight on tomato leaves by using hyperspectral imaging, and that SPA, x-LW and GSO are effective wavelength selection methods.
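
    Of the three selection methods, SPA is the most algorithmically self-contained: it greedily picks the wavelength whose reflectance column has the largest norm after projecting out the span of the wavelengths already chosen, which suppresses collinearity. A NumPy sketch, with random data standing in for the ROI reflectance matrix:

      import numpy as np

      def spa(X, n_select, start=0):
          """Successive projections algorithm: greedily pick the column
          (wavelength) with the largest norm after projecting out the span
          of the columns already selected."""
          selected = [start]
          P = X.astype(float).copy()
          for _ in range(n_select - 1):
              v = P[:, selected[-1]]
              P = P - np.outer(v, v @ P) / (v @ v)   # remove v's direction
              norms = np.linalg.norm(P, axis=0)
              norms[selected] = -np.inf              # never re-pick
              selected.append(int(np.argmax(norms)))
          return selected

      # rows: ROI spectra (140 leaves); columns: 644 wavelengths
      X = np.random.default_rng(0).normal(size=(140, 644))
      print(spa(X, n_select=4))   # indices of four selected wavelengths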

  7. Evaluation of RPE-Select: A Web-Based Respiratory Protective Equipment Selector Tool.

    PubMed

    Vaughan, Nick; Rajan-Sithamparanadarajah, Bob; Atkinson, Robert

    2016-08-01

    This article describes the evaluation of an open-access web-based respiratory protective equipment selector tool (RPE-Select, accessible at http://www.healthyworkinglives.com/rpe-selector). This tool is based on the principles of the COSHH-Essentials (C-E) control banding (CB) tool, which was developed for the exposure risk management of hazardous chemicals in the workplace by small and medium sized enterprises (SMEs) and general practice H&S professionals. RPE-Select can be used for identifying adequate and suitable RPE for dusts, fibres, mist (solvent, water, and oil based), sprays, volatile solids, fumes, gases, vapours, and actual or potential oxygen deficiency. It can be applied for substances and products with safety data sheets as well as for a large number of commonly encountered process-generated substances (PGS), such as poultry house dusts or welding fume. Potential international usability has been built-in by using the Hazard Statements developed for the Globally Harmonised System (GHS) and providing recommended RPE in picture form as well as with a written specification. Illustration helps to compensate for the variabilities in assigned protection factors across the world. RPE-Select uses easily understandable descriptions/explanations and an interactive stepwise flow for providing input/answers at each step. The output of the selection process is a report summarising the user input data and a selection of RPE, including types of filters where applicable, from which the user can select the appropriate one for each wearer. In addition, each report includes 'Dos' and 'Don'ts' for the recommended RPE. RPE-Select outcomes, based on up to 20 hypothetical use scenarios, were evaluated in comparison with other available RPE selection processes and tools, and by 32 independent users with a broad range of familiarities with industrial use scenarios in general and respiratory protection in particular. For scenarios involving substances having safety data sheets, 87% of RPE-Select outcomes resulted in a 'safe' RPE selection, while 98% 'safe' outcomes were achieved for scenarios involving process-generated substances. Reasons for the outliers were examined. User comments and opinions on the mechanics and usability of RPE-Select are also presented. © Crown copyright 2016.

  8. Evaluation of RPE-Select: A Web-Based Respiratory Protective Equipment Selector Tool

    PubMed Central

    Vaughan, Nick; Rajan-Sithamparanadarajah, Bob; Atkinson, Robert

    2016-01-01

    This article describes the evaluation of an open-access web-based respiratory protective equipment selector tool (RPE-Select, accessible at http://www.healthyworkinglives.com/rpe-selector). This tool is based on the principles of the COSHH-Essentials (C-E) control banding (CB) tool, which was developed for the exposure risk management of hazardous chemicals in the workplace by small and medium sized enterprises (SMEs) and general practice H&S professionals. RPE-Select can be used for identifying adequate and suitable RPE for dusts, fibres, mist (solvent, water, and oil based), sprays, volatile solids, fumes, gases, vapours, and actual or potential oxygen deficiency. It can be applied for substances and products with safety data sheets as well as for a large number of commonly encountered process-generated substances (PGS), such as poultry house dusts or welding fume. Potential international usability has been built-in by using the Hazard Statements developed for the Globally Harmonised System (GHS) and providing recommended RPE in picture form as well as with a written specification. Illustration helps to compensate for the variabilities in assigned protection factors across the world. RPE-Select uses easily understandable descriptions/explanations and an interactive stepwise flow for providing input/answers at each step. The output of the selection process is a report summarising the user input data and a selection of RPE, including types of filters where applicable, from which the user can select the appropriate one for each wearer. In addition, each report includes ‘Dos’ and ‘Don’ts’ for the recommended RPE. RPE-Select outcomes, based on up to 20 hypothetical use scenarios, were evaluated in comparison with other available RPE selection processes and tools, and by 32 independent users with a broad range of familiarities with industrial use scenarios in general and respiratory protection in particular. For scenarios involving substances having safety data sheets, 87% of RPE-Select outcomes resulted in a ‘safe’ RPE selection, while 98% ‘safe’ outcomes were achieved for scenarios involving process-generated substances. Reasons for the outliers were examined. User comments and opinions on the mechanics and usability of RPE-Select are also presented. PMID:27286763

  9. INFANT HEALTH PRODUCTION FUNCTIONS: WHAT A DIFFERENCE THE DATA MAKE

    PubMed Central

    Reichman, Nancy E.; Corman, Hope; Noonan, Kelly; Dave, Dhaval

    2008-01-01

    SUMMARY We examine the extent to which infant health production functions are sensitive to model specification and measurement error. We focus on the importance of typically unobserved but theoretically important variables (typically unobserved variables, TUVs), other non-standard covariates (NSCs), input reporting, and characterization of infant health. The TUVs represent wantedness, taste for risky behavior, and maternal health endowment. The NSCs include father characteristics. We estimate the effects of prenatal drug use, prenatal cigarette smoking, and first-trimester prenatal care on birth weight, low birth weight, and a measure of abnormal infant health conditions. We compare estimates using self-reported inputs versus input measures that combine information from medical records and self-reports. We find that TUVs and NSCs are significantly associated with both inputs and outcomes, but that excluding them from infant health production functions does not appreciably affect the input estimates. However, using self-reported inputs leads to overestimated effects of inputs, particularly prenatal care, on outcomes, and using a direct measure of infant health does not always yield input estimates similar to those when using birth weight outcomes. The findings have implications for research, data collection, and public health policy. PMID:18792077

  10. Water and nitrogen management effects on semiarid sorghum production and soil trace gas flux under future climate.

    PubMed

    Duval, Benjamin D; Ghimire, Rajan; Hartman, Melannie D; Marsalis, Mark A

    2018-01-01

    External inputs to agricultural systems can overcome latent soil and climate constraints on production, while contributing to greenhouse gas emissions from fertilizer and water management inefficiencies. Proper crop selection for a given region can lessen the need for irrigation, and timing N fertilizer application to crop N demand can potentially reduce N2O emissions and increase N use efficiency while reducing residual soil N and N leaching. However, increased variability in precipitation is an expectation of climate change and makes predicting biomass and gas flux responses to management more challenging. We used the DayCent model to test hypotheses about input intensity controls on sorghum (Sorghum bicolor (L.) Moench) productivity and greenhouse gas emissions in the southwestern United States under future climate. Sorghum had been previously parameterized for DayCent, but an inverse-modeling, parameter-estimation method significantly improved model validation against field data. Aboveground production and N2O flux were more responsive to N additions than to irrigation, but simulations with future climate produced lower values for sorghum than current climate. We found positive interactions between irrigation and increased N application for N2O and CO2 fluxes. Extremes in sorghum production under future climate were a function of biomass accumulation trajectories related to daily soil water and mineral N. Root C inputs correlated with soil organic C pools, but overall soil C declined at the decadal scale under current weather, while modest gains were simulated under future weather. Scaling biomass and N2O fluxes by unit N and water input revealed that sorghum can be productive without irrigation, and that the effect of irrigating crops is difficult to forecast when precipitation is variable within the growing season. These simulation results demonstrate the importance of understanding sorghum production and greenhouse gas emissions at daily scales when assessing the effects of annual and decadal-scale management decisions on aspects of arid and semiarid agroecosystem biogeochemistry.

  11. Analysis and selection of optimal function implementations in massively parallel computer

    DOEpatents

    Archer, Charles Jens [Rochester, MN; Peters, Amanda [Rochester, MN; Ratterman, Joseph D [Rochester, MN

    2011-05-31

    An apparatus, program product and method optimize the operation of a parallel computer system by, in part, collecting performance data for a set of implementations of a function capable of being executed on the parallel computer system based upon the execution of the set of implementations under varying input parameters in a plurality of input dimensions. The collected performance data may be used to generate selection program code that is configured to call selected implementations of the function in response to a call to the function under varying input parameters. The collected performance data may be used to perform more detailed analysis to ascertain the comparative performance of the set of implementations of the function under the varying input parameters.
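
    The patented idea, profile a set of implementations over a grid of input parameters and then generate a dispatcher that picks the best-performing one, can be sketched in a few lines; the nearest-neighbor lookup below is a simplification of whatever selection program code the apparatus actually generates:

      import time

      def collect_performance(implementations, input_grid):
          """Time each implementation across a grid of input parameters
          and record the fastest one per grid point."""
          table = {}
          for params in input_grid:
              timings = {}
              for name, fn in implementations.items():
                  t0 = time.perf_counter()
                  fn(*params)
                  timings[name] = time.perf_counter() - t0
              table[params] = min(timings, key=timings.get)
          return table

      def make_selector(implementations, table):
          """Generate a dispatcher that calls the implementation that was
          fastest for the nearest profiled input parameters."""
          def call(*params):
              key = min(table, key=lambda k: sum((a - b) ** 2
                                                 for a, b in zip(k, params)))
              return implementations[table[key]](*params)
          return call

      impls = {"builtin": lambda n: sum(range(n)),        # O(n) loop
               "formula": lambda n: n * (n - 1) // 2}     # closed form
      grid = [(10,), (10_000,), (10_000_000,)]
      selector = make_selector(impls, collect_performance(impls, grid))
      print(selector(20_000_000))   # dispatches to the profiled winner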

  12. Input Subject Diversity Accelerates the Growth of Tense and Agreement: Indirect Benefits From a Parent-Implemented Intervention

    PubMed Central

    Rispoli, Matthew; Holt, Janet K.

    2017-01-01

    Purpose This follow-up study examined whether a parent intervention that increased the diversity of lexical noun phrase subjects in parent input and accelerated children's sentence diversity (Hadley et al., 2017) had indirect benefits on tense/agreement (T/A) morphemes in parent input and children's spontaneous speech. Method Differences in input variables related to T/A marking were compared for parents who received toy talk instruction and a quasi-control group: input informativeness and full "is" declaratives. Language growth on tense agreement productivity (TAP) was modeled for 38 children from language samples obtained at 21, 24, 27, and 30 months. Parent input properties following instruction and children's growth in lexical diversity and sentence diversity were examined as predictors of TAP growth. Results Instruction increased parent use of full "is" declaratives (ηp² ≥ .25) but not input informativeness. Children's sentence diversity was also a significant time-varying predictor of TAP growth. Two input variables, lexical noun phrase subject diversity and full "is" declaratives, were also significant predictors, even after controlling for children's sentence diversity. Conclusions These findings establish a link between children's sentence diversity and the development of T/A morphemes and provide evidence about characteristics of input that facilitate growth in this grammatical system. PMID:28892819

  13. Uncertainty analysis of the simulations of effects of discharging treated wastewater to the Red River of the North at Fargo, North Dakota, and Moorhead, Minnesota

    USGS Publications Warehouse

    Wesolowski, Edwin A.

    1996-01-01

    Two separate studies to simulate the effects of discharging treated wastewater to the Red River of the North at Fargo, North Dakota, and Moorhead, Minnesota, have been completed. In the first study, the Red River at Fargo Water-Quality Model was calibrated and verified for ice-free conditions. In the second study, the Red River at Fargo Ice-Cover Water-Quality Model was verified for ice-cover conditions. To better understand and apply the Red River at Fargo Water-Quality Model and the Red River at Fargo Ice-Cover Water-Quality Model, the uncertainty associated with simulated constituent concentrations and property values was analyzed and quantified using the Enhanced Stream Water Quality Model-Uncertainty Analysis. The Monte Carlo simulation and first-order error analysis methods were used to analyze the uncertainty in simulated values for six constituents and properties at sites 5, 10, and 14 (in upstream to downstream order). The constituents and properties analyzed for uncertainty are specific conductance, total organic nitrogen (reported as nitrogen), total ammonia (reported as nitrogen), total nitrite plus nitrate (reported as nitrogen), 5-day carbonaceous biochemical oxygen demand for ice-cover conditions and ultimate carbonaceous biochemical oxygen demand for ice-free conditions, and dissolved oxygen. Results are given in detail for both the ice-cover and ice-free conditions for specific conductance, total ammonia, and dissolved oxygen. The sensitivity and uncertainty of the simulated constituent concentrations and property values to input variables differ substantially between ice-cover and ice-free conditions. During ice-cover conditions, simulated specific-conductance values are most sensitive to the headwater-source specific-conductance values upstream of site 10 and the point-source specific-conductance values downstream of site 10. These headwater-source and point-source specific-conductance values also are the key sources of uncertainty. Simulated total ammonia concentrations are most sensitive to the point-source total ammonia concentrations at all three sites. Other input variables that contribute substantially to the variability of simulated total ammonia concentrations are the headwater-source total ammonia and the instream reaction coefficient for biological decay of total ammonia to total nitrite. Simulated dissolved-oxygen concentrations at all three sites are most sensitive to the headwater-source dissolved-oxygen concentration. This input variable is the key source of variability for simulated dissolved-oxygen concentrations at sites 5 and 10. Headwater-source and point-source dissolved-oxygen concentrations are the key sources of variability for simulated dissolved-oxygen concentrations at site 14. During ice-free conditions, simulated specific-conductance values at all three sites are most sensitive to the headwater-source specific-conductance values. Headwater-source specific-conductance values also are the key source of uncertainty. The input variables to which total ammonia and dissolved oxygen are most sensitive vary from site to site and may or may not correspond to the input variables that contribute the most to the variability. The input variables that contribute the most to the variability of simulated total ammonia concentrations are point-source total ammonia, the instream reaction coefficient for biological decay of total ammonia to total nitrite, and Manning's roughness coefficient. The input variables that contribute the most to the variability of simulated dissolved-oxygen concentrations are the reaeration rate, the sediment oxygen demand rate, and headwater-source algae as chlorophyll a.
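
    The Monte Carlo side of such an uncertainty analysis follows a generic recipe: sample the uncertain inputs from assumed distributions, run the model, and summarize the spread of the outputs. The toy model and distributions below are invented for illustration and are not the Red River models:

      import numpy as np

      def propagate(model, input_dists, n=10_000, seed=0):
          """Monte Carlo uncertainty analysis: sample the uncertain inputs,
          run the model, summarize the output spread."""
          rng = np.random.default_rng(seed)
          samples = {name: draw(rng, n) for name, draw in input_dists.items()}
          out = model(**samples)
          return out.mean(), out.std()

      # toy downstream dissolved-oxygen balance (illustration only)
      def do_model(do_head, do_point, decay):
          return 0.8 * do_head + 0.2 * do_point - decay

      dists = {"do_head": lambda r, n: r.normal(9.0, 0.5, n),   # mg/L
               "do_point": lambda r, n: r.normal(6.0, 0.8, n),  # mg/L
               "decay": lambda r, n: r.uniform(0.2, 0.6, n)}    # mg/L
      print(propagate(do_model, dists))   # mean and spread of downstream DO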

  14. A general procedure to generate models for urban environmental-noise pollution using feature selection and machine learning methods.

    PubMed

    Torija, Antonio J; Ruiz, Diego P

    2015-02-01

    The prediction of environmental noise in urban environments requires the solution of a complex and non-linear problem, since there are complex relationships among the multitude of variables involved in the characterization and modelling of environmental noise and environmental-noise magnitudes. Moreover, the inclusion of the great spatial heterogeneity characteristic of urban environments seems to be essential in order to achieve an accurate environmental-noise prediction in cities. This problem is addressed in this paper, where a procedure based on feature-selection techniques and machine-learning regression methods is proposed and applied to this environmental problem. Three machine-learning regression methods, which are considered very robust in solving non-linear problems, are used to estimate the energy-equivalent sound-pressure level descriptor (LAeq). These three methods are: (i) multilayer perceptron (MLP), (ii) sequential minimal optimisation (SMO), and (iii) Gaussian processes for regression (GPR). In addition, because of the high number of input variables involved in environmental-noise modelling and estimation in urban environments, which make LAeq prediction models quite complex and costly in terms of time and resources for application to real situations, three different techniques are used to approach feature selection or data reduction. The feature-selection techniques used are: (i) correlation-based feature-subset selection (CFS), (ii) wrapper for feature-subset selection (WFS), and the data reduction technique is principal-component analysis (PCA). The subsequent analysis leads to a proposal of different schemes, depending on the needs regarding data collection and accuracy. The use of WFS as the feature-selection technique with the implementation of SMO or GPR as regression algorithm provides the best LAeq estimation (R² = 0.94 and mean absolute error (MAE) = 1.14-1.16 dB(A)). Copyright © 2014 Elsevier B.V. All rights reserved.

  15. Kernel-PCA data integration with enhanced interpretability

    PubMed Central

    2014-01-01

    Background Nowadays, combining the different sources of information to improve the biological knowledge available is a challenge in bioinformatics. Among the most powerful methods for integrating heterogeneous data types are kernel-based methods. Kernel-based data integration approaches consist of two basic steps: firstly the right kernel is chosen for each data set; secondly the kernels from the different data sources are combined to give a complete representation of the available data for a given statistical task. Results We analyze the integration of data from several sources of information using kernel PCA, from the point of view of reducing dimensionality. Moreover, we improve the interpretability of kernel PCA by adding to the plot the representation of the input variables that belong to any dataset. In particular, for each input variable or linear combination of input variables, we can represent the direction of maximum growth locally, which allows us to identify those samples with higher/lower values of the variables analyzed. Conclusions The integration of different datasets and the simultaneous representation of samples and variables together give us a better understanding of biological knowledge. PMID:25032747
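
    The two-step recipe (one kernel per source, then a combination) can be sketched briefly. The snippet below assumes two synthetic data matrices X1 and X2 describing the same samples and uses an unweighted kernel sum, which is only one of several possible combination rules:

    ```python
    import numpy as np
    from sklearn.decomposition import KernelPCA
    from sklearn.metrics.pairwise import rbf_kernel

    rng = np.random.default_rng(1)
    X1 = rng.normal(size=(100, 30))   # e.g. one omics data source (synthetic)
    X2 = rng.normal(size=(100, 12))   # e.g. clinical variables (synthetic)

    # Step 1: choose a kernel for each data set.
    K1 = rbf_kernel(X1)
    K2 = rbf_kernel(X2)

    # Step 2: combine the kernels and project with kernel PCA.
    K = K1 + K2
    kpca = KernelPCA(n_components=2, kernel="precomputed")
    Z = kpca.fit_transform(K)         # 2-D embedding of the integrated samples
    print(Z.shape)                    # (100, 2)
    ```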

  16. A new interpretation and validation of variance based importance measures for models with correlated inputs

    NASA Astrophysics Data System (ADS)

    Hao, Wenrui; Lu, Zhenzhou; Li, Luyi

    2013-05-01

    In order to explore the contributions by correlated input variables to the variance of the output, a novel interpretation framework of importance measure indices is proposed for a model with correlated inputs, which includes the indices of the total correlated contribution and the total uncorrelated contribution. The proposed indices accurately describe the connotations of the contributions by the correlated input to the variance of output, and they can be viewed as the complement and correction of the interpretation about the contributions by the correlated inputs presented in "Estimation of global sensitivity indices for models with dependent variables, Computer Physics Communications, 183 (2012) 937-946". Both of them contain the independent contribution by an individual input. Taking the general form of quadratic polynomial as an illustration, the total correlated contribution and the independent contribution by an individual input are derived analytically, from which the components and their origins of both contributions of correlated input can be clarified without any ambiguity. In the special case that no square term is included in the quadratic polynomial model, the total correlated contribution by the input can be further decomposed into the variance contribution related to the correlation of the input with other inputs and the independent contribution by the input itself, and the total uncorrelated contribution can be further decomposed into the independent part by interaction between the input and others and the independent part by the input itself. Numerical examples are employed and their results demonstrate that the derived analytical expressions of the variance-based importance measure are correct, and the clarification of the correlated input contribution to model output by the analytical derivation is very important for expanding the theory and solutions of uncorrelated input to those of the correlated one.
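
    In rough notation (an assumption here, since the paper's own symbols are not reproduced), the two proposed indices for an input X_i decompose, in the special case without square terms, as:

    ```latex
    % V_i^C : total correlated contribution of X_i to V(Y)
    % V_i^U : total uncorrelated contribution of X_i to V(Y)
    \[
      V_i^{C} = V_i^{\mathrm{ind}} + V_i^{\mathrm{corr}}, \qquad
      V_i^{U} = V_i^{\mathrm{ind}} + V_i^{\mathrm{int}},
    \]
    % where V_i^ind is the independent contribution of X_i itself, V_i^corr the
    % part induced by the correlation of X_i with the other inputs, and V_i^int
    % the independent part arising from interaction between X_i and the others.
    ```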

  17. Guidance for Selecting Input Parameters in Modeling the Environmental Fate and Transport of Pesticides

    EPA Pesticide Factsheets

    Guidance to select and prepare input values for OPP's aquatic exposure models. Intended to improve the consistency in modeling the fate of pesticides in the environment and quality of OPP's aquatic risk assessments.

  18. Statistical Analysis of 30 Years Rainfall Data: A Case Study

    NASA Astrophysics Data System (ADS)

    Arvind, G.; Ashok Kumar, P.; Girish Karthi, S.; Suribabu, C. R.

    2017-07-01

    Rainfall is a prime input for various engineering designs such as hydraulic structures, bridges and culverts, canals, storm water sewers and road drainage systems. A detailed statistical analysis of each region is essential to estimate the relevant input values for the design and analysis of engineering structures and also for crop planning. A rain gauge station located in Trichy district, where agriculture is the prime occupation, is selected for statistical analysis. Daily rainfall data for a period of 30 years are used to characterize normal, deficit, excess and seasonal rainfall of the selected circle headquarters. Further, the various plotting position formulae available are used to evaluate return periods of monthly, seasonal and annual rainfall. This analysis provides useful information for water resources planners, farmers and urban engineers to assess the availability of water and plan storage accordingly. The mean, standard deviation and coefficient of variation of monthly and annual rainfall were calculated to check the rainfall variability. From the calculated results, the rainfall pattern is found to be erratic. The best-fit probability distribution was identified based on the minimum deviation between actual and estimated values. The analysis also helped determine the proper onset and withdrawal of the monsoon, information used for land preparation and sowing.
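
    The return-period step is simple enough to show directly. The sketch below applies the Weibull plotting position T = (n + 1)/m to a synthetic annual series (values hypothetical), together with the mean/standard-deviation/coefficient-of-variation check mentioned above:

    ```python
    import numpy as np

    annual_rain_mm = np.array([812., 1020., 640., 955., 730.,
                               1188., 870., 990., 705., 1110.])  # synthetic record

    mean = annual_rain_mm.mean()
    std = annual_rain_mm.std(ddof=1)
    cv = 100.0 * std / mean                   # coefficient of variation, %

    ranked = np.sort(annual_rain_mm)[::-1]    # rank 1 = largest value
    m = np.arange(1, ranked.size + 1)
    T = (ranked.size + 1) / m                 # Weibull return period, years

    for rain, t in zip(ranked, T):
        print(f"{rain:7.1f} mm  ->  T = {t:4.1f} yr")
    print(f"mean = {mean:.1f} mm, std = {std:.1f} mm, CV = {cv:.1f}%")
    ```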

  19. Phosphorus component in AnnAGNPS

    USGS Publications Warehouse

    Yuan, Y.; Bingner, R.L.; Theurer, F.D.; Rebich, R.A.; Moore, P.A.

    2005-01-01

    The USDA Annualized Agricultural Non-Point Source Pollution model (AnnAGNPS) has been developed to aid in evaluation of watershed response to agricultural management practices. Previous studies have demonstrated the capability of the model to simulate runoff and sediment, but not phosphorus (P). The main purpose of this article is to evaluate the performance of AnnAGNPS on P simulation using comparisons with measurements from the Deep Hollow watershed of the Mississippi Delta Management Systems Evaluation Area (MDMSEA) project. A sensitivity analysis was performed to identify input parameters whose impact is the greatest on P yields. Sensitivity analysis results indicate that the most sensitive variables of those selected are initial soil P contents, P application rate, and plant P uptake. AnnAGNPS simulations of dissolved P yield do not agree well with observed dissolved P yield (Nash-Sutcliffe coefficient of efficiency of 0.34, R2 of 0.51, and slope of 0.24); however, AnnAGNPS simulations of total P yield agree well with observed total P yield (Nash-Sutcliffe coefficient of efficiency of 0.85, R2 of 0.88, and slope of 0.83). The difference in dissolved P yield may be attributed to limitations in model simulation of P processes. Uncertainties in input parameter selections also affect the model's performance.
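
    For reference, the goodness-of-fit statistics quoted above can be computed as follows; the sketch assumes paired observed/simulated P yields (values synthetic):

    ```python
    import numpy as np

    obs = np.array([1.2, 0.8, 2.5, 1.9, 0.6, 3.1])   # observed total P (synthetic)
    sim = np.array([1.0, 0.9, 2.2, 1.6, 0.7, 2.6])   # simulated total P (synthetic)

    # Nash-Sutcliffe coefficient of efficiency.
    nse = 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)
    r2 = np.corrcoef(obs, sim)[0, 1] ** 2            # coefficient of determination
    slope = np.polyfit(obs, sim, 1)[0]               # slope of sim-vs-obs regression

    print(f"NSE = {nse:.2f}, R2 = {r2:.2f}, slope = {slope:.2f}")
    ```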

  20. The Effect of Vocabulary Self-Selection Strategy and Input Enhancement Strategy on the Vocabulary Knowledge of Iranian EFL Learners

    ERIC Educational Resources Information Center

    Masoudi, Golfam

    2017-01-01

    The present study was designed to investigate empirically the effect of Vocabulary Self-Selection strategy and Input Enhancement strategy on the vocabulary knowledge of Iranian EFL Learners. After taking a diagnostic pretest, both experimental groups enrolled in two classes. Learners who practiced Vocabulary Self-Selection were allowed to…

  1. Blade loss transient dynamics analysis. Volume 3: User's manual for TETRA program

    NASA Technical Reports Server (NTRS)

    Black, G. R.; Gallardo, V. C.; Storace, A. S.; Sagendorph, F.

    1981-01-01

    The user's manual for TETRA contains program logic, flow charts, error messages, input sheets, modeling instructions, option descriptions, input variable descriptions, and demonstration problems. The process of obtaining a NASTRAN 17.5-generated modal input file for TETRA is also described with a worked sample.

  2. CHARACTERISTIC LENGTH SCALE OF INPUT DATA IN DISTRIBUTED MODELS: IMPLICATIONS FOR MODELING GRID SIZE. (R824784)

    EPA Science Inventory

    The appropriate spatial scale for a distributed energy balance model was investigated by: (a) determining the scale of variability associated with the remotely sensed and GIS-generated model input data; and (b) examining the effects of input data spatial aggregation on model resp...

  3. Multiplexer and time duration measuring circuit

    DOEpatents

    Gray, Jr., James

    1980-01-01

    A multiplexer device is provided for multiplexing data in the form of randomly developed, variable width pulses from a plurality of pulse sources to a master storage. The device includes a first multiplexer unit which includes a plurality of input circuits each coupled to one of the pulse sources, with all input circuits being disabled when one input circuit receives an input pulse so that only one input pulse is multiplexed by the multiplexer unit at any one time.

  4. Monthly streamflow forecasting using continuous wavelet and multi-gene genetic programming combination

    NASA Astrophysics Data System (ADS)

    Hadi, Sinan Jasim; Tombul, Mustafa

    2018-06-01

    Streamflow is an essential component of the hydrologic cycle at the regional and global scale and the main source of fresh water supply. It is highly associated with natural disasters, such as droughts and floods. Therefore, accurate streamflow forecasting is essential. Forecasting streamflow in general and monthly streamflow in particular is a complex process that cannot be handled by data-driven models (DDMs) alone and requires pre-processing. Wavelet transformation is a pre-processing technique; however, application of continuous wavelet transformation (CWT) produces many scales that cause deterioration in the performance of any DDM because of the high number of redundant variables. This study proposes multigene genetic programming (MGGP) as a selection tool. After the CWT analysis, it selects the important scales to be fed into the artificial neural network (ANN). A basin located in the southeast of Turkey is selected as a case study to prove the forecasting ability of the proposed model. One-month-ahead downstream flow is used as the output, and downstream flow, upstream flow, rainfall, temperature, and potential evapotranspiration with associated lags are used as inputs. Before modeling, wavelet coherence transformation (WCT) analysis was conducted to analyze the relationship between variables in the time-frequency domain. Several combinations were developed to investigate the effect of the variables on streamflow forecasting. The results indicated a high localized correlation between the streamflow and the other variables, especially the upstream flow. In the standalone layout, where the data were entered into the ANN and MGGP without CWT, the performance was poor. In the best-scale layout, where the single CWT scale with the highest correlation is chosen and entered into the ANN and MGGP, the performance increased slightly. Using the proposed model, the performance improved dramatically, particularly in forecasting the peak values, because several scales in which seasonality and irregularity can be captured are included. Using hydrological and meteorological variables also improved the ability to forecast the streamflow.
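
    The pre-processing idea can be sketched compactly. The snippet below decomposes a synthetic monthly flow series with a CWT and keeps the scales most correlated with next-month flow; plain correlation stands in for the MGGP selector, so this illustrates the layout rather than the paper's actual selection rule:

    ```python
    import numpy as np
    import pywt

    rng = np.random.default_rng(2)
    n = 240                                    # 20 years of monthly flow (synthetic)
    t = np.arange(n)
    flow = 10 + 3 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 1, n)

    scales = np.arange(1, 33)
    coefs, _ = pywt.cwt(flow, scales, "morl")  # coefs shape: (n_scales, n)

    target = flow[1:]                          # one-month-ahead flow
    corr = [abs(np.corrcoef(c[:-1], target)[0, 1]) for c in coefs]
    best = np.argsort(corr)[-3:]               # three most informative scales

    X = coefs[best, :-1].T                     # candidate inputs for the ANN
    print("selected scales:", scales[best], "| input matrix:", X.shape)
    ```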

  5. Generation of Multivariate Surface Weather Series with Use of the Stochastic Weather Generator Linked to Regional Climate Model

    NASA Astrophysics Data System (ADS)

    Dubrovsky, M.; Farda, A.; Huth, R.

    2012-12-01

    The regional-scale simulations of weather-sensitive processes (e.g. hydrology, agriculture and forestry) for the present and/or future climate often require high resolution meteorological inputs in terms of the time series of selected surface weather characteristics (typically temperature, precipitation, solar radiation, humidity, wind) for a set of stations or on a regular grid. As even the latest Global and Regional Climate Models (GCMs and RCMs) do not provide realistic representation of statistical structure of the surface weather, the model outputs must be postprocessed (downscaled) to achieve the desired statistical structure of the weather data before being used as an input to the follow-up simulation models. One of the downscaling approaches, which is employed also here, is based on a weather generator (WG), which is calibrated using the observed weather series and then modified (in case of simulations for the future climate) according to the GCM- or RCM-based climate change scenarios. The present contribution uses the parametric daily weather generator M&Rfi to follow two aims: (1) Validation of the new simulations of the present climate (1961-1990) made by the ALADIN-Climate/CZ (v.2) Regional Climate Model at 25 km resolution. The WG parameters will be derived from the RCM-simulated surface weather series and compared to those derived from observational data in the Czech meteorological stations. The set of WG parameters will include selected statistics of the surface temperature and precipitation (characteristics of the mean, variability, interdiurnal variability and extremes). (2) Testing a potential of RCM output for calibration of the WG for the ungauged locations. The methodology being examined will consist in using the WG, whose parameters are interpolated from the surrounding stations and then corrected based on a RCM-simulated spatial variability. The quality of the weather series produced by the WG calibrated in this way will be assessed in terms of selected climatic characteristics focusing on extreme precipitation and temperature characteristics (including characteristics of dry/wet/hot/cold spells). Acknowledgements: The present experiment is made within the frame of projects ALARO (project P209/11/2405 sponsored by the Czech Science Foundation), WG4VALUE (project LD12029 sponsored by the Ministry of Education, Youth and Sports) and VALUE (COST ES 1102 action).

  6. The impact of 14-nm photomask uncertainties on computational lithography solutions

    NASA Astrophysics Data System (ADS)

    Sturtevant, John; Tejnil, Edita; Lin, Tim; Schultze, Steffen; Buck, Peter; Kalk, Franklin; Nakagawa, Kent; Ning, Guoxiang; Ackmann, Paul; Gans, Fritz; Buergel, Christian

    2013-04-01

    Computational lithography solutions rely upon accurate process models to faithfully represent the imaging system output for a defined set of process and design inputs. These models, which must balance accuracy demands with simulation runtime boundary conditions, rely upon the accurate representation of multiple parameters associated with the scanner and the photomask. While certain system input variables, such as scanner numerical aperture, can be empirically tuned to wafer CD data over a small range around the presumed set point, it can be dangerous to do so since CD errors can alias across multiple input variables. Therefore, many input variables for simulation are based upon designed or recipe-requested values or independent measurements. It is known, however, that certain measurement methodologies, while precise, can have significant inaccuracies. Additionally, there are known errors associated with the representation of certain system parameters. With shrinking total CD control budgets, appropriate accounting for all sources of error becomes more important, and the cumulative consequence of input errors to the computational lithography model can become significant. In this work, we examine, with a simulation sensitivity study, the impact of errors in the representation of photomask properties including CD bias, corner rounding, refractive index, thickness, and sidewall angle. The factors most critical to represent accurately in the model are cataloged. CD bias values are based on state-of-the-art mask manufacturing data, while changes in the other variables are postulated, highlighting the need for improved metrology and awareness.

  7. Prediction of problematic wine fermentations using artificial neural networks.

    PubMed

    Román, R César; Hernández, O Gonzalo; Urtubia, U Alejandra

    2011-11-01

    Artificial neural networks (ANNs) have been used for the recognition of non-linear patterns, a characteristic of bioprocesses like wine production. In this work, ANNs were tested to predict problems of wine fermentation. A database of about 20,000 data points from industrial Cabernet Sauvignon fermentations, covering 33 variables, was used. Two different ways of inputting data into the model were studied, by points and by fermentation. Additionally, different sub-cases were studied by varying the predictor variables (total sugar, alcohol, glycerol, density, organic acids and nitrogen compounds) and the time of fermentation (72, 96 and 256 h). The input of data by fermentations gave better results than the input of data by points. In fact, it was possible to predict 100% of normal and problematic fermentations using three predictor variables: sugars, density and alcohol at 72 h (3 days). Overall, ANNs were capable of achieving 80% correct prediction using only one predictor variable at 72 h; however, it is recommended to add more fermentations to confirm this promising result.

  8. Metamodeling and mapping of nitrate flux in the unsaturated zone and groundwater, Wisconsin, USA

    NASA Astrophysics Data System (ADS)

    Nolan, Bernard T.; Green, Christopher T.; Juckem, Paul F.; Liao, Lixia; Reddy, James E.

    2018-04-01

    Nitrate contamination of groundwater in agricultural areas poses a major challenge to the sustainability of water resources. Aquifer vulnerability models are useful tools that can help resource managers identify areas of concern, but quantifying nitrogen (N) inputs in such models is challenging, especially at large spatial scales. We sought to improve regional nitrate (NO3-) input functions by characterizing unsaturated zone NO3- transport to groundwater through use of surrogate, machine-learning metamodels of a process-based N flux model. The metamodels used boosted regression trees (BRTs) to relate mappable landscape variables to parameters and outputs of a previous "vertical flux method" (VFM) applied at sampled wells in the Fox, Wolf, and Peshtigo (FWP) river basins in northeastern Wisconsin. In this context, the metamodels upscaled the VFM results throughout the region, and the VFM parameters and outputs are the metamodel response variables. The study area encompassed the domain of a detailed numerical model that provided additional predictor variables, including groundwater recharge, to the metamodels. We used a statistical learning framework to test a range of model complexities to identify suitable hyperparameters of the six BRT metamodels corresponding to each response variable of interest: NO3- source concentration factor (which determines the local NO3- input concentration); unsaturated zone travel time; NO3- concentration at the water table in 1980, 2000, and 2020 (three separate metamodels); and NO3- "extinction depth", the eventual steady state depth of the NO3- front. The final metamodels were trained to 129 wells within the active numerical flow model area, and considered 58 mappable predictor variables compiled in a geographic information system (GIS). These metamodels had training and cross-validation testing R2 values of 0.52 - 0.86 and 0.22 - 0.38, respectively, and predictions were compiled as maps of the above response variables. Testing performance was reasonable, considering that we limited the metamodel predictor variables to mappable factors as opposed to using all available VFM input variables. Relationships between metamodel predictor variables and mapped outputs were generally consistent with expectations, e.g. with greater source concentrations and NO3- at the groundwater table in areas of intensive crop use and well drained soils. Shorter unsaturated zone travel times in poorly drained areas likely indicated preferential flow through clay soils, and a tendency for fine grained deposits to collocate with areas of shallower water table. Numerical estimates of groundwater recharge were important in the metamodels and may have been a proxy for N input and redox conditions in the northern FWP, which had shallow predicted NO3- extinction depth. The metamodel results provide proof-of-concept for regional characterization of unsaturated zone NO3- transport processes in a statistical framework based on readily mappable GIS input variables.
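
    A boosted-regression-tree metamodel of this kind reduces to a familiar supervised-learning setup. The sketch below is hypothetical (synthetic data matching the quoted dimensions: 129 wells, 58 GIS predictors) and uses a small hyperparameter grid to echo the statistical learning framework described above:

    ```python
    import numpy as np
    from sklearn.ensemble import GradientBoostingRegressor
    from sklearn.model_selection import GridSearchCV

    rng = np.random.default_rng(3)
    X = rng.normal(size=(129, 58))             # 129 wells x 58 mappable predictors
    y = X[:, 0] - 0.5 * X[:, 10] + rng.normal(0, 0.5, 129)   # stand-in VFM response

    grid = {"n_estimators": [100, 300],
            "learning_rate": [0.01, 0.1],
            "max_depth": [2, 3]}
    search = GridSearchCV(GradientBoostingRegressor(), grid, cv=5, scoring="r2")
    search.fit(X, y)

    print("best hyperparameters:", search.best_params_)
    print("cross-validated R^2:", round(search.best_score_, 2))
    ```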

  9. Etching Characteristics of VO2 Thin Films Using Inductively Coupled Cl2/Ar Plasma

    NASA Astrophysics Data System (ADS)

    Ham, Yong-Hyun; Efremov, Alexander; Min, Nam-Ki; Lee, Hyun Woo; Yun, Sun Jin; Kwon, Kwang-Ho

    2009-08-01

    A study of both the etching characteristics and the mechanism of VO2 thin films in a Cl2/Ar inductively coupled plasma was carried out. The variable parameters were gas pressure (4-10 mTorr) and input power (400-700 W) at a fixed bias power of 150 W and an initial mixture composition of 25% Cl2 + 75% Ar. It was found that increases in both gas pressure and input power increase the VO2 etch rate, while the etch selectivity over photoresist remains nearly constant. Plasma diagnostics with Langmuir probes and a zero-dimensional plasma model provided data on plasma parameters, steady-state densities and fluxes of active species on the etched surface. The model-based analysis of the etch mechanism showed that, for the given ranges of operating conditions, the VO2 etch kinetics corresponds to the transitional regime of ion-assisted chemical reaction and is influenced by both neutral and ion fluxes, with a higher sensitivity to the neutral flux.

  10. Two-Stage Variable Sample-Rate Conversion System

    NASA Technical Reports Server (NTRS)

    Tkacenko, Andre

    2009-01-01

    A two-stage variable sample-rate conversion (SRC) system has been proposed as part of a digital signal-processing system in a digital communication radio receiver that utilizes a variety of data rates. The proposed system would be used as an interface between (1) an analog-to-digital converter used in the front end of the receiver to sample an intermediate-frequency signal at a fixed input rate and (2) digitally implemented tracking loops in subsequent stages that operate at various sample rates that are generally lower than the input sample rate. This two-stage system would be capable of converting from an input sample rate to a desired lower output sample rate that could be variable and not necessarily a rational fraction of the input rate.
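
    One plausible reading of the two-stage idea (an interpretation for illustration, not the proposed design itself) is a fixed rational conversion stage followed by flexible interpolation onto an arbitrary output grid:

    ```python
    import numpy as np
    from scipy.signal import resample_poly

    fs_in = 100_000.0                      # fixed front-end sample rate (assumed)
    fs_out = 13_337.5                      # arbitrary target output rate (assumed)

    t = np.arange(0, 0.01, 1 / fs_in)
    x = np.sin(2 * np.pi * 1000 * t)       # 1 kHz test tone

    # Stage 1: efficient rational conversion close to the target rate.
    x1 = resample_poly(x, up=1, down=4)
    fs1 = fs_in / 4

    # Stage 2: interpolate onto the exact, possibly non-rational, output grid.
    t1 = np.arange(x1.size) / fs1
    t_out = np.arange(0.0, t1[-1], 1 / fs_out)
    y = np.interp(t_out, t1, x1)
    print(x.size, "->", x1.size, "->", y.size, "samples")
    ```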

  11. Soil organic carbon dynamics jointly controlled by climate, carbon inputs, soil properties and soil carbon fractions.

    PubMed

    Luo, Zhongkui; Feng, Wenting; Luo, Yiqi; Baldock, Jeff; Wang, Enli

    2017-10-01

    Soil organic carbon (SOC) dynamics are regulated by the complex interplay of climatic, edaphic and biotic conditions. However, the interrelation of SOC and these drivers and their potential connection networks are rarely assessed quantitatively. Using observations of SOC dynamics with detailed soil properties from 90 field trials at 28 sites under different agroecosystems across the Australian cropping regions, we investigated the direct and indirect effects of climate, soil properties, carbon (C) inputs and soil C pools (a total of 17 variables) on the SOC change rate (r_C, Mg C ha^-1 yr^-1). Among these variables, we found that the most influential variables on r_C were the average C input amount and annual precipitation, and the total SOC stock at the beginning of the trials. Overall, C inputs (including C input amount and pasture frequency in the crop rotation system) accounted for 27% of the relative influence on r_C, followed by climate 25% (including precipitation and temperature), soil C pools 24% (including pool size and composition) and soil properties (such as cation exchange capacity, clay content, bulk density) 24%. Path analysis identified a network of intercorrelations of climate, soil properties, C inputs and soil C pools in determining r_C. The direct correlation of r_C with climate was significantly weakened when the effects of soil properties and C pools were removed, and vice versa. These results reveal the relative importance of climate, soil properties, C inputs and C pools and their complex interconnections in regulating SOC dynamics. Ignoring the impact of changes in soil properties, C pool composition and C input (quantity and quality) on SOC dynamics is likely one of the main sources of uncertainty in SOC predictions from process-based SOC models. © 2017 John Wiley & Sons Ltd.

  12. The intralaminar thalamus—an expressway linking visual stimuli to circuits determining agency and action selection

    PubMed Central

    Fisher, Simon D.; Reynolds, John N. J.

    2014-01-01

    Anatomical investigations have revealed connections between the intralaminar thalamic nuclei and areas such as the superior colliculus (SC) that receive short latency input from visual and auditory primary sensory areas. The intralaminar nuclei in turn project to the major input nucleus of the basal ganglia, the striatum, providing this nucleus with a source of subcortical excitatory input. Together with a converging input from the cerebral cortex, and a neuromodulatory dopaminergic input from the midbrain, the components previously found necessary for reinforcement learning in the basal ganglia are present. With this intralaminar sensory input, the basal ganglia are thought to play a primary role in determining what aspect of an organism’s own behavior has caused salient environmental changes. Additionally, subcortical loops through thalamic and basal ganglia nuclei are proposed to play a critical role in action selection. In this mini review we will consider the anatomical and physiological evidence underlying the existence of these circuits. We will propose how the circuits interact to modulate basal ganglia output and solve common behavioral learning problems of agency determination and action selection. PMID:24765070

  13. Scheduling the blended solution as industrial CO2 absorber in separation process by back-propagation artificial neural networks.

    PubMed

    Abdollahi, Yadollah; Sairi, Nor Asrina; Said, Suhana Binti Mohd; Abouzari-lotf, Ebrahim; Zakaria, Azmi; Sabri, Mohd Faizul Bin Mohd; Islam, Aminul; Alias, Yatimah

    2015-11-05

    It is believed that 80% of industrial carbon dioxide can be controlled by separation and storage technologies that use blended ionic liquid absorbers. Among the blended absorbers, the mixture of water, N-methyldiethanolamine (MDEA) and guanidinium trifluoromethane sulfonate (gua) has shown superior stripping qualities. However, the blended solution has a high viscosity that affects the cost of the separation process. In this work, the fabrication of the blend was scheduled, that is, the process was arranged, controlled and optimized. The blend's components and the operating temperature were modeled and optimized as effective input variables to minimize its viscosity as the final output by using a back-propagation artificial neural network (ANN). The modeling was carried out with four mathematical algorithms and individual experimental designs to obtain the optimum topology using root mean squared error (RMSE), R-squared (R(2)) and absolute average deviation (AAD). As a result, the final model (QP-4-8-1), with the minimum RMSE and AAD as well as the highest R(2), was selected to guide the fabrication of the blended solution. The model was applied to obtain the optimum initial levels of the input variables, which included temperature 303-323 K, x[gua] 0-0.033, x[MDEA] 0.3-0.4, and x[H2O] 0.7-1.0. Moreover, the model gave the relative importance order of the variables: x[gua]>temperature>x[MDEA]>x[H2O]. Therefore, none of the variables was negligible in the fabrication. Furthermore, the model predicted the optimum points of the variables to minimize the viscosity, which were validated by further experiments. The validated results confirmed the schedulability of the model. Accordingly, ANN succeeded in modeling the initial components of blended solutions as CO2 absorbers in separation technologies that can be scaled up to industry. Copyright © 2015 Elsevier B.V. All rights reserved.

  14. Interactive FORTRAN IV computer programs for the thermodynamic and transport properties of selected cryogens (fluids pack)

    NASA Technical Reports Server (NTRS)

    Mccarty, R. D.

    1980-01-01

    The thermodynamic and transport properties of selected cryogens have been programmed into a series of computer routines. Input variables are any two of P, rho or T in the single-phase regions and either P or T for the saturated liquid or vapor state. The output is pressure, density, temperature, entropy and enthalpy for all of the fluids and, in most cases, specific heat capacity and speed of sound. Viscosity and thermal conductivity are also given for most of the fluids. The programs are designed for access by remote terminal; however, they have been written in a modular form to allow the user to select either specific fluids or specific properties for particular needs. Properties are included for hydrogen, helium, neon, nitrogen, oxygen, argon, and methane, for gaseous and liquid states, usually from the triple point to some upper limit of pressure and temperature which varies from fluid to fluid.

  15. Real time selective harmonic minimization for multilevel inverters using genetic algorithm and artificial neural network angle generation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Filho, Faete J; Tolbert, Leon M; Ozpineci, Burak

    2012-01-01

    The work developed here proposes a methodology for calculating switching angles for varying DC sources in a multilevel cascaded H-bridge converter. In this approach the required fundamental is achieved, the lower harmonics are minimized, and the system can be implemented in real time with low memory requirements. A genetic algorithm (GA) is the stochastic search method used to find the solution for the set of equations where the input voltages are the known variables and the switching angles are the unknown variables. With the dataset generated by the GA, an artificial neural network (ANN) is trained to store the solutions without excessive memory storage requirements. This trained ANN then senses the voltage of each cell and produces the switching angles in order to regulate the fundamental at 120 V and eliminate or minimize the low-order harmonics while operating in real time.

  16. Multistage variable probability forest volume inventory. [Defiance Unit of the Navajo Nation in Arizona and Colorado

    NASA Technical Reports Server (NTRS)

    Anderson, J. E. (Principal Investigator)

    1979-01-01

    The net board-foot volume (Scribner log rule) of the standing Ponderosa pine timber on the Defiance Unit of the Navajo Nation's forested land was estimated using a multistage forest volume inventory scheme with variable sample selection probabilities. The inventory designed to accomplish this task required that both LANDSAT MSS digital data and aircraft-acquired data be used to locate one-acre ground plots, which were subsequently visited by ground teams conducting detailed tree measurements using an optical dendrometer. The dendrometer measurements were then punched on computer input cards and entered in a computer program developed by the U.S. Forest Service. The resulting individual tree volume estimates were then expanded through the use of a statistically defined equation to produce the volume estimate for the entire area, which includes 192,026 acres, approximately 44% of the total forested area of the Navajo Nation.

  17. An Optimization Framework for Dynamic Hybrid Energy Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wenbo Du; Humberto E Garcia; Christiaan J.J. Paredis

    A computational framework for the efficient analysis and optimization of dynamic hybrid energy systems (HES) is developed. A microgrid system with multiple inputs and multiple outputs (MIMO) is modeled using the Modelica language in the Dymola environment. The optimization loop is implemented in MATLAB, with the FMI Toolbox serving as the interface between the computational platforms. Two characteristic optimization problems are selected to demonstrate the methodology and gain insight into the system performance. The first is an unconstrained optimization problem that optimizes the dynamic properties of the battery, reactor and generator to minimize variability in the HES. The second problem takes operating and capital costs into consideration by imposing linear and nonlinear constraints on the design variables. The preliminary optimization results obtained in this study provide an essential step towards the development of a comprehensive framework for designing HES.

  18. Prediction of near-surface soil moisture at large scale by digital terrain modeling and neural networks.

    PubMed

    Lavado Contador, J F; Maneta, M; Schnabel, S

    2006-10-01

    The capability of artificial neural network models to forecast near-surface soil moisture at fine spatial resolution has been tested for a 99.5 ha watershed located in SW Spain, using several easy-to-obtain digital models of topographic and land cover variables as inputs and a series of soil moisture measurements as the training data set. The study methods were designed to determine the potential of the neural network model as a tool to gain insight into soil moisture distribution factors, and to optimize the data sampling scheme by finding the optimum size of the training data set. Results suggest the methods are efficient in forecasting soil moisture and in assessing the optimum number of field samples, and highlight the importance of the variables selected in explaining the final map obtained.

  19. Parameterization of typhoon-induced ocean cooling using temperature equation and machine learning algorithms: an example of typhoon Soulik (2013)

    NASA Astrophysics Data System (ADS)

    Wei, Jun; Jiang, Guo-Qing; Liu, Xin

    2017-09-01

    This study proposed three algorithms that can potentially be used to provide sea surface temperature (SST) conditions for typhoon prediction models. Different from traditional data assimilation approaches, which provide prescribed initial/boundary conditions, our proposed algorithms aim to resolve a flow-dependent SST feedback between growing typhoons and oceans in the future time. Two of these algorithms are based on linear temperature equations (TE-based), and the other is based on an innovative technique involving machine learning (ML-based). The algorithms are then implemented into a Weather Research and Forecasting model for the simulation of typhoon to assess their effectiveness, and the results show significant improvement in simulated storm intensities by including ocean cooling feedback. The TE-based algorithm I considers wind-induced ocean vertical mixing and upwelling processes only, and thus obtained a synoptic and relatively smooth sea surface temperature cooling. The TE-based algorithm II incorporates not only typhoon winds but also ocean information, and thus resolves more cooling features. The ML-based algorithm is based on a neural network, consisting of multiple layers of input variables and neurons, and produces the best estimate of the cooling structure, in terms of its amplitude and position. Sensitivity analysis indicated that the typhoon-induced ocean cooling is a nonlinear process involving interactions of multiple atmospheric and oceanic variables. Therefore, with an appropriate selection of input variables and neuron sizes, the ML-based algorithm appears to be more efficient in prognosing the typhoon-induced ocean cooling and in predicting typhoon intensity than those algorithms based on linear regression methods.

  20. Stylus/tablet user input device for MRI heart wall segmentation: efficiency and ease of use.

    PubMed

    Taslakian, Bedros; Pires, Antonio; Halpern, Dan; Babb, James S; Axel, Leon

    2018-05-02

    To determine whether use of a stylus user input device (UID) would be superior to a mouse for CMR segmentation. Twenty-five consecutive clinical cardiac magnetic resonance (CMR) examinations were selected. Image analysis was independently performed by four observers. Manual tracing of left (LV) and right (RV) ventricular endocardial contours was performed twice in 10 randomly assigned sessions, each session using only one UID. Segmentation time and the ventricular function variables were recorded. The mean segmentation time and time reduction were calculated for each method. Intraclass correlation coefficients (ICC) and Bland-Altman plots of function variables were used to assess intra- and interobserver variability and agreement between methods. Observers completed a Likert-type questionnaire. The mean segmentation time (in seconds) was significantly less with the stylus compared to the mouse, averaging 206±108 versus 308±125 (p<0.001) and 225±140 versus 353±162 (p<0.001) for LV and RV segmentation, respectively. The intra- and interobserver agreement rates were excellent (ICC≥0.75) regardless of the UID. There was an excellent agreement between measurements derived from manual segmentation using different UIDs (ICC≥0.75), with few exceptions. Observers preferred the stylus. The study shows a significant reduction in segmentation time using the stylus, a subjective preference, and excellent agreement between the methods. • Using a stylus for MRI ventricular segmentation is faster compared to mouse • A stylus is easier to use and results in less fatigue • There is excellent agreement between stylus and mouse UIDs.

  1. Development of Tier 1 screening tool for soil and groundwater vulnerability assessment in Korea using classification algorithm in a neural network

    NASA Astrophysics Data System (ADS)

    Shin, K. H.; Kim, K. H.; Ki, S. J.; Lee, H. G.

    2017-12-01

    The vulnerability assessment tool at a Tier 1 level, although not often used for regulatory purposes, helps establish pollution prevention and management strategies in the areas of potential environmental concern such as soil and ground water. In this study, the Neural Network Pattern Recognition Tool embedded in MATLAB was used to allow the initial screening of soil and groundwater pollution based on data compiled across about 1000 previously contaminated sites in Korea. The input variables included a series of parameters which were tightly related to downward movement of water and contaminants through soil and ground water, whereas multiple classes were assigned to the sum of concentrations of major pollutants detected. Results showed that in accordance with diverse pollution indices for soil and ground water, pollution levels in both media were strongly modulated by site-specific characteristics such as intrinsic soil and other geologic properties, in addition to pollution sources and rainfall. However, classification accuracy was very sensitive to the number of classes defined as well as the types of the variables incorporated, requiring careful selection of input variables and output categories. Therefore, we believe that the proposed methodology can be used not only to modify existing pollution indices so that they are more suitable for addressing local vulnerability, but also to develop a unique assessment tool to support decision making based on locally or nationally available data. This study was funded by a grant from the GAIA project (2016000560002), Korea Environmental Industry & Technology Institute, Republic of Korea.

  2. Snowmelt Runoff Model in Japan

    NASA Technical Reports Server (NTRS)

    Ishihara, K.; Nishimura, Y.; Takeda, K.

    1985-01-01

    The preliminary Japanese snowmelt runoff model was modified so that all the input variables are of the antecedent days and the inflow of the previous day is taken into account. A few LANDSAT images obtained in the past were effectively used to verify and modify the depletion curve derived from the snow water equivalent distribution at maximum stage and the accumulated degree days at one representative point selected in the basin. Together with the depletion curve, the relationship between the basin-wide daily snowmelt amount and the air temperature at the point above is exhibited in nomograph form for the convenience of the model user. The runoff forecasting procedure is summarized.

  3. Innovations in Basic Flight Training for the Indonesian Air Force

    DTIC Science & Technology

    1990-12-01

    microeconomic theory that could approximate the optimum mix of training hours between an aircraft and simulator, and therefore improve cost effectiveness... The microeconomic theory being used is normally employed when showing production with two variable inputs. An example of variable inputs would be labor...

  4. A Bayesian approach to model structural error and input variability in groundwater modeling

    NASA Astrophysics Data System (ADS)

    Xu, T.; Valocchi, A. J.; Lin, Y. F. F.; Liang, F.

    2015-12-01

    Effective water resource management typically relies on numerical models to analyze groundwater flow and solute transport processes. Model structural error (due to simplification and/or misrepresentation of the "true" environmental system) and input forcing variability (which commonly arises since some inputs are uncontrolled or estimated with high uncertainty) are ubiquitous in groundwater models. Calibration that overlooks errors in model structure and input data can lead to biased parameter estimates and compromised predictions. We present a fully Bayesian approach for a complete assessment of uncertainty for spatially distributed groundwater models. The approach explicitly recognizes stochastic input and uses data-driven error models based on nonparametric kernel methods to account for model structural error. We employ exploratory data analysis to assist in specifying informative prior for error models to improve identifiability. The inference is facilitated by an efficient sampling algorithm based on DREAM-ZS and a parameter subspace multiple-try strategy to reduce the required number of forward simulations of the groundwater model. We demonstrate the Bayesian approach through a synthetic case study of surface-ground water interaction under changing pumping conditions. It is found that explicit treatment of errors in model structure and input data (groundwater pumping rate) has substantial impact on the posterior distribution of groundwater model parameters. Using error models reduces predictive bias caused by parameter compensation. In addition, input variability increases parametric and predictive uncertainty. The Bayesian approach allows for a comparison among the contributions from various error sources, which could inform future model improvement and data collection efforts on how to best direct resources towards reducing predictive uncertainty.

  5. Hourly predictive Levenberg-Marquardt ANN and multi linear regression models for predicting of dew point temperature

    NASA Astrophysics Data System (ADS)

    Zounemat-Kermani, Mohammad

    2012-08-01

    In this study, the ability of two models, multi linear regression (MLR) and a Levenberg-Marquardt (LM) feed-forward neural network, was examined to estimate the hourly dew point temperature. Dew point temperature is the temperature at which water vapor in the air condenses into liquid. This temperature can be useful in estimating meteorological variables such as fog, rain, snow, dew, and evapotranspiration and in investigating agronomical issues such as stomatal closure in plants. The availability of hourly records of climatic data (air temperature, relative humidity and pressure) which could be used to predict dew point temperature initiated the practice of modeling. Additionally, the wind vector (wind speed magnitude and direction) and a conceptual input of weather condition were employed as other input variables. Three quantitative standard statistical performance evaluation measures, i.e. the root mean squared error, mean absolute error, and absolute logarithmic Nash-Sutcliffe efficiency coefficient (|log(NS)|), were employed to evaluate the performances of the developed models. The results showed that applying wind vector and weather condition as input vectors along with meteorological variables could slightly increase the ANN and MLR predictive accuracy. The results also revealed that LM-NN was superior to the MLR model and the best performance was obtained by considering all potential input variables in terms of different evaluation criteria.
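
    The MLR baseline is straightforward to reproduce in outline. The sketch below fits an ordinary least-squares model to synthetic hourly records (the target is generated from the Magnus approximation purely to provide plausible numbers; it is not the study's data):

    ```python
    import numpy as np

    rng = np.random.default_rng(6)
    n = 500
    T = rng.uniform(0, 35, n)        # air temperature, deg C (synthetic)
    RH = rng.uniform(20, 100, n)     # relative humidity, % (synthetic)
    P = rng.uniform(990, 1030, n)    # pressure, hPa (synthetic)

    # Synthetic dew point via the Magnus approximation, plus measurement noise.
    g = np.log(RH / 100.0) + 17.62 * T / (243.12 + T)
    Td = 243.12 * g / (17.62 - g) + rng.normal(0, 0.3, n)

    X = np.column_stack([np.ones(n), T, RH, P])       # design matrix with intercept
    beta, *_ = np.linalg.lstsq(X, Td, rcond=None)     # MLR coefficients
    rmse = np.sqrt(np.mean((X @ beta - Td) ** 2))
    print("coefficients:", np.round(beta, 3), "| RMSE:", round(rmse, 2), "deg C")
    ```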

  6. Rotorcraft Optimization Tools: Incorporating Rotorcraft Design Codes into Multi-Disciplinary Design, Analysis, and Optimization

    NASA Technical Reports Server (NTRS)

    Meyn, Larry A.

    2018-01-01

    One of the goals of NASA's Revolutionary Vertical Lift Technology Project (RVLT) is to provide validated tools for multidisciplinary design, analysis and optimization (MDAO) of vertical lift vehicles. As part of this effort, the software package, RotorCraft Optimization Tools (RCOTOOLS), is being developed to facilitate incorporating key rotorcraft conceptual design codes into optimizations using the OpenMDAO multi-disciplinary optimization framework written in Python. RCOTOOLS, also written in Python, currently supports the incorporation of the NASA Design and Analysis of RotorCraft (NDARC) vehicle sizing tool and the Comprehensive Analytical Model of Rotorcraft Aerodynamics and Dynamics II (CAMRAD II) analysis tool into OpenMDAO-driven optimizations. Both of these tools use detailed, file-based inputs and outputs, so RCOTOOLS provides software wrappers to update input files with new design variable values, execute these codes and then extract specific response variable values from the file outputs. These wrappers are designed to be flexible and easy to use. RCOTOOLS also provides several utilities to aid in optimization model development, including Graphical User Interface (GUI) tools for browsing input and output files in order to identify text strings that are used to identify specific variables as optimization input and response variables. This paper provides an overview of RCOTOOLS and its use.
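
    The wrapper pattern described (update a templated input file, run the code, scrape a response from its output) is generic enough to sketch. Everything below is hypothetical and is not the RCOTOOLS or NDARC interface; the file names, the "{rotor_radius}" placeholder and the "GrossWeight" output tag are invented for illustration:

    ```python
    import re
    import subprocess
    from pathlib import Path

    def run_case(template, infile, outfile, design_vars, exe):
        """Fill an input template, run the analysis code, return one response."""
        text = Path(template).read_text()
        for name, value in design_vars.items():
            text = text.replace("{" + name + "}", str(value))
        Path(infile).write_text(text)

        subprocess.run([exe, infile], check=True)      # execute the external tool

        out = Path(outfile).read_text()
        m = re.search(r"GrossWeight\s*=\s*([0-9.Ee+-]+)", out)
        return float(m.group(1))

    # Hypothetical usage:
    # gw = run_case("ndarc.tpl", "case.inp", "case.out",
    #               {"rotor_radius": 7.5}, "./ndarc")
    ```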

  7. Probabilistic accident consequence uncertainty analysis: Dispersion and deposition uncertainty assessment, appendices A and B

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Harper, F.T.; Young, M.L.; Miller, L.A.

    Two new probabilistic accident consequence codes, MACCS and COSYMA, completed in 1990, estimate the risks presented by nuclear installations based on postulated frequencies and magnitudes of potential accidents. In 1991, the US Nuclear Regulatory Commission (NRC) and the Commission of the European Communities (CEC) began a joint uncertainty analysis of the two codes. The objective was to develop credible and traceable uncertainty distributions for the input variables of the codes. Expert elicitation, developed independently, was identified as the best technology available for developing a library of uncertainty distributions for the selected consequence parameters. The study was formulated jointly and was limited to the current code models and to physical quantities that could be measured in experiments. To validate the distributions generated for the wet deposition input variables, samples were taken from these distributions and propagated through the wet deposition code model along with the Gaussian plume model (GPM) implemented in the MACCS and COSYMA codes. Resulting distributions closely replicated the aggregated elicited wet deposition distributions. Project teams from the NRC and CEC cooperated successfully to develop and implement a unified process for the elaboration of uncertainty distributions on consequence code input parameters. Formal expert judgment elicitation proved valuable for synthesizing the best available information. Distributions on measurable atmospheric dispersion and deposition parameters were successfully elicited from experts involved in the many phenomenological areas of consequence analysis. This volume is the second of a three-volume document describing the project and contains two appendices describing the rationales for the dispersion and deposition data along with short biographies of the 16 experts who participated in the project.

  8. Soft Mixer Assignment in a Hierarchical Generative Model of Natural Scene Statistics

    PubMed Central

    Schwartz, Odelia; Sejnowski, Terrence J.; Dayan, Peter

    2010-01-01

    Gaussian scale mixture models offer a top-down description of signal generation that captures key bottom-up statistical characteristics of filter responses to images. However, the pattern of dependence among the filters for this class of models is prespecified. We propose a novel extension to the gaussian scale mixture model that learns the pattern of dependence from observed inputs and thereby induces a hierarchical representation of these inputs. Specifically, we propose that inputs are generated by gaussian variables (modeling local filter structure), multiplied by a mixer variable that is assigned probabilistically to each input from a set of possible mixers. We demonstrate inference of both components of the generative model, for synthesized data and for different classes of natural images, such as a generic ensemble and faces. For natural images, the mixer variable assignments show invariances resembling those of complex cells in visual cortex; the statistics of the gaussian components of the model are in accord with the outputs of divisive normalization models. We also show how our model helps interrelate a wide range of models of image statistics and cortical processing. PMID:16999575
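
    The generative step is easy to simulate. The sketch below uses a hard mixer assignment as a stand-in for the paper's probabilistic ("soft") assignment, just to show that gaussian-times-mixer inputs come out heavier-tailed than a plain gaussian:

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    n_inputs, dim, n_mixers = 1000, 8, 3

    mixers = rng.gamma(shape=2.0, scale=1.0, size=n_mixers)  # candidate mixer values
    assign = rng.integers(0, n_mixers, size=n_inputs)        # hard assignment stand-in
    g = rng.normal(size=(n_inputs, dim))                     # local gaussian structure

    x = g * mixers[assign][:, None]     # observed inputs: gaussian times mixer

    kurtosis = (x ** 4).mean() / (x ** 2).mean() ** 2
    print("kurtosis:", round(kurtosis, 2), "(3.0 for a pure gaussian)")
    ```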

  9. Does linguistic input play the same role in language learning for children with and without early brain injury?

    PubMed

    Rowe, Meredith L; Levine, Susan C; Fisher, Joan A; Goldin-Meadow, Susan

    2009-01-01

    Children with unilateral pre- or perinatal brain injury (BI) show remarkable plasticity for language learning. Previous work highlights the important role that lesion characteristics play in explaining individual variation in plasticity in the language development of children with BI. The current study examines whether the linguistic input that children with BI receive from their caregivers also contributes to this early plasticity, and whether linguistic input plays a similar role in children with BI as it does in typically developing (TD) children. Growth in vocabulary and syntactic production is modeled for 80 children (53 TD, 27 BI) between 14 and 46 months. Findings indicate that caregiver input is an equally potent predictor of vocabulary growth in children with BI and in TD children. In contrast, input is a more potent predictor of syntactic growth for children with BI than for TD children. Controlling for input, lesion characteristics (lesion size, type, seizure history) also affect the language trajectories of children with BI. Thus, findings illustrate how both variability in the environment (linguistic input) and variability in the organism (lesion characteristics) work together to contribute to plasticity in language learning.

  10. Variable input observer for state estimation of high-rate dynamics

    NASA Astrophysics Data System (ADS)

    Hong, Jonathan; Cao, Liang; Laflamme, Simon; Dodson, Jacob

    2017-04-01

    High-rate systems operating in the 10 μs to 10 ms timescale are likely to experience damaging effects due to rapid environmental changes (e.g., turbulence, ballistic impact). Some of these systems could benefit from real-time state estimation to enable their full potential. Examples of such systems include blast mitigation strategies, automotive airbag technologies, and hypersonic vehicles. Particular challenges in high-rate state estimation include: 1) complex time-varying nonlinearities of the system (e.g. noise, uncertainty, and disturbance); 2) rapid environmental changes; 3) the requirement of a high convergence rate. Here, we propose using a Variable Input Observer (VIO) concept to vary the input space as the event unfolds. When systems experience high-rate dynamics, rapid changes in the system occur. To investigate the VIO's potential, a VIO-based neuro-observer is constructed and studied using experimental data collected from a laboratory impact test. Results demonstrate that the input space is unique to different impact conditions, and that adjusting the input space throughout the dynamic event produces better estimates than a traditional fixed-input-space strategy.

  11. Spatial variability in acoustic backscatter as an indicator of tissue homogenate production in pulsed cavitational ultrasound therapy.

    PubMed

    Parsons, Jessica E; Cain, Charles A; Fowlkes, J Brian

    2007-03-01

    Spatial variability in acoustic backscatter is investigated as a potential feedback metric for assessment of lesion morphology during cavitation-mediated mechanical tissue disruption ("histotripsy"). A 750-kHz annular array was aligned confocally with a 4.5 MHz passive backscatter receiver during ex vivo insonation of porcine myocardium. Various exposure conditions were used to elicit a range of damage morphologies and backscatter characteristics [pulse duration = 14 µs, pulse repetition frequency (PRF) = 0.07-3.1 kHz, average I(SPPA) = 22-44 kW/cm2]. Variability in backscatter spatial localization was quantified by tracking the lag required to achieve peak correlation between sequential RF A-lines received. Mean spatial variability was observed to be significantly higher when damage morphology consisted of mechanically disrupted tissue homogenate versus mechanically intact coagulation necrosis (2.35 +/- 1.59 mm versus 0.067 +/- 0.054 mm, p < 0.025). Statistics from these variability distributions were used as the basis for selecting a threshold variability level to identify the onset of homogenate formation via an abrupt, sustained increase in spatially dynamic backscatter activity. Specific indices indicative of the state of the homogenization process were quantified as a function of acoustic input conditions. The prevalence of backscatter spatial variability was observed to scale with the amount of homogenate produced for various PRFs and acoustic intensities.
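
    The lag-tracking metric itself is a short computation. The sketch below estimates, for a pair of synthetic RF A-lines, the lag that maximizes their cross-correlation (the 0.05 mm sample spacing is an assumed value for illustration):

    ```python
    import numpy as np

    def peak_lag_mm(a, b, dx_mm):
        """Lag (mm) that maximizes the cross-correlation of two A-lines."""
        a = (a - a.mean()) / a.std()
        b = (b - b.mean()) / b.std()
        xc = np.correlate(a, b, mode="full")
        lag_samples = int(np.argmax(xc)) - (len(a) - 1)
        return lag_samples * dx_mm

    rng = np.random.default_rng(4)
    line1 = rng.normal(size=512)
    line2 = np.roll(line1, 12) + 0.1 * rng.normal(size=512)  # shifted, noisy copy

    print(f"estimated peak-correlation lag: {peak_lag_mm(line1, line2, 0.05):+.2f} mm")
    ```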

  12. A multi-fidelity analysis selection method using a constrained discrete optimization formulation

    NASA Astrophysics Data System (ADS)

    Stults, Ian C.

    The purpose of this research is to develop a method for selecting the fidelity of contributing analyses in computer simulations. Model uncertainty is a significant component of result validity, yet it is neglected in most conceptual design studies. When it is considered, it is treated in only a limited fashion, which calls into question the validity of selections made based on these results. Neglecting model uncertainty can potentially cause costly redesigns of concepts later in the design process or can even cause program cancellation. If one were instead to quantify the model uncertainty in the tools being used, and to use this information when selecting the tools for a contributing analysis, studies could be conducted more efficiently and trust in results could be quantified. Existing methods for doing this are generally not rigorous or traceable, and in many cases the improvement and additional time spent performing enhanced calculations are washed out by less accurate calculations performed downstream. The intent of this research is to resolve this issue by providing a method which minimizes the time spent conducting computer simulations while meeting accuracy and concept resolution requirements for results. In many conceptual design programs, only limited data are available for quantifying model uncertainty. Because of this data sparsity, traditional probabilistic means of quantifying uncertainty should be reconsidered. This research proposes to instead quantify model uncertainty using an evidence theory formulation (also referred to as Dempster-Shafer theory) in lieu of the traditional probabilistic approach. Specific weaknesses in using evidence theory for quantifying model uncertainty are identified and addressed for the purposes of the Fidelity Selection Problem. A series of experiments was conducted to address these weaknesses using n-dimensional optimization test functions. These experiments found that model uncertainty present in analyses with 4 or fewer input variables could be effectively quantified using a strategic distribution creation method; if more than 4 input variables exist, a Frontier Finding Particle Swarm Optimization should instead be used. Once model uncertainty in contributing analysis code choices has been quantified, a selection method is required to determine which of these choices should be used in simulations. Because much of the selection done for engineering problems is driven by the physics of the problem, such problems are poor candidates for testing the true fitness of a selection method: the variability of moderate- and high-dimensional problems can often be reduced to only a few dimensions, and scalability often cannot be easily addressed. For these reasons a simple academic function was created for the uncertainty quantification, and a canonical form of the Fidelity Selection Problem (FSP) was created. Fifteen best- and worst-case scenarios were identified in an effort to challenge the candidate selection methods, both with respect to the characteristics of the tradeoff between time cost and model uncertainty and with respect to the stringency of the constraints and problem dimensionality. The results from this experiment show that a Genetic Algorithm (GA) was able to consistently find the correct answer, but under certain circumstances a discrete form of Particle Swarm Optimization (PSO) was able to find the correct answer more quickly. To better illustrate how the uncertainty quantification and discrete optimization might be conducted for a "real world" problem, an illustrative example was conducted using gas turbine engines.
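
    For readers unfamiliar with the evidence-theory machinery the abstract relies on, the sketch below implements Dempster's rule of combination, the basic operator for fusing two bodies of evidence. The frame of discernment ("low"/"high" model error) and the mass assignments are invented for illustration; only the combination rule itself is standard Dempster-Shafer theory.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Combine two mass functions (dicts mapping frozenset -> mass)
    using Dempster's rule of combination."""
    combined = {}
    conflict = 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb   # mass falling on the empty set
    if conflict >= 1.0:
        raise ValueError("total conflict: sources are incompatible")
    return {s: w / (1.0 - conflict) for s, w in combined.items()}

# Two sources assign belief over hypotheses about model error (illustrative)
m1 = {frozenset({"low"}): 0.6, frozenset({"low", "high"}): 0.4}
m2 = {frozenset({"low"}): 0.3, frozenset({"high"}): 0.2,
      frozenset({"low", "high"}): 0.5}
print(dempster_combine(m1, m2))
```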

  13. The minimal amount of starting DNA for Agilent’s hybrid capture-based targeted massively parallel sequencing

    PubMed Central

    Chung, Jongsuk; Son, Dae-Soon; Jeon, Hyo-Jeong; Kim, Kyoung-Mee; Park, Gahee; Ryu, Gyu Ha; Park, Woong-Yang; Park, Donghyun

    2016-01-01

    Targeted capture massively parallel sequencing is increasingly being used in clinical settings, and as costs continue to decline, use of this technology may become routine in health care. However, a limited amount of tissue has often been a challenge in meeting quality requirements. To offer a practical guideline for the minimum amount of input DNA for targeted sequencing, we optimized and evaluated the performance of targeted sequencing depending on the input DNA amount. First, using various amounts of input DNA, we compared commercially available library construction kits and selected Agilent’s SureSelect-XT and KAPA Biosystems’ Hyper Prep kits as the kits most compatible with targeted deep sequencing using Agilent’s SureSelect custom capture. Then, we optimized the adapter ligation conditions of the Hyper Prep kit to improve library construction efficiency and adapted multiplexed hybrid selection to reduce the cost of sequencing. In this study, we systematically evaluated the performance of the optimized protocol depending on the amount of input DNA, ranging from 6.25 to 200 ng, suggesting the minimal input DNA amounts based on coverage depths required for specific applications. PMID:27220682

  14. Optimal input selection for neural machine interfaces predicting multiple non-explicit outputs.

    PubMed

    Krepkovich, Eileen T; Perreault, Eric J

    2008-01-01

    This study implemented a novel algorithm that optimally selects inputs for neural machine interface (NMI) devices intended to control multiple outputs and evaluated its performance on systems lacking explicit output. NMIs often incorporate signals from multiple physiological sources and provide predictions for multidimensional control, leading to multiple-input multiple-output systems. Further, NMIs often are used with subjects who have motor disabilities and thus lack explicit motor outputs. Our algorithm was tested on simulated multiple-input multiple-output systems and on electromyogram and kinematic data collected from healthy subjects performing arm reaches. Effects of output noise in simulated systems indicated that the algorithm could be useful for systems with poor estimates of the output states, as is true for systems lacking explicit motor output. To test efficacy on physiological data, selection was performed using inputs from one subject and outputs from a different subject. Selection was effective for these cases, again indicating that this algorithm will be useful for predictions where there is no motor output, as often is the case for disabled subjects. Further, prediction results generalized for different movement types not used for estimation. These results demonstrate the efficacy of this algorithm for the development of neural machine interfaces.

  15. Robust integral variable structure controller and pulse-width pulse-frequency modulated input shaper design for flexible spacecraft with mismatched uncertainty/disturbance.

    PubMed

    Hu, Qinglei

    2007-10-01

    This paper presents a dual-stage control system design method for flexible spacecraft attitude maneuvering control using on-off thrusters and active vibration control using an input shaper. In this design approach, the attitude control system and vibration suppression were designed separately using lower-order models. As a stepping stone, an integral variable structure controller, designed under the assumption that the upper bounds of the mismatched lumped perturbation are known, ensures exponential convergence of the attitude angle and angular velocity in the presence of bounded uncertainty/disturbances. To reconstruct estimates of the system states for use in a full-information variable structure control law, an asymptotic variable structure observer is also employed. In addition, the thruster output is modulated in pulse-width pulse-frequency so that the output profile is similar to continuous control histories. For actively suppressing the induced vibration, the input shaping technique is used to modify the existing command so that the command itself excites less vibration; the technique requires only information about the vibration frequency and damping of the closed-loop system. The rationale behind this hybrid control scheme is that the integral variable structure controller can achieve good precision pointing, even in the presence of uncertainties/disturbances, whereas the shaped input attenuator is applied to actively suppress the undesirable vibrations excited by rapid maneuvers. Simulation results for the spacecraft model show precise attitude control and vibration suppression.
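
    Because the shaper needs only the closed-loop vibration frequency and damping ratio, its design reduces to a short closed-form computation. Below is a minimal sketch of a two-impulse zero-vibration (ZV) shaper, a common input-shaping variant; the paper does not state which shaper it uses, so the choice of ZV and all numerical values are illustrative.

```python
import numpy as np

def zv_shaper(wn, zeta):
    """Two-impulse zero-vibration (ZV) shaper from the closed-loop
    natural frequency wn (rad/s) and damping ratio zeta."""
    wd = wn * np.sqrt(1.0 - zeta ** 2)        # damped natural frequency
    K = np.exp(-zeta * np.pi / np.sqrt(1.0 - zeta ** 2))
    amps = np.array([1.0, K]) / (1.0 + K)     # impulse amplitudes, sum to 1
    times = np.array([0.0, np.pi / wd])       # second impulse at half the damped period
    return amps, times

def shape_command(u, dt, amps, times):
    """Convolve a sampled command u with the shaper impulse sequence."""
    shaper = np.zeros(int(round(times[-1] / dt)) + 1)
    for a, t in zip(amps, times):
        shaper[int(round(t / dt))] += a
    return np.convolve(u, shaper)

amps, times = zv_shaper(wn=2.0, zeta=0.05)    # illustrative modal parameters
shaped_step = shape_command(np.ones(500), 0.01, amps, times)
print(amps, times, shaped_step[:3])
```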

  16. A waste characterisation procedure for ADM1 implementation based on degradation kinetics.

    PubMed

    Girault, R; Bridoux, G; Nauleau, F; Poullain, C; Buffet, J; Steyer, J-P; Sadowski, A G; Béline, F

    2012-09-01

    In this study, a procedure accounting for degradation kinetics was developed to split the total COD of a substrate into each input state variable required for Anaerobic Digestion Model n°1. The procedure is based on the combination of batch experimental degradation tests ("anaerobic respirometry") and numerical interpretation of the results obtained (optimisation of the ADM1 input state variable set). The effects of the main operating parameters, such as the substrate to inoculum ratio in batch experiments and the origin of the inoculum, were investigated. Combined with biochemical fractionation of the total COD of substrates, this method enabled determination of an ADM1-consistent input state variable set for each substrate with affordable identifiability. The substrate to inoculum ratio in the batch experiments and the origin of the inoculum influenced input state variables. However, based on results modelled for a CSTR fed with the substrate concerned, these effects were not significant. Indeed, if the optimal ranges of these operational parameters are respected, uncertainty in COD fractionation is mainly limited to temporal variability of the properties of the substrates. As the method is based on kinetics and is easy to implement for a wide range of substrates, it is a very promising way to numerically predict the effect of design parameters on the efficiency of an anaerobic CSTR. This method thus promotes the use of modelling for the design and optimisation of anaerobic processes. Copyright © 2012 Elsevier Ltd. All rights reserved.

  17. Multi-indication Pharmacotherapeutic Multicriteria Decision Analytic Model for the Comparative Formulary Inclusion of Proton Pump Inhibitors in Qatar.

    PubMed

    Al-Badriyeh, Daoud; Alabbadi, Ibrahim; Fahey, Michael; Al-Khal, Abdullatif; Zaidan, Manal

    2016-05-01

    The formulary inclusion of proton pump inhibitors (PPIs) in the government hospital health services in Qatar is not comparative or restricted. Requests to include a PPI in the formulary are typically accepted if evidence of efficacy and tolerability is presented. There are no literature reports of a PPI scoring model that is based on comparatively weighted multiple indications, and no reports of PPI selection in Qatar or the Middle East. This study aims to compare first-line use of the PPIs available in Qatar. The economic effect of the study recommendations was also quantified. A comparative, evidence-based multicriteria decision analysis (MCDA) model was constructed around the multiple indications and pharmacotherapeutic criteria of PPIs. Literature and an expert panel informed the selection criteria, and input from the relevant local clinician population steered their relative weighting. Comparatively scored PPIs exceeding a defined score threshold were recommended for selection. Weighted model scores were successfully developed, with 95% CI and a 5% margin of error. The model comprised 7 main criteria and 38 subcriteria. The main criteria are indication, dosage frequency, treatment duration, best published evidence, available formulations, drug interactions, and pharmacokinetic and pharmacodynamic properties. The indication criteria received the most weight. Esomeprazole and rabeprazole were suggested as formulary options, followed by lansoprazole for nonformulary use. The estimated effect of the study recommendations was up to a 15.3% reduction in annual PPI expenditure. Robustness of the study conclusions against variability in study inputs was confirmed via sensitivity analyses. The implementation of a locally developed, PPI-specific, comparative MCDA scoring model, based on multiple weighted indications and criteria, into Qatari formulary selection practices is a successful evidence-based cost-cutting exercise. Esomeprazole and rabeprazole should be the first-line choice from among the PPIs available at the Qatari government hospital health services. Copyright © 2016 Elsevier HS Journals, Inc. All rights reserved.
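
    The scoring logic of such an MCDA model is essentially a weighted sum over criteria compared against an inclusion threshold. The sketch below illustrates the mechanics only; the weights, scores, and threshold are invented and are not the study's actual values (the seven main criterion names follow the abstract).

```python
# Illustrative weights (sum to 1) and 0-10 scores; not the study's values.
weights = {"indication": 0.35, "dosage_frequency": 0.10, "duration": 0.10,
           "evidence": 0.20, "formulations": 0.10, "interactions": 0.10,
           "pk_pd": 0.05}

scores = {
    "esomeprazole": {"indication": 9, "dosage_frequency": 8, "duration": 7,
                     "evidence": 9, "formulations": 8, "interactions": 6, "pk_pd": 8},
    "rabeprazole":  {"indication": 8, "dosage_frequency": 8, "duration": 7,
                     "evidence": 8, "formulations": 7, "interactions": 8, "pk_pd": 7},
    "lansoprazole": {"indication": 7, "dosage_frequency": 7, "duration": 7,
                     "evidence": 7, "formulations": 8, "interactions": 7, "pk_pd": 7},
}

threshold = 7.5   # formulary inclusion cut-off (illustrative)
for drug, crit in scores.items():
    total = sum(weights[c] * v for c, v in crit.items())
    verdict = "formulary" if total >= threshold else "non-formulary"
    print(f"{drug}: {total:.2f} ({verdict})")
```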

  18. Neuromorphic VLSI Models of Selective Attention: From Single Chip Vision Sensors to Multi-chip Systems

    PubMed Central

    Indiveri, Giacomo

    2008-01-01

    Biological organisms perform complex selective attention operations continuously and effortlessly. These operations allow them to quickly determine the motor actions to take in response to combinations of external stimuli and internal states, and to pay attention to subsets of sensory inputs while suppressing non-salient ones. Selective attention strategies are extremely effective in both natural and artificial systems which have to cope with large amounts of input data and have limited computational resources. One of the main computational primitives used to perform these selection operations is the Winner-Take-All (WTA) network. These networks are formed by arrays of coupled computational nodes that selectively amplify the strongest input signals and suppress the weaker ones. Neuromorphic circuits are an optimal medium for constructing WTA networks and for implementing efficient hardware models of selective attention systems. In this paper we present an overview of selective attention systems based on neuromorphic WTA circuits, ranging from single-chip vision sensors for selecting and tracking the position of salient features to multi-chip systems implementing saliency-map-based models of selective attention. PMID:27873818

  19. Neuromorphic VLSI Models of Selective Attention: From Single Chip Vision Sensors to Multi-chip Systems.

    PubMed

    Indiveri, Giacomo

    2008-09-03

    Biological organisms perform complex selective attention operations continuously and effortlessly. These operations allow them to quickly determine the motor actions to take in response to combinations of external stimuli and internal states, and to pay attention to subsets of sensory inputs while suppressing non-salient ones. Selective attention strategies are extremely effective in both natural and artificial systems which have to cope with large amounts of input data and have limited computational resources. One of the main computational primitives used to perform these selection operations is the Winner-Take-All (WTA) network. These networks are formed by arrays of coupled computational nodes that selectively amplify the strongest input signals and suppress the weaker ones. Neuromorphic circuits are an optimal medium for constructing WTA networks and for implementing efficient hardware models of selective attention systems. In this paper we present an overview of selective attention systems based on neuromorphic WTA circuits, ranging from single-chip vision sensors for selecting and tracking the position of salient features to multi-chip systems implementing saliency-map-based models of selective attention.
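
    As a purely algorithmic illustration of the WTA selection primitive (not of the analog circuit dynamics), the toy sketch below silences units through iterated competition until a single winner remains.

```python
import numpy as np

def winner_take_all(inputs):
    """Hard WTA via iterated competition: at each step, units whose activity
    falls below the mean of the currently active set are silenced; the
    strongest input always survives."""
    a = np.asarray(inputs, dtype=float)
    active = a > 0
    while active.sum() > 1:
        mean_act = a[active].mean()
        losers = active & (a < mean_act)
        if not losers.any():          # exact tie among remaining units
            break
        active &= ~losers
    out = np.zeros_like(a)
    out[active] = a[active]           # winner keeps its amplified activity
    return out

print(winner_take_all([0.3, 1.0, 0.7, 0.2]))   # -> [0. 1. 0. 0.]
```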

  20. Variable Delay Element For Jitter Control In High Speed Data Links

    DOEpatents

    Livolsi, Robert R.

    2002-06-11

    A circuit and method for decreasing the amount of jitter present at the receiver input of high-speed data links. A driver circuit for input from a high-speed data link comprises a logic circuit having a first section (1) which provides data latches; a second section (2) which provides a circuit that generates a pre-distorted output and compensates for level-dependent jitter, having an OR function element and a NOR function element, each of which is coupled to two inputs and to a variable delay element that provides a bi-modal delay for pulse-width pre-distortion; a third section (3) which provides a muxing circuit; and a fourth section (4) for clock distribution in the driver circuit. A fifth section is used for logic testing the driver circuit.

  1. A fast BK-type KCa current acts as a postsynaptic modulator of temporal selectivity for communication signals.

    PubMed

    Kohashi, Tsunehiko; Carlson, Bruce A

    2014-01-01

    Temporal patterns of spiking often convey behaviorally relevant information. Various synaptic mechanisms and intrinsic membrane properties can influence neuronal selectivity to temporal patterns of input. However, little is known about how synaptic mechanisms and intrinsic properties together determine the temporal selectivity of neuronal output. We tackled this question by recording from midbrain electrosensory neurons in mormyrid fish, in which the processing of temporal intervals between communication signals can be studied in a reduced in vitro preparation. Mormyrids communicate by varying interpulse intervals (IPIs) between electric pulses. Within the midbrain posterior exterolateral nucleus (ELp), the temporal patterns of afferent spike trains are filtered to establish single-neuron IPI tuning. We performed whole-cell recording from ELp neurons in a whole-brain preparation and examined the relationship between intrinsic excitability and IPI tuning. We found that spike frequency adaptation of ELp neurons was highly variable. Postsynaptic potentials (PSPs) of strongly adapting (phasic) neurons were more sharply tuned to IPIs than weakly adapting (tonic) neurons. Further, the synaptic filtering of IPIs by tonic neurons was more faithfully converted into variation in spiking output, particularly at short IPIs. Pharmacological manipulation under current- and voltage-clamp revealed that tonic firing is mediated by a fast, large-conductance Ca(2+)-activated K(+) (KCa) current (BK) that speeds up action potential repolarization. These results suggest that BK currents can shape the temporal filtering of sensory inputs by modifying both synaptic responses and PSP-to-spike conversion. Slow SK-type KCa currents have previously been implicated in temporal processing. Thus, both fast and slow KCa currents can fine-tune temporal selectivity.

  2. Climate-Smart Seedlot Selection Tool: Reforestation and Restoration for the 21st Century

    NASA Astrophysics Data System (ADS)

    Stevenson-Molnar, N.; Howe, G.; St Clair, B.; Bachelet, D. M.; Ward, B. C.

    2017-12-01

    Local populations of trees are generally adapted to their local climates. Historically, this has meant that local seed zones based on geography and elevation have been used to guide restoration and reforestation. In the face of climate change, seeds from local sources will likely be subjected to climates significantly different from those to which they are currently adapted. The Seedlot Selection Tool (SST) offers a new approach for matching seed sources with planting sites based on future climate scenarios. The SST is a mapping program designed for forest managers and researchers. Users can use the tool to find seedlots for a given planting site, or to find potential planting sites for a given seedlot. Users select a location (seedlot or planting site), climate scenarios (a climate to which seeds are adapted, and a current or future climate scenario), climate variables, and transfer limits (the maximum climatic distance that is considered a suitable match). Transfer limits are provided by the user, or derived from the range of values within a geographically defined seed zone. The tool calculates scores across the landscape based on an area's similarity, in a multivariate climate space, to the input. Users can explore results on an interactive map, and export PDF and PowerPoint reports, including a map of the results along with the inputs used. Planned future improvements include support for non-forest use cases and the ability to download results as GeoTIFF data. The Seedlot Selection Tool and its source code are available online at https://seedlotselectiontool.org. It is co-developed by the United States Forest Service, Oregon State University, and the Conservation Biology Institute.
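
    One plausible reading of "similarity in a multivariate climate space, scaled by transfer limits" is sketched below; the actual SST scoring formula may differ, and all variable names and numbers are invented.

```python
# Hedged sketch of a transfer-limit-scaled similarity score: the climatic
# distance between a seedlot's source climate and each candidate cell is
# scaled by the user's transfer limits; cells within the limits score > 0.
import numpy as np

variables = ["mean_annual_temp_C", "annual_precip_mm"]   # illustrative
seedlot_climate = np.array([8.5, 1200.0])
transfer_limits = np.array([2.0, 300.0])                 # max acceptable shift

# Future climate of four candidate planting cells (rows)
cells = np.array([
    [8.0, 1150.0],
    [9.8, 1350.0],
    [11.0, 1500.0],
    [8.7, 1900.0],
])

scaled = (cells - seedlot_climate) / transfer_limits     # per-variable distance
distance = np.sqrt((scaled ** 2).mean(axis=1))           # multivariate distance
score = np.clip(1.0 - distance, 0.0, None) * 100         # 100 = perfect match
print(score.round(1))
```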

  3. Spatio-temporal Dynamics of Referential and Inferential Naming: Different Brain and Cognitive Operations to Lexical Selection.

    PubMed

    Fargier, Raphaël; Laganaro, Marina

    2017-03-01

    Picture naming tasks are largely used to elicit the production of specific words and sentences in psycholinguistic and neuroimaging research. However, the generation of lexical concepts from a visual input is clearly not the exclusive way speech production is triggered. In inferential speech encoding, the concept is not provided from a visual input, but is elaborated through semantic and/or episodic associations. It is therefore likely that the cognitive operations leading to lexical selection and word encoding are different in inferential and referential expressive language. In particular, in picture naming lexical selection might ensue from a simple association between a perceptual visual representation and a word with minimal semantic processes, whereas richer semantic associations are involved in lexical retrieval in inferential situations. Here we address this hypothesis by analyzing ERP correlates during word production in a referential and an inferential task. The participants produced the same words elicited from pictures or from short written definitions. The two tasks displayed similar electrophysiological patterns only in the time-period preceding the verbal response. In the stimulus-locked ERPs, waveform amplitudes and periods of stable global electrophysiological patterns differed across tasks after the P100 component and until 400-500 ms, suggesting the involvement of different, task-specific neural networks. Based on the analysis of the time-windows affected by specific semantic and lexical variables in each task, we conclude that lexical selection is underpinned by a different set of conceptual and brain processes, with semantic processes clearly preceding word retrieval in naming from definition, whereas semantic information is enriched in parallel with word retrieval in picture naming.

  4. System and method for controlling ammonia levels in a selective catalytic reduction catalyst using a nitrogen oxide sensor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

    A system according to the principles of the present disclosure includes an air/fuel ratio determination module and an emission level determination module. The air/fuel ratio determination module determines an air/fuel ratio based on input from an air/fuel ratio sensor positioned downstream from a three-way catalyst that is positioned upstream from a selective catalytic reduction (SCR) catalyst. The emission level determination module selects one of a predetermined value and an input based on the air/fuel ratio. The input is received from a nitrogen oxide sensor positioned downstream from the three-way catalyst. The emission level determination module determines an ammonia level based on the one of the predetermined value and the input received from the nitrogen oxide sensor.

  5. Applying decision tree for identification of a low risk population for type 2 diabetes. Tehran Lipid and Glucose Study.

    PubMed

    Ramezankhani, Azra; Pournik, Omid; Shahrabi, Jamal; Khalili, Davood; Azizi, Fereidoun; Hadaegh, Farzad

    2014-09-01

    The aim of this study was to create a prediction model using a data mining approach to identify individuals at low risk for incidence of type 2 diabetes, using the Tehran Lipid and Glucose Study (TLGS) database. For a population of 6647 individuals without diabetes, aged ≥20 years, followed for 12 years, a prediction model was developed using classification by the decision tree technique. Seven hundred and twenty-nine (11%) diabetes cases occurred during the follow-up. Predictor variables were selected from demographic characteristics, smoking status, medical and drug history, and laboratory measures. We developed the predictive models by decision tree using 60 input variables and one output variable. The overall classification accuracy was 90.5%, with 31.1% sensitivity and 97.9% specificity; for the subjects without diabetes, precision and F-measure were 92% and 0.95, respectively. The identified variables included fasting plasma glucose, body mass index, triglycerides, mean arterial blood pressure, family history of diabetes, educational level and job status. In conclusion, decision tree analysis, using routine demographic, clinical, anthropometric and laboratory measurements, created a simple tool to predict individuals at low risk for type 2 diabetes. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
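
    A minimal sketch of the screening idea using scikit-learn follows. The feature names mirror the abstract, but the data, the risk function, and the tree settings are synthetic placeholders, not the TLGS data or the study's model.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
n = 5000
X = np.column_stack([
    rng.normal(95, 15, n),    # fasting plasma glucose (mg/dL)
    rng.normal(27, 5, n),     # body mass index (kg/m^2)
    rng.normal(150, 80, n),   # triglycerides (mg/dL)
    rng.normal(90, 10, n),    # mean arterial blood pressure (mmHg)
    rng.integers(0, 2, n),    # family history of diabetes (0/1)
])
# Synthetic incidence: glucose and BMI drive risk (illustrative only),
# scaled so roughly one in ten subjects converts.
risk = 1.0 / (1.0 + np.exp(-(0.08 * (X[:, 0] - 100) + 0.15 * (X[:, 1] - 30))))
y = (rng.random(n) < 0.25 * risk).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
tree = DecisionTreeClassifier(max_depth=4, class_weight="balanced")
tree.fit(X_tr, y_tr)
print("test accuracy:", tree.score(X_te, y_te))
```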

  6. A liquid lens switching-based motionless variable fiber-optic delay line

    NASA Astrophysics Data System (ADS)

    Khwaja, Tariq Shamim; Reza, Syed Azer; Sheikh, Mumtaz

    2018-05-01

    We present a Variable Fiber-Optic Delay Line (VFODL) module capable of imparting long variable delays by switching an input optical/RF signal between Single Mode Fiber (SMF) patch cords of different lengths through a pair of Electronically Controlled Tunable Lenses (ECTLs) resulting in a polarization-independent operation. Depending on intended application, the lengths of the SMFs can be chosen accordingly to achieve the desired VFODL operation dynamic range. If so desired, the state of the input signal polarization can be preserved with the use of commercially available polarization-independent ECTLs along with polarization-maintaining SMFs (PM-SMFs), resulting in an output polarization that is identical to the input. An ECTL-based design also improves power consumption and repeatability. The delay switching mechanism is electronically-controlled, involves no bulk moving parts, and can be fully-automated. The VFODL module is compact due to the use of small optical components and SMFs that can be packaged compactly.

  7. Sensitivity analysis and nonlinearity assessment of steam cracking furnace process

    NASA Astrophysics Data System (ADS)

    Rosli, M. N.; Sudibyo; Aziz, N.

    2017-11-01

    In this paper, a sensitivity analysis and nonlinearity assessment of the steam cracking furnace process are presented. For the sensitivity analysis, the fractional factorial design method is employed to analyze the effect of the input parameters, which consist of four manipulated variables and two disturbance variables, on the output variables, and to identify the interactions between parameters. The result of the factorial design is used as a screening step to reduce the number of parameters and, subsequently, the complexity of the model. It shows that, of the six input parameters, four are significant. After the screening is completed, a step test is performed on the significant input parameters to assess the degree of nonlinearity of the system. The result shows that the system is highly nonlinear with respect to changes in the air-to-fuel ratio (AFR) and feed composition.
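
    The main-effect estimates at the core of such a screening study can be computed directly from a two-level design. The sketch below uses a full 2^4 factorial for simplicity (a fractional design would run only a subset of these rows), with a made-up response function standing in for the furnace simulation.

```python
import itertools
import numpy as np

factors = ["AFR", "fuel_flow", "feed_flow", "feed_comp"]   # illustrative names

def response(x):
    """Placeholder process model; the interaction term mimics AFR/composition coupling."""
    afr, fuel, feed, comp = x
    return 3.0 * afr + 0.2 * fuel + 0.1 * feed + 2.5 * comp + 0.5 * afr * comp

# All 2^4 runs at coded levels -1/+1
levels = np.array(list(itertools.product([-1, 1], repeat=len(factors))))
y = np.array([response(run) for run in levels])

# Main effect of a factor = mean response at +1 minus mean response at -1
for j, name in enumerate(factors):
    effect = y[levels[:, j] == 1].mean() - y[levels[:, j] == -1].mean()
    print(f"{name}: {effect:+.2f}")
```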

  8. Quantitative structure-activity relationship study of P2X7 receptor inhibitors using combination of principal component analysis and artificial intelligence methods.

    PubMed

    Ahmadi, Mehdi; Shahlaei, Mohsen

    2015-01-01

    P2X7 antagonist activity for a set of 49 molecules of the P2X7 receptor antagonists, derivatives of purine, was modeled with the aid of chemometric and artificial intelligence techniques. The activity of these compounds was estimated by means of combination of principal component analysis (PCA), as a well-known data reduction method, genetic algorithm (GA), as a variable selection technique, and artificial neural network (ANN), as a non-linear modeling method. First, a linear regression, combined with PCA, (principal component regression) was operated to model the structure-activity relationships, and afterwards a combination of PCA and ANN algorithm was employed to accurately predict the biological activity of the P2X7 antagonist. PCA preserves as much of the information as possible contained in the original data set. Seven most important PC's to the studied activity were selected as the inputs of ANN box by an efficient variable selection method, GA. The best computational neural network model was a fully-connected, feed-forward model with 7-7-1 architecture. The developed ANN model was fully evaluated by different validation techniques, including internal and external validation, and chemical applicability domain. All validations showed that the constructed quantitative structure-activity relationship model suggested is robust and satisfactory.

  9. Quantitative structure–activity relationship study of P2X7 receptor inhibitors using combination of principal component analysis and artificial intelligence methods

    PubMed Central

    Ahmadi, Mehdi; Shahlaei, Mohsen

    2015-01-01

    P2X7 antagonist activity for a set of 49 molecules of the P2X7 receptor antagonists, derivatives of purine, was modeled with the aid of chemometric and artificial intelligence techniques. The activity of these compounds was estimated by means of combination of principal component analysis (PCA), as a well-known data reduction method, genetic algorithm (GA), as a variable selection technique, and artificial neural network (ANN), as a non-linear modeling method. First, a linear regression, combined with PCA, (principal component regression) was operated to model the structure–activity relationships, and afterwards a combination of PCA and ANN algorithm was employed to accurately predict the biological activity of the P2X7 antagonist. PCA preserves as much of the information as possible contained in the original data set. Seven most important PC's to the studied activity were selected as the inputs of ANN box by an efficient variable selection method, GA. The best computational neural network model was a fully-connected, feed-forward model with 7−7−1 architecture. The developed ANN model was fully evaluated by different validation techniques, including internal and external validation, and chemical applicability domain. All validations showed that the constructed quantitative structure–activity relationship model suggested is robust and satisfactory. PMID:26600858
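
    A rough sketch of the PCA-to-ANN pipeline both records describe is given below. For brevity the GA-based selection of principal components is replaced by simply keeping the first seven PCs, and the descriptor matrix and activities are random stand-ins generated from a latent structure so that the PCs carry signal.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
Z = rng.normal(size=(49, 7))                    # latent structure in the descriptors
X = Z @ rng.normal(size=(7, 200)) + 0.1 * rng.normal(size=(49, 200))
y = Z[:, 0] + 0.1 * rng.normal(size=49)         # toy "antagonist activity"

model = make_pipeline(
    StandardScaler(),
    PCA(n_components=7),                        # 7 PCs as the ANN inputs
    MLPRegressor(hidden_layer_sizes=(7,),       # 7-7-1 architecture
                 max_iter=5000, random_state=0),
)
print("CV R^2:", cross_val_score(model, X, y, cv=5, scoring="r2").mean())
```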

  10. AC Resonant charger with charge rate unrelated to primary power frequency

    DOEpatents

    Watson, Harold

    1982-01-01

    An AC resonant charger for a capacitive load, such as a PFN, is provided with a variable repetition rate unrelated to the frequency of a multi-phase AC power source by using a control unit to select and couple the phase of the power source to the resonant charger in order to charge the capacitive load with a phase that is the next to begin a half cycle. For optimum range in repetition rate and increased charging voltage, the resonant charger includes a step-up transformer and full-wave rectifier. The next phase selected may then be of either polarity, but is always selected to be of a polarity opposite the polarity of the last phase selected so that the transformer core does not saturate. Thyristors are used to select and couple the correct phase just after its zero crossover in response to a sharp pulse generated by a zero-crossover detector. The thyristor that is turned on then automatically turns off after a full half cycle of its associated phase input. A full-wave rectifier couples the secondary winding of the transformer to the load so that the load capacitance is always charged with the same polarity.

  11. Ac resonant charger with charge rate unrelated to primary power frequency

    DOEpatents

    Not Available

    1979-12-07

    An ac resonant charger for a capacitive load, such as a pulse forming network (PFN), is provided with a variable repetition rate unrelated to the frequency of a multi-phase ac power source by using a control unit to select and couple the phase of the power source to the resonant charger in order to charge the capacitive load with a phase that is the next to begin a half cycle. For optimum range in repetition rate and increased charging voltage, the resonant charger includes a step-up transformer and full-wave rectifier. The next phase selected may then be of either polarity, but is always selected to be of a polarity opposite the polarity of the last phase selected so that the transformer core does not saturate. Thyristors are used to select and couple the correct phase just after its zero crossover in response to a sharp pulse generated by a zero-crossover detector. The thyristor that is turned on then automatically turns off after a full half cycle of its associated phase input. A full-wave rectifier couples the secondary winding of the transformer to the load so that the load capacitance is always charged with the same polarity.

  12. Prediction of heat capacity of amine solutions using artificial neural network and thermodynamic models for CO2 capture processes

    NASA Astrophysics Data System (ADS)

    Afkhamipour, Morteza; Mofarahi, Masoud; Borhani, Tohid Nejad Ghaffar; Zanganeh, Masoud

    2018-03-01

    In this study, artificial neural network (ANN) and thermodynamic models were developed for prediction of the heat capacity (C_P) of amine-based solvents. For the ANN model, independent variables such as concentration, temperature, molecular weight and CO2 loading of the amine were selected as the inputs of the model. The significance of the input variables of the ANN model on the C_P values was investigated statistically by analyzing the correlation matrix. A thermodynamic model based on the Redlich-Kister equation was used to correlate the excess molar heat capacity (C_P^E) data as a function of temperature. In addition, the effects of temperature and CO2 loading at different concentrations of conventional amines on the C_P values were investigated. Both models were validated against experimental data, and good agreement was obtained between the two models and the experimental C_P data collected from the literature. The AARD between the ANN model results and the experimental C_P data for the 47 amine-based solvent systems studied was 4.3%. For conventional amines, the AARDs for the ANN model and the thermodynamic model in comparison with experimental data were 0.59% and 0.57%, respectively. The results showed that both the ANN and Redlich-Kister models can be used as practical tools for the simulation and design of CO2 removal processes using amine solutions.
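
    Correlating C_P^E with a Redlich-Kister expansion is a linear least-squares problem once the composition basis is formed: C_P^E = x1*x2*sum_k A_k*(x1 - x2)^k. The sketch below fits synthetic data at a single temperature; the study additionally makes the A_k coefficients temperature dependent.

```python
import numpy as np

x1 = np.linspace(0.05, 0.95, 19)        # mole fraction of amine
x2 = 1.0 - x1
# Fake "measured" excess heat capacity generated from known coefficients
cpe = x1 * x2 * (12.0 + 3.0 * (x1 - x2) - 1.5 * (x1 - x2) ** 2)

order = 3                               # number of Redlich-Kister terms
M = np.column_stack([x1 * x2 * (x1 - x2) ** k for k in range(order)])
coeffs, *_ = np.linalg.lstsq(M, cpe, rcond=None)
print("A_k:", coeffs)                   # should recover ~[12.0, 3.0, -1.5]
```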

  13. [Research on spectra recognition method for cabbages and weeds based on PCA and SIMCA].

    PubMed

    Zu, Qin; Deng, Wei; Wang, Xiu; Zhao, Chun-Jiang

    2013-10-01

    In order to improve the accuracy and efficiency of weed identification, differences in spectral reflectance were employed to distinguish between crops and weeds. First, different combinations of Savitzky-Golay (SG) convolutional derivation and the multiplicative scattering correction (MSC) method were applied to preprocess the raw spectral data. Then clustering analysis of the various types of plants was completed using the principal component analysis (PCA) method, and the feature wavelengths that were sensitive for classifying the various plant types were extracted according to the corresponding loading plots of the optimal principal components in the PCA results. Finally, setting the feature wavelengths as the input variables, the soft independent modeling of class analogy (SIMCA) classification method was used to identify the various types of plants. The experimental results for classifying cabbages and weeds showed that, on the basis of the optimal pretreatment (a combined application of MSC and SG convolutional derivation with the SG parameters set to a first-order derivative, a third-degree polynomial and 51 smoothing points), 23 feature wavelengths were extracted in accordance with the top three principal components of the PCA results. When the SIMCA method was used for classification with the previously selected 23 feature wavelengths as input variables, the classification rates for the modeling set and the prediction set were up to 98.6% and 100%, respectively.
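
    The described Savitzky-Golay step maps directly onto scipy.signal.savgol_filter with a first derivative, a third-degree polynomial, and a 51-point window; the spectrum below is a synthetic stand-in for a real reflectance spectrum.

```python
import numpy as np
from scipy.signal import savgol_filter

rng = np.random.default_rng(2)
# Synthetic "reflectance spectrum": smooth curve plus noise
spectrum = np.sin(np.linspace(0, 6, 600)) + 0.01 * rng.normal(size=600)

# SG first derivative, 3rd-degree polynomial, 51-point smoothing window
deriv1 = savgol_filter(spectrum, window_length=51, polyorder=3, deriv=1)
print(deriv1.shape)
```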

  14. Dynamic optimization of open-loop input signals for ramp-up current profiles in tokamak plasmas

    NASA Astrophysics Data System (ADS)

    Ren, Zhigang; Xu, Chao; Lin, Qun; Loxton, Ryan; Teo, Kok Lay

    2016-03-01

    Establishing a good current spatial profile in tokamak fusion reactors is crucial to effective steady-state operation. The evolution of the current spatial profile is related to the evolution of the poloidal magnetic flux, which can be modeled in the normalized cylindrical coordinates using a parabolic partial differential equation (PDE) called the magnetic diffusion equation. In this paper, we consider the dynamic optimization problem of attaining the best possible current spatial profile during the ramp-up phase of the tokamak. We first use the Galerkin method to obtain a finite-dimensional ordinary differential equation (ODE) model based on the original magnetic diffusion PDE. Then, we combine the control parameterization method with a novel time-scaling transformation to obtain an approximate optimal parameter selection problem, which can be solved using gradient-based optimization techniques such as sequential quadratic programming (SQP). This control parameterization approach involves approximating the tokamak input signals by piecewise-linear functions whose slopes and break-points are decision variables to be optimized. We show that the gradient of the objective function with respect to the decision variables can be computed by solving an auxiliary dynamic system governing the state sensitivity matrix. Finally, we conclude the paper with simulation results for an example problem based on experimental data from the DIII-D tokamak in San Diego, California.

  15. Abnormal externally guided movement preparation in recent-onset schizophrenia is associated with impaired selective attention to external input.

    PubMed

    Smid, Henderikus G O M; Westenbroek, Joanna M; Bruggeman, Richard; Knegtering, Henderikus; Van den Bosch, Robert J

    2009-11-30

    Several theories propose that the primary cognitive impairment in schizophrenia concerns a deficit in the processing of external input information. There is also evidence, however, for impaired motor preparation in schizophrenia. This raises the question of whether the impaired motor preparation in schizophrenia is a secondary consequence of disturbed (selective) processing of the input needed for that preparation, or an independent primary deficit. The aim of the present study was to discriminate between these hypotheses by investigating externally guided movement preparation in relation to selective stimulus processing. The sample comprised 16 recent-onset schizophrenia patients and 16 controls who performed a movement-precuing task. In this task, a precue delivered information about one, two or no parameters of a movement summoned by a subsequent stimulus. Performance measures and measures derived from the electroencephalogram showed that patients derived smaller benefits from the precues and showed less cue-based preparatory activity in advance of the imperative stimulus than the controls, suggesting a response preparation deficit. However, patients also showed less activity reflecting selective attention to the precue. We therefore conclude that the existing evidence for an impairment of externally guided motor preparation in schizophrenia is most likely due to a deficit in selective attention to the external input, which lends support to theories proposing that the primary cognitive deficit in schizophrenia concerns the processing of input information.

  16. A Group Decision Framework with Intuitionistic Preference Relations and Its Application to Low Carbon Supplier Selection.

    PubMed

    Tong, Xiayu; Wang, Zhou-Jing

    2016-09-19

    This article develops a group decision framework with intuitionistic preference relations. An approach is first devised to rectify an inconsistent intuitionistic preference relation to derive an additive consistent one. A new aggregation operator, the so-called induced intuitionistic ordered weighted averaging (IIOWA) operator, is proposed to aggregate individual intuitionistic fuzzy judgments. By using the mean absolute deviation between the original and rectified intuitionistic preference relations as an order inducing variable, the rectified consistent intuitionistic preference relations are aggregated into a collective preference relation. This treatment is presumably able to assign different weights to different decision-makers' judgments based on the quality of their inputs (in terms of consistency of their original judgments). A solution procedure is then developed for tackling group decision problems with intuitionistic preference relations. A low carbon supplier selection case study is developed to illustrate how to apply the proposed decision model in practice.

  17. A Group Decision Framework with Intuitionistic Preference Relations and Its Application to Low Carbon Supplier Selection

    PubMed Central

    Tong, Xiayu; Wang, Zhou-Jing

    2016-01-01

    This article develops a group decision framework with intuitionistic preference relations. An approach is first devised to rectify an inconsistent intuitionistic preference relation to derive an additive consistent one. A new aggregation operator, the so-called induced intuitionistic ordered weighted averaging (IIOWA) operator, is proposed to aggregate individual intuitionistic fuzzy judgments. By using the mean absolute deviation between the original and rectified intuitionistic preference relations as an order inducing variable, the rectified consistent intuitionistic preference relations are aggregated into a collective preference relation. This treatment is presumably able to assign different weights to different decision-makers’ judgments based on the quality of their inputs (in terms of consistency of their original judgments). A solution procedure is then developed for tackling group decision problems with intuitionistic preference relations. A low carbon supplier selection case study is developed to illustrate how to apply the proposed decision model in practice. PMID:27657097
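
    The aggregation step both records describe can be sketched compactly: an induced OWA operator reorders the arguments by an inducing variable (here, a consistency measure, so that more consistent judgments receive larger weights) before applying the OWA weights. All numbers below are invented for illustration.

```python
import numpy as np

def induced_owa(values, inducing, weights):
    """Aggregate `values` with OWA `weights`, ordering by `inducing` (descending)."""
    order = np.argsort(inducing)[::-1]          # highest inducing value first
    return float(np.dot(weights, np.asarray(values)[order]))

judgments = np.array([0.70, 0.55, 0.80])        # three decision-makers' preference degrees
consistency = np.array([0.9, 0.6, 0.8])         # e.g., 1 - mean absolute deviation
weights = np.array([0.5, 0.3, 0.2])             # larger weight to more consistent inputs

print(induced_owa(judgments, consistency, weights))   # -> 0.70
```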

  18. Method and system to estimate variables in an integrated gasification combined cycle (IGCC) plant

    DOEpatents

    Kumar, Aditya; Shi, Ruijie; Dokucu, Mustafa

    2013-09-17

    System and method to estimate variables in an integrated gasification combined cycle (IGCC) plant are provided. The system includes a sensor suite to measure respective plant input and output variables. An extended Kalman filter (EKF) receives sensed plant input variables and includes a dynamic model to generate a plurality of plant state estimates and a covariance matrix for the state estimates. A preemptive-constraining processor is configured to preemptively constrain the state estimates and covariance matrix to be free of constraint violations. A measurement-correction processor may be configured to correct constrained state estimates and a constrained covariance matrix based on processing of sensed plant output variables. The measurement-correction processor is coupled to update the dynamic model with corrected state estimates and a corrected covariance matrix. The updated dynamic model may be configured to estimate values for at least one plant variable not originally sensed by the sensor suite.
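
    A generic EKF predict/update cycle of the kind the patent builds on is sketched below; the process and measurement models, noise covariances, and the one-state toy example stand in for the actual gasifier model, and the patent's preemptive constraining of the state estimates and covariance (e.g., clipping to physical bounds between cycles) is omitted.

```python
import numpy as np

def ekf_step(x, P, u, z, f, F, h, H, Q, R):
    """One EKF cycle. x: state estimate, P: covariance, u: plant input,
    z: measurement; f/h are process/measurement models, F/H their Jacobians,
    Q/R the process/measurement noise covariances."""
    x_pred = f(x, u)                            # predict state
    P_pred = F @ P @ F.T + Q                    # predict covariance
    y = z - h(x_pred)                           # innovation
    S = H @ P_pred @ H.T + R                    # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)         # Kalman gain
    x_new = x_pred + K @ y                      # corrected state estimate
    P_new = (np.eye(len(x)) - K @ H) @ P_pred   # corrected covariance
    return x_new, P_new

# Toy usage with a linear one-state "plant"
f = lambda x, u: 0.9 * x + 0.1 * u
h = lambda x: x
F = np.array([[0.9]]); H = np.array([[1.0]])
Q = np.array([[0.01]]); R = np.array([[0.1]])
x, P = np.array([0.0]), np.array([[1.0]])
x, P = ekf_step(x, P, np.array([1.0]), np.array([0.2]), f, F, h, H, Q, R)
print(x, P)
```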

  19. Affine Projection Algorithm with Improved Data-Selective Method Using the Condition Number

    NASA Astrophysics Data System (ADS)

    Ban, Sung Jun; Lee, Chang Woo; Kim, Sang Woo

    Recently, a data-selective method has been proposed to achieve low misalignment in the affine projection algorithm (APA) by keeping the condition number of the input data matrix small. We present an improved method and a complexity-reduction algorithm for the APA with the data-selective method. Experimental results show that the proposed algorithm has lower misalignment and a lower condition number for the input data matrix than both the conventional APA and the APA with the previous data-selective method.
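
    For context, one affine projection update can be written in a few lines. The condition-number test in the loop below gestures at the data-selective idea (accept an update only when the input data matrix is well conditioned), but the threshold and setup are illustrative, not the paper's algorithm.

```python
import numpy as np

def apa_update(w, X, d, mu=0.5, delta=1e-4):
    """One affine projection update. w: weights (N,); X: the last K input
    vectors as rows (K, N); d: desired outputs (K,); delta regularizes."""
    e = d - X @ w                                           # a-priori errors
    G = X.T @ np.linalg.solve(X @ X.T + delta * np.eye(len(d)), e)
    return w + mu * G

rng = np.random.default_rng(3)
N, K = 8, 3
w_true = rng.normal(size=N)
w = np.zeros(N)
for _ in range(200):
    X = rng.normal(size=(K, N))
    d = X @ w_true                                          # noiseless system
    if np.linalg.cond(X @ X.T) < 1e3:                       # data-selective idea
        w = apa_update(w, X, d)
print("misalignment:", np.linalg.norm(w - w_true))
```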

  20. Metamodeling and mapping of nitrate flux in the unsaturated zone and groundwater, Wisconsin, USA

    USGS Publications Warehouse

    Nolan, Bernard T.; Green, Christopher T.; Juckem, Paul F.; Liao, Lixia; Reddy, James E.

    2018-01-01

    Nitrate contamination of groundwater in agricultural areas poses a major challenge to the sustainability of water resources. Aquifer vulnerability models are useful tools that can help resource managers identify areas of concern, but quantifying nitrogen (N) inputs in such models is challenging, especially at large spatial scales. We sought to improve regional nitrate (NO3−) input functions by characterizing unsaturated zone NO3− transport to groundwater through use of surrogate, machine-learning metamodels of a process-based N flux model. The metamodels used boosted regression trees (BRTs) to relate mappable landscape variables to parameters and outputs of a previous “vertical flux method” (VFM) applied at sampled wells in the Fox, Wolf, and Peshtigo (FWP) river basins in northeastern Wisconsin. In this context, the metamodels upscaled the VFM results throughout the region, and the VFM parameters and outputs are the metamodel response variables. The study area encompassed the domain of a detailed numerical model that provided additional predictor variables, including groundwater recharge, to the metamodels. We used a statistical learning framework to test a range of model complexities to identify suitable hyperparameters of the six BRT metamodels corresponding to each response variable of interest: NO3− source concentration factor (which determines the local NO3− input concentration); unsaturated zone travel time; NO3− concentration at the water table in 1980, 2000, and 2020 (three separate metamodels); and NO3− “extinction depth”, the eventual steady state depth of the NO3−front. The final metamodels were trained to 129 wells within the active numerical flow model area, and considered 58 mappable predictor variables compiled in a geographic information system (GIS). These metamodels had training and cross-validation testing R2 values of 0.52 – 0.86 and 0.22 – 0.38, respectively, and predictions were compiled as maps of the above response variables. Testing performance was reasonable, considering that we limited the metamodel predictor variables to mappable factors as opposed to using all available VFM input variables. Relationships between metamodel predictor variables and mapped outputs were generally consistent with expectations, e.g. with greater source concentrations and NO3− at the groundwater table in areas of intensive crop use and well drained soils. Shorter unsaturated zone travel times in poorly drained areas likely indicated preferential flow through clay soils, and a tendency for fine grained deposits to collocate with areas of shallower water table. Numerical estimates of groundwater recharge were important in the metamodels and may have been a proxy for N input and redox conditions in the northern FWP, which had shallow predicted NO3− extinction depth. The metamodel results provide proof-of-concept for regional characterization of unsaturated zone NO3− transport processes in a statistical framework based on readily mappable GIS input variables.
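
    The metamodeling recipe, a boosted-tree surrogate trained on mappable predictors and judged by cross-validated skill, can be sketched as follows. The data shapes echo the study (129 wells, 58 predictors) but the values and the response are synthetic stand-ins, and the hyperparameters are illustrative rather than the study's tuned values.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)
X = rng.normal(size=(129, 58))                     # wells x mappable GIS predictors
# Toy response, e.g. unsaturated zone travel time driven by two predictors
y = X[:, 0] - 0.5 * X[:, 1] + rng.normal(scale=0.5, size=129)

brt = GradientBoostingRegressor(n_estimators=300, learning_rate=0.05,
                                max_depth=3, subsample=0.7, random_state=0)
print("CV R^2:", cross_val_score(brt, X, y, cv=5, scoring="r2").mean())
```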

  1. Biogeochemical typology and temporal variability of lagoon waters in a coral reef ecosystem subject to terrigeneous and anthropogenic inputs (New Caledonia).

    PubMed

    Fichez, R; Chifflet, S; Douillet, P; Gérard, P; Gutierrez, F; Jouon, A; Ouillon, S; Grenz, C

    2010-01-01

    Considering the growing concern about the impact of anthropogenic inputs on coral reefs and coral reef lagoons, surprisingly little attention has been given to the relationship between those inputs and the trophic status of lagoon waters. The present paper describes the distribution of biogeochemical parameters in the coral reef lagoon of New Caledonia, where environmental conditions allegedly range from pristine oligotrophic to anthropogenically influenced. The study objectives were to: (i) identify terrigeneous and anthropogenic inputs and propose a typology of lagoon waters, (ii) determine temporal variability of water biogeochemical parameters at time-scales ranging from hours to seasons. Combined PCA-cluster analyses revealed that over the 2000 km(2) lagoon area around the city of Nouméa, "natural" terrigeneous versus oceanic influences affecting all stations only accounted for less than 20% of the spatial variability, whereas 60% of that spatial variability could be attributed to significant eutrophication of a limited number of inshore stations. PCA allowed unambiguous discrimination between the natural trophic enrichment along the offshore-inshore gradient and anthropogenically induced eutrophication. High temporal variability in dissolved inorganic nutrient concentrations strongly hindered their use as indicators of environmental status. Due to longer turnover times, particulate organic material, and more specifically chlorophyll a, appeared to be a more reliable nonconservative tracer of trophic status. Results further provided evidence that ENSO occurrences might temporarily lower the trophic status of the New Caledonia lagoon. It is concluded that, due to such high-frequency temporal variability, the use of biogeochemical parameters in environmental surveys requires adapted sampling strategies, data management and environmental alert methods. Copyright (c) 2010 Elsevier Ltd. All rights reserved.

  2. [Discrimination of varieties of brake fluid using visual-near infrared spectra].

    PubMed

    Jiang, Lu-lu; Tan, Li-hong; Qiu, Zheng-jun; Lu, Jiang-feng; He, Yong

    2008-06-01

    A new method was developed to rapidly discriminate brands of brake fluid by means of visible-near infrared spectroscopy. Five different brands of brake fluid were analyzed using a handheld near infrared spectrograph manufactured by ASD, and 60 samples were obtained from each brand. The sample data were pretreated using average smoothing and the standard normal variate method, and then analyzed using principal component analysis (PCA). A 2-dimensional plot was drawn based on the first and second principal components, and the plot indicated that the clustering characteristics of the different brake fluids are distinct. The first 6 principal components were taken as input variables, and the brand of brake fluid as the output variable, to build the discrimination model by the stepwise discriminant analysis method. Two hundred and twenty-five randomly selected samples were used to create the model, and the remaining 75 samples to verify it. The result showed that the discrimination rate was 94.67%, indicating that the method proposed in this paper has good performance in classification and discrimination. It provides a new way to rapidly discriminate different brands of brake fluid.

  3. Variable frequency microwave furnace system

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bible, D.W.; Lauf, R.J.

    1994-06-14

    A variable frequency microwave furnace system designed to allow modulation of the frequency of the microwaves introduced into a furnace cavity for testing or other selected applications. The variable frequency microwave furnace system includes a microwave signal generator or microwave voltage-controlled oscillator for generating a low-power microwave signal for input to the microwave furnace. A first amplifier may be provided to amplify the magnitude of the signal output from the microwave signal generator or the microwave voltage-controlled oscillator. A second amplifier is provided for processing the signal output by the first amplifier. The second amplifier outputs the microwave signal input to the furnace cavity. In the preferred embodiment, the second amplifier is a traveling-wave tube (TWT). A power supply is provided for operation of the second amplifier. A directional coupler is provided for detecting the direction of a signal and further directing the signal depending on the detected direction. A first power meter is provided for measuring the power delivered to the microwave furnace. A second power meter detects the magnitude of reflected power. Reflected power is dissipated in the reflected power load. 5 figs.

  4. Prediction of Hematopoietic Stem Cell Transplantation Related Mortality- Lessons Learned from the In-Silico Approach: A European Society for Blood and Marrow Transplantation Acute Leukemia Working Party Data Mining Study.

    PubMed

    Shouval, Roni; Labopin, Myriam; Unger, Ron; Giebel, Sebastian; Ciceri, Fabio; Schmid, Christoph; Esteve, Jordi; Baron, Frederic; Gorin, Norbert Claude; Savani, Bipin; Shimoni, Avichai; Mohty, Mohamad; Nagler, Arnon

    2016-01-01

    Models for prediction of allogeneic hematopoietic stem cell transplantation (HSCT) related mortality only partially account for transplant risk. Improving predictive accuracy requires understanding of prediction-limiting factors, such as the statistical methodology used, the number and quality of features collected, or simply the population size. Using an in-silico approach (i.e., iterative computerized simulations) based on machine learning (ML) algorithms, we set out to analyze these factors. A cohort of 25,923 adult acute leukemia patients from the European Society for Blood and Marrow Transplantation (EBMT) registry was analyzed. The predictive objective was non-relapse mortality (NRM) 100 days following HSCT. Thousands of prediction models were developed under varying conditions: increasing sample size, specific subpopulations and an increasing number of variables, which were selected and ranked by separate feature selection algorithms. Depending on the algorithm, predictive performance plateaued at a population size of 6,611-8,814 patients, reaching a maximal area under the receiver operator characteristic curve (AUC) of 0.67. AUCs of models developed on specific subpopulations ranged from 0.59 to 0.67, for patients in second complete remission and receiving reduced intensity conditioning, respectively. Only 3-5 variables were necessary to achieve near-maximal AUCs. The top 3 ranking variables, shared by all algorithms, were disease stage, donor type, and conditioning regimen. Our findings empirically demonstrate that with regard to NRM prediction, few variables "carry the weight" and traditional HSCT data has been "worn out". "Breaking through" the predictive boundaries will likely require additional types of inputs.

  5. A High-Performance Reconfigurable Fabric for Cognitive Information Processing

    DTIC Science & Technology

    2010-12-01

    receives a data token from its control input (shown as a horizontal arrow above). The value of this data token is used to select an input port. The ... dual of a merge. It receives a data token from its control input ... Computer-Aided Design of Integrated Circuits and Systems, Vol. 26, No. 2, February 2007. [12] Cadence Design Systems. Clock Domain Crossing: Closing the

  6. Kanerva's sparse distributed memory with multiple hamming thresholds

    NASA Technical Reports Server (NTRS)

    Pohja, Seppo; Kaski, Kimmo

    1992-01-01

    If the stored input patterns of Kanerva's Sparse Distributed Memory (SDM) are highly correlated, utilization of the storage capacity is very low compared to the case of uniformly distributed random input patterns. We consider a variation of SDM that has better storage capacity utilization for correlated input patterns. This approach uses a separate selection threshold for each physical storage address, or hard location. The selection of the hard locations for reading or writing can be done in parallel, a property from which SDM implementations can benefit.
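
    A toy version of the proposed variation, an SDM whose hard locations each carry their own Hamming threshold, is sketched below; all sizes and the threshold range are illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)
n_bits, n_locs = 256, 2000
addresses = rng.integers(0, 2, (n_locs, n_bits))     # random hard locations
counters = np.zeros((n_locs, n_bits), dtype=int)     # bit counters per location
thresholds = rng.integers(110, 121, n_locs)          # per-location Hamming radius

def selected(x):
    dist = (addresses != x).sum(axis=1)              # Hamming distance to each location
    return dist <= thresholds                        # separate threshold per location

def write(x, data):
    counters[selected(x)] += np.where(data == 1, 1, -1)

def read(x):
    sums = counters[selected(x)].sum(axis=0)
    return (sums > 0).astype(int)                    # majority vote per bit

pattern = rng.integers(0, 2, n_bits)
write(pattern, pattern)                              # autoassociative store
noisy = pattern.copy()
noisy[rng.choice(n_bits, 10, replace=False)] ^= 1    # flip 10 bits
print("fraction of bits recovered:", (read(noisy) == pattern).mean())
```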

  7. Binary full adder, made of fusion gates, in a subexcitable Belousov-Zhabotinsky system

    NASA Astrophysics Data System (ADS)

    Adamatzky, Andrew

    2015-09-01

    In an excitable thin-layer Belousov-Zhabotinsky (BZ) medium a localized perturbation leads to the formation of omnidirectional target or spiral waves of excitation. A subexcitable BZ medium responds to asymmetric local perturbation by producing traveling localized excitation wave-fragments, distant relatives of dissipative solitons. The size and life span of an excitation wave-fragment depend on the illumination level of the medium. Under the right conditions the wave-fragments conserve their shape and velocity vectors for extended time periods. I interpret the wave-fragments as values of Boolean variables. When two or more wave-fragments collide they annihilate or merge into a new wave-fragment. States of the logic variables, represented by the wave-fragments, are changed as a result of collisions between the wave-fragments. Thus, a logical gate is implemented. Several theoretical designs and experimental laboratory implementations of Boolean logic gates have been proposed in the past, but little has been done to cascade the gates into binary arithmetic circuits. I propose a unique design of a binary one-bit full adder based on a fusion gate. A fusion gate is a two-input three-output logical device which calculates the conjunction of the input variables and the conjunction of one input variable with the negation of another input variable. The gate is made of three channels: two channels cross each other at an angle, and a third channel starts at the junction. The channels contain a BZ medium. When two excitation wave-fragments, traveling towards each other along input channels, collide at the junction, they merge into a single wave-front traveling along the third channel. If there is just one wave-front in the input channel, the front continues its propagation undisturbed. I make a one-bit full adder by cascading two fusion gates. I show how to cascade the adder blocks into a many-bit full adder. I evaluate the feasibility of my designs by simulating the evolution of excitation in the gates and adders using numerical integration of the Oregonator equations.
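
    The logical behaviour of the design is easy to check in the Boolean abstraction: a fusion gate maps (x, y) to (x AND y, x AND NOT y, NOT x AND y), and the merging of channels is modeled here as OR (my reading of the collision-based merging). Cascading two gates then yields the one-bit full adder, as the truth table below confirms.

```python
def fusion(x, y):
    """Two-input, three-output fusion gate:
    (x AND y, x AND NOT y, NOT x AND y)."""
    return x and y, x and not y, (not x) and y

def full_adder(a, b, cin):
    ab, a_nb, na_b = fusion(a, b)
    x = a_nb or na_b                 # a XOR b, via channel merging (OR)
    xc, x_nc, nx_c = fusion(x, cin)
    s = x_nc or nx_c                 # sum = (a XOR b) XOR cin
    cout = ab or xc                  # carry = (a AND b) OR ((a XOR b) AND cin)
    return int(bool(s)), int(bool(cout))

for a in (0, 1):
    for b in (0, 1):
        for cin in (0, 1):
            print(a, b, cin, "->", full_adder(a, b, cin))
```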

  8. UWB delay and multiply receiver

    DOEpatents

    Dallum, Gregory E.; Pratt, Garth C.; Haugen, Peter C.; Romero, Carlos E.

    2013-09-10

    An ultra-wideband (UWB) delay and multiply receiver is formed of a receive antenna; a variable gain attenuator connected to the receive antenna; a signal splitter connected to the variable gain attenuator; a multiplier having one input connected to an undelayed signal from the signal splitter and another input connected to a delayed signal from the signal splitter, the delay between the splitter signals being equal to the spacing between pulses from a transmitter whose pulses are being received by the receive antenna; a peak detection circuit connected to the output of the multiplier and connected to the variable gain attenuator to control the variable gain attenuator to maintain a constant amplitude output from the multiplier; and a digital output circuit connected to the output of the multiplier.

  9. Alpha1 LASSO data bundles Lamont, OK

    DOE Data Explorer

    Gustafson, William Jr; Vogelmann, Andrew; Endo, Satoshi; Toto, Tami; Xiao, Heng; Li, Zhijin; Cheng, Xiaoping; Krishna, Bhargavi (ORCID:000000018828528X)

    2016-08-03

    A data bundle is a unified package consisting of LASSO LES input and output, observations, evaluation diagnostics, and model skill scores. LES input includes model configuration information and forcing data. LES output includes profile statistics and full domain fields of cloud and environmental variables. Model evaluation data consists of LES output and ARM observations co-registered on the same grid and sampling frequency. Model performance is quantified by skill scores and diagnostics in terms of cloud and environmental variables.

  10. Effects of uncertainties in hydrological modelling. A case study of a mountainous catchment in Southern Norway

    NASA Astrophysics Data System (ADS)

    Engeland, Kolbjørn; Steinsland, Ingelin; Johansen, Stian Solvang; Petersen-Øverleir, Asgeir; Kolberg, Sjur

    2016-05-01

    In this study, we explore the effect of uncertainty and poor observation quality on hydrological model calibration and predictions. The Osali catchment in Western Norway was selected as the case study, and an elevation-distributed HBV model was used. We systematically evaluated the effect of accounting for uncertainty in parameters, precipitation input, temperature input and streamflow observations. For precipitation and temperature we accounted for the interpolation uncertainty, and for streamflow we accounted for rating curve uncertainty. Further, the effects of poorer quality of the precipitation input and streamflow observations were explored. Less information about precipitation was obtained by excluding the nearest precipitation station from the analysis, while reduced information about the streamflow was obtained by omitting the highest and lowest streamflow observations when estimating the rating curve. The results showed that including uncertainty in the precipitation and temperature inputs has a negligible effect on the posterior distribution of parameters and on the Nash-Sutcliffe (NS) efficiency for the predicted flows, while the reliability and the continuous rank probability score (CRPS) improve. Less information in the precipitation input resulted in a shift in the water balance parameter Pcorr and a model producing smoother streamflow predictions, with poorer NS and CRPS but higher reliability. The effect of calibrating the hydrological model using streamflow observations based on different rating curves is mainly seen as variability in the water balance parameter Pcorr. When evaluating predictions, the best evaluation scores were not achieved for the rating curve used for calibration, but for rating curves giving smoother streamflow observations. Less information in streamflow influenced the water balance parameter Pcorr and increased the spread in evaluation scores, giving both better and worse scores.

  11. A Multivariate Analysis of the Early Dropout Process

    ERIC Educational Resources Information Center

    Fiester, Alan R.; Rudestam, Kjell E.

    1975-01-01

    Principal-component factor analyses were performed on patient input (demographic and pretherapy expectations), therapist input (demographic), and patient perspective therapy process variables that significantly differentiated early dropout from nondropout outpatients at two community mental health centers. (Author)

  12. Finding diversity for building one-day ahead Hydrological Ensemble Prediction System based on artificial neural network stacks

    NASA Astrophysics Data System (ADS)

    Brochero, Darwin; Anctil, Francois; Gagné, Christian; López, Karol

    2013-04-01

    In this study, we addressed the application of Artificial Neural Networks (ANN) in the context of Hydrological Ensemble Prediction Systems (HEPS). Such systems have become popular in recent years as a tool to include forecast uncertainty in the decision-making process. HEPS is fundamentally based on the uncertainty cascade model [4] for uncertainty representation. Analogously, the machine learning community has proposed multiple classifier systems that take into account the variability in datasets, input space, model structures, and parametric configuration [3], an approach motivated by the well-known "no free lunch" theorem [1]. Consequently, we propose a framework based on two separate but complementary topics: data stratification and input variable selection (IVS). We promote an ANN prediction stack in which each predictor is trained on an input space defined by applying IVS to a different stratified sub-sample. This, added to the inherent variability of classical ANN optimization, leads to our ultimate goal: diversity in the prediction, defined as the complementarity of the individual predictors. The stratification application on the 12 basins used in this study, which originate from the second and third workshops of the MOPEX project [2], shows that the informativeness of the data is far more important than the quantity used for ANN training. Additionally, the input space variability leads to ANN stacks that outperform an ANN stack trained with 100% of the available information but with a random selection of the dataset used in the early stopping method (scenario R100P). From a deterministic viewpoint, the main advantage lies in the efficient selection of the training information, a concept equally important for the calibration of conceptual hydrological models. The diversity achieved is reflected in a substantial improvement in the scores that define the probabilistic quality of the HEPS. Except for one basin that shows atypical behaviour, and two other basins that illustrate the difficulty of prediction in semiarid areas, the average gain obtained with the new scheme relative to the R100P scenario is around 8%, 134%, 72%, and 69% for the mean CRPS, the mean ignorance score, the MSE evaluated on the reliability diagram, and the delta ratio, respectively. Note that in all cases the CRPS is less than the MAE, which indicates that the ensemble of neural networks performs better when taken as a whole than when aggregated into a single averaged predictor. Finally, we consider it appropriate to complement the proposed methodology on two fronts: a deterministic one, in which the prediction could come from a Bayesian combination, and a probabilistic one, in which score optimization could be based on an "overproduce and select" process. Also, in the case of the basins in semiarid areas, the results found by de Vos [5] with echo state networks using the same database analysed in this study lead us to consider the need to include various structures in the ANN stack.
    References
    [1] Corne, D. W. and Knowles, J. D.: No free lunch and free leftovers theorems for multiobjective optimisation problems, in: Proceedings of the 2nd International Conference on Evolutionary Multi-Criterion Optimization, Springer-Verlag, 327-341, 2003.
    [2] Duan, Q., Schaake, J., Andréassian, V., Franks, S., Goteti, G., Gupta, H., Gusev, Y., Habets, F., Hall, A., Hay, L., Hogue, T., Huang, M., Leavesley, G., Liang, X., Nasonova, O., Noilhan, J., Oudin, L., Sorooshian, S., Wagener, T., and Wood, E.: Model Parameter Estimation Experiment (MOPEX): An overview of science strategy and major results from the second and third workshops, J. Hydrol., 320, 3-17, 2006.
    [3] Kuncheva, L. I.: Combining Pattern Classifiers: Methods and Algorithms, Wiley-Interscience, 2004.
    [4] Pappenberger, F., Beven, K. J., Hunter, N. M., Bates, P. D., Gouweleeuw, B. T., Thielen, J., and de Roo, A. P. J.: Cascading model uncertainty from medium range weather forecasts (10 days) through a rainfall-runoff model to flood inundation predictions within the European Flood Forecasting System (EFFS), Hydrol. Earth Syst. Sci., 9, 381-393, 2005.
    [5] de Vos, N. J.: Reservoir computing as an alternative to traditional artificial neural networks in rainfall-runoff modelling, Hydrol. Earth Syst. Sci. Discuss., 9, 6101-6134, 2012.
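
    The observation above that the CRPS falls below the MAE can be illustrated with the standard finite-ensemble estimator CRPS = E|X - y| - 0.5 * E|X - X'|, whose spread term rewards treating the ensemble as a probabilistic whole. The synthetic forecasts below are placeholders, not MOPEX data.

    ```python
    # Sketch of the CRPS-vs-MAE comparison using the standard ensemble estimator.
    import numpy as np

    def ensemble_crps(members: np.ndarray, obs: float) -> float:
        """CRPS of a finite ensemble for a single scalar observation."""
        spread = np.abs(members[:, None] - members[None, :]).mean()
        return np.abs(members - obs).mean() - 0.5 * spread

    rng = np.random.default_rng(42)
    obs = 3.0
    members = obs + rng.normal(0.0, 1.0, size=50)    # 50-member ensemble

    mae_of_members = np.abs(members - obs).mean()    # members scored separately
    print(ensemble_crps(members, obs), mae_of_members)
    # CRPS <= the members' mean absolute error, by construction of the spread term.
    ```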

  13. Approach for Input Uncertainty Propagation and Robust Design in CFD Using Sensitivity Derivatives

    NASA Technical Reports Server (NTRS)

    Putko, Michele M.; Taylor, Arthur C., III; Newman, Perry A.; Green, Lawrence L.

    2002-01-01

    An implementation of the approximate statistical moment method for uncertainty propagation and robust optimization for a quasi 3-D Euler CFD code is presented. Given uncertainties in statistically independent, random, normally distributed input variables, first- and second-order statistical moment procedures are performed to approximate the uncertainty in the CFD output. Efficient calculation of both first- and second-order sensitivity derivatives is required. In order to assess the validity of the approximations, these moments are compared with statistical moments generated through Monte Carlo simulations. The uncertainties in the CFD input variables are also incorporated into a robust optimization procedure. For this optimization, statistical moments involving first-order sensitivity derivatives appear in the objective function and system constraints. Second-order sensitivity derivatives are used in a gradient-based search to successfully execute a robust optimization. The approximate methods used throughout the analyses are found to be valid when considering robustness about input parameter mean values.
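
    For reference, the generic first- and second-order moment approximations underlying this kind of method, for independent normal inputs with means x-bar and standard deviations sigma, take the following standard form (notation is ours, not the paper's):

    ```latex
    % Standard approximate statistical moment formulas (generic statement).
    \begin{align}
      \mu_f &\approx f(\bar{x})
              + \tfrac{1}{2}\sum_{i=1}^{n}
                \frac{\partial^2 f}{\partial x_i^2}\Big|_{\bar{x}} \sigma_{x_i}^2, \\
      \sigma_f^2 &\approx \sum_{i=1}^{n}
                \left(\frac{\partial f}{\partial x_i}\Big|_{\bar{x}}\right)^{\!2}
                \sigma_{x_i}^2 .
    \end{align}
    ```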

  14. Variable ratio regenerative braking device

    DOEpatents

    Hoppie, Lyle O.

    1981-12-15

    Disclosed is a regenerative braking device (10) for an automotive vehicle. The device includes an energy storage assembly (12) having a plurality of rubber rollers (26, 28) mounted for rotation between an input shaft (36) and an output shaft (42), clutches (38, 46) and brakes (40, 48) associated with each shaft, and a continuously variable transmission (22) connectable to a vehicle drivetrain and to the input and output shafts by the respective clutches. The rubber rollers are torsionally stressed to accumulate energy from the vehicle when the input shaft is clutched to the transmission while the brake on the output shaft is applied, and are torsionally relaxed to deliver energy to the vehicle when the output shaft is clutched to the transmission while the brake on the input shaft is applied. The transmission ratio is varied to control the rate of energy accumulation and delivery for a given rotational speed of the vehicle drivetrain.

  15. Noniterative computation of infimum in H(infinity) optimisation for plants with invariant zeros on the j(omega)-axis

    NASA Technical Reports Server (NTRS)

    Chen, B. M.; Saber, A.

    1993-01-01

    A simple and noniterative procedure for the computation of the exact value of the infimum in the singular H(infinity)-optimization problem is presented, as a continuation of our earlier work. Our problem formulation is general: we do not place any restrictions on the finite and infinite zero structures of the system, or on the direct feedthrough terms between the control input and the controlled output variables and between the disturbance input and the measurement output variables. Our method is applicable to a class of singular H(infinity)-optimization problems for which the transfer functions from the control input to the controlled output and from the disturbance input to the measurement output satisfy certain geometric conditions. In particular, the paper extends the result of earlier work by allowing these two transfer functions to have invariant zeros on the j(omega) axis.

  16. Dynamics of vehicles in variable velocity runs over non-homogeneous flexible track and foundation with two point input models

    NASA Astrophysics Data System (ADS)

    Yadav, D.; Upadhyay, H. C.

    1992-07-01

    Vehicles obtain track-induced input through the wheels, which commonly number more than one. Analysis available for the vehicle response in a variable velocity run on a non-homogeneously profiled flexible track supported by compliant inertial foundation is for a linear heave model having a single ground input. This analysis is being extended to two point input models with heave-pitch and heave-roll degrees of freedom. Closed form expressions have been developed for the system response statistics. Results are presented for a railway coach and track/foundation problem, and the performances of heave, heave-pitch and heave-roll models have been compared. The three models are found to agree in describing the track response. However, the vehicle sprung mass behaviour is predicted to be different by these models, indicating the strong effect of coupling on the vehicle vibration.

  17. Development and evaluation of height diameter at breast models for native Chinese Metasequoia.

    PubMed

    Liu, Mu; Feng, Zhongke; Zhang, Zhixiang; Ma, Chenghui; Wang, Mingming; Lian, Bo-Ling; Sun, Renjie; Zhang, Li

    2017-01-01

    Accurate tree height and diameter at breast height (dbh) are important input variables for growth and yield models. A total of 5503 Chinese Metasequoia trees were used in this study. We studied 53 fitted models, of which 7 were linear and 46 were non-linear. These models were divided into two groups, single-variable models and multivariate models, according to the number of independent variables. The results show that the allometric equation for tree height that uses diameter at breast height as the independent variable best reflects the variation in tree height; in addition, the prediction accuracy of the multivariate composite models is higher than that of the single-variable models. Although tree age is not the most important variable in the study of the relationship between tree height and dbh, taking tree age into account when choosing models and parameters can make the prediction of tree height more accurate. The amount of data is also an important factor that can improve the reliability of models. Other variables, such as tree height, main dbh and altitude, can also affect the models. In this study, the method of developing the recommended models for predicting the tree height of native Metasequoias aged 50-485 years is statistically reliable and can be used as a reference in predicting the growth and production of mature native Metasequoia.
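
    A minimal sketch of fitting one common height-dbh allometric form, h = 1.3 + a * dbh^b (with breast height at 1.3 m), follows; the synthetic data stand in for the 5503-tree Metasequoia sample, and this functional form is an assumption for illustration, not necessarily one of the paper's 53 models.

    ```python
    # Fitting a simple height-dbh allometric model with scipy.
    import numpy as np
    from scipy.optimize import curve_fit

    def height_model(dbh, a, b):
        return 1.3 + a * dbh ** b

    rng = np.random.default_rng(1)
    dbh = rng.uniform(10, 120, 300)                    # dbh in cm (synthetic)
    h_true = height_model(dbh, 2.2, 0.6)
    h_obs = h_true + rng.normal(0, 1.5, dbh.size)      # heights in m, with noise

    (a, b), _ = curve_fit(height_model, dbh, h_obs, p0=(1.0, 0.5))
    print(f"a = {a:.2f}, b = {b:.2f}")
    ```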

  18. Development and evaluation of height diameter at breast models for native Chinese Metasequoia

    PubMed Central

    Feng, Zhongke; Zhang, Zhixiang; Ma, Chenghui; Wang, Mingming; Lian, Bo-ling; Sun, Renjie; Zhang, Li

    2017-01-01

    Accurate tree height and diameter at breast height (dbh) are important input variables for growth and yield models. A total of 5503 Chinese Metasequoia trees were used in this study. We studied 53 fitted models, of which 7 were linear models and 46 were non-linear models. These models were divided into two groups of single models and multivariate models according to the number of independent variables. The results show that the allometry equation of tree height which has diameter at breast height as independent variable can better reflect the change of tree height; in addition the prediction accuracy of the multivariate composite models is higher than that of the single variable models. Although tree age is not the most important variable in the study of the relationship between tree height and dbh, the consideration of tree age when choosing models and parameters in model selection can make the prediction of tree height more accurate. The amount of data is also an important parameter what can improve the reliability of models. Other variables such as tree height, main dbh and altitude, etc can also affect models. In this study, the method of developing the recommended models for predicting the tree height of native Metasequoias aged 50–485 years is statistically reliable and can be used for reference in predicting the growth and production of mature native Metasequoia. PMID:28817600

  19. Delay selection by spike-timing-dependent plasticity in recurrent networks of spiking neurons receiving oscillatory inputs.

    PubMed

    Kerr, Robert R; Burkitt, Anthony N; Thomas, Doreen A; Gilson, Matthieu; Grayden, David B

    2013-01-01

    Learning rules, such as spike-timing-dependent plasticity (STDP), change the structure of networks of neurons based on the firing activity. A network level understanding of these mechanisms can help infer how the brain learns patterns and processes information. Previous studies have shown that STDP selectively potentiates feed-forward connections that have specific axonal delays, and that this underlies behavioral functions such as sound localization in the auditory brainstem of the barn owl. In this study, we investigate how STDP leads to the selective potentiation of recurrent connections with different axonal and dendritic delays during oscillatory activity. We develop analytical models of learning with additive STDP in recurrent networks driven by oscillatory inputs, and support the results using simulations with leaky integrate-and-fire neurons. Our results show selective potentiation of connections with specific axonal delays, which depended on the input frequency. In addition, we demonstrate how this can lead to a network becoming selective in the amplitude of its oscillatory response to this frequency. We extend this model of axonal delay selection within a single recurrent network in two ways. First, we show the selective potentiation of connections with a range of both axonal and dendritic delays. Second, we show axonal delay selection between multiple groups receiving out-of-phase, oscillatory inputs. We discuss the application of these models to the formation and activation of neuronal ensembles or cell assemblies in the cortex, and also to missing fundamental pitch perception in the auditory brainstem.
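
    The additive STDP rule this analysis builds on is commonly written as an exponential window: potentiation for pre-before-post spike pairs, depression otherwise. The sketch below uses illustrative parameter values, not those of the paper.

    ```python
    # Sketch of a standard additive STDP window.
    import numpy as np

    A_PLUS, A_MINUS = 0.005, 0.00525   # potentiation/depression amplitudes (assumed)
    TAU_PLUS = TAU_MINUS = 20.0        # time constants in ms (assumed)

    def stdp(dt_ms: np.ndarray) -> np.ndarray:
        """Weight change for spike time differences dt = t_post - t_pre."""
        return np.where(dt_ms >= 0,
                        A_PLUS * np.exp(-dt_ms / TAU_PLUS),
                        -A_MINUS * np.exp(dt_ms / TAU_MINUS))

    # With an axonal delay d, the effective pairing shifts to dt - d, which is
    # why particular delays can be selectively potentiated under oscillatory drive.
    print(stdp(np.array([-10.0, 0.0, 10.0])))
    ```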

  20. Delay Selection by Spike-Timing-Dependent Plasticity in Recurrent Networks of Spiking Neurons Receiving Oscillatory Inputs

    PubMed Central

    Kerr, Robert R.; Burkitt, Anthony N.; Thomas, Doreen A.; Gilson, Matthieu; Grayden, David B.

    2013-01-01

    Learning rules, such as spike-timing-dependent plasticity (STDP), change the structure of networks of neurons based on the firing activity. A network level understanding of these mechanisms can help infer how the brain learns patterns and processes information. Previous studies have shown that STDP selectively potentiates feed-forward connections that have specific axonal delays, and that this underlies behavioral functions such as sound localization in the auditory brainstem of the barn owl. In this study, we investigate how STDP leads to the selective potentiation of recurrent connections with different axonal and dendritic delays during oscillatory activity. We develop analytical models of learning with additive STDP in recurrent networks driven by oscillatory inputs, and support the results using simulations with leaky integrate-and-fire neurons. Our results show selective potentiation of connections with specific axonal delays, which depended on the input frequency. In addition, we demonstrate how this can lead to a network becoming selective in the amplitude of its oscillatory response to this frequency. We extend this model of axonal delay selection within a single recurrent network in two ways. First, we show the selective potentiation of connections with a range of both axonal and dendritic delays. Second, we show axonal delay selection between multiple groups receiving out-of-phase, oscillatory inputs. We discuss the application of these models to the formation and activation of neuronal ensembles or cell assemblies in the cortex, and also to missing fundamental pitch perception in the auditory brainstem. PMID:23408878

  1. Electrical Advantages of Dendritic Spines

    PubMed Central

    Gulledge, Allan T.; Carnevale, Nicholas T.; Stuart, Greg J.

    2012-01-01

    Many neurons receive excitatory glutamatergic input almost exclusively onto dendritic spines. In the absence of spines, the amplitudes and kinetics of excitatory postsynaptic potentials (EPSPs) at the site of synaptic input are highly variable and depend on dendritic location. We hypothesized that dendritic spines standardize the local geometry at the site of synaptic input, thereby reducing location-dependent variability of local EPSP properties. We tested this hypothesis using computational models of simplified and morphologically realistic spiny neurons that allow direct comparison of EPSPs generated on spine heads with EPSPs generated on dendritic shafts at the same dendritic locations. In all morphologies tested, spines greatly reduced location-dependent variability of local EPSP amplitude and kinetics, while having minimal impact on EPSPs measured at the soma. Spine-dependent standardization of local EPSP properties persisted across a range of physiologically relevant spine neck resistances, and in models with variable neck resistances. By reducing the variability of local EPSPs, spines standardized synaptic activation of NMDA receptors and voltage-gated calcium channels. Furthermore, spines enhanced activation of NMDA receptors and facilitated the generation of NMDA spikes and axonal action potentials in response to synaptic input. Finally, we show that dynamic regulation of spine neck geometry can preserve local EPSP properties following plasticity-driven changes in synaptic strength, but is inefficient in modifying the amplitude of EPSPs in other cellular compartments. These observations suggest that one function of dendritic spines is to standardize local EPSP properties throughout the dendritic tree, thereby allowing neurons to use similar voltage-sensitive postsynaptic mechanisms at all dendritic locations. PMID:22532875

  2. Nonlinear Dynamic Models in Advanced Life Support

    NASA Technical Reports Server (NTRS)

    Jones, Harry

    2002-01-01

    To facilitate analysis, ALS systems are often assumed to be linear and time invariant, but they usually have important nonlinear and dynamic aspects. Nonlinear dynamic behavior can be caused by time varying inputs, changes in system parameters, nonlinear system functions, closed loop feedback delays, and limits on buffer storage or processing rates. Dynamic models are usually cataloged according to the number of state variables. The simplest dynamic models are linear, using only integration, multiplication, addition, and subtraction of the state variables. A general linear model with only two state variables can produce all the possible dynamic behavior of linear systems with many state variables, including stability, oscillation, or exponential growth and decay. Linear systems can be described using mathematical analysis. Nonlinear dynamics can be fully explored only by computer simulations of models. Unexpected behavior is produced by simple models having only two or three state variables with simple mathematical relations between them. Closed loop feedback delays are a major source of system instability. Exceeding limits on buffer storage or processing rates forces systems to change operating mode. Different equilibrium points may be reached from different initial conditions. Instead of one stable equilibrium point, the system may have several equilibrium points, oscillate at different frequencies, or even behave chaotically, depending on the system inputs and initial conditions. The frequency spectrum of an output oscillation may contain harmonics and the sums and differences of input frequencies, but it may also contain a stable limit cycle oscillation not related to input frequencies. We must investigate the nonlinear dynamic aspects of advanced life support systems to understand and counter undesirable behavior.
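
    The claim that a linear model with only two state variables already spans decay, growth, and oscillation can be demonstrated in a few lines: for dx/dt = A x, the eigenvalues of A determine the behavior. The matrices below are illustrative choices.

    ```python
    # Two-state linear systems: eigenvalues of A determine the dynamics.
    import numpy as np

    def simulate(A, x0, dt=0.01, steps=2000):
        x = np.array(x0, dtype=float)
        traj = [x.copy()]
        for _ in range(steps):
            x = x + dt * (A @ x)      # forward-Euler integration
            traj.append(x.copy())
        return np.array(traj)

    decay = np.array([[-1.0, 0.0], [0.0, -2.0]])     # eigenvalues -1, -2: stable
    oscillate = np.array([[0.0, 1.0], [-1.0, 0.0]])  # eigenvalues +/- i: oscillation
    grow = np.array([[0.5, 0.0], [0.0, 0.2]])        # positive eigenvalues: growth

    for name, A in [("decay", decay), ("oscillate", oscillate), ("grow", grow)]:
        print(name, np.linalg.eigvals(A), simulate(A, [1.0, 0.0])[-1])
    ```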

  3. Predicting language outcomes for children learning AAC: Child and environmental factors

    PubMed Central

    Brady, Nancy C.; Thiemann-Bourque, Kathy; Fleming, Kandace; Matthews, Kris

    2014-01-01

    Purpose To investigate a model of language development for nonverbal preschool age children learning to communicate with AAC. Method Ninety-three preschool children with intellectual disabilities were assessed at Time 1, and 82 of these children were assessed one year later at Time 2. The outcome variable was the number of different words the children produced (with speech, sign or SGD). Children’s intrinsic predictor for language was modeled as a latent variable consisting of cognitive development, comprehension, play, and nonverbal communication complexity. Adult input at school and home, and amount of AAC instruction were proposed mediators of vocabulary acquisition. Results A confirmatory factor analysis revealed that measures converged as a coherent construct and an SEM model indicated that the intrinsic child predictor construct predicted different words children produced. The amount of input received at home but not at school was a significant mediator. Conclusions Our hypothesized model accurately reflected a latent construct of Intrinsic Symbolic Factor (ISF). Children who evidenced higher initial levels of ISF and more adult input at home produced more words one year later. Findings support the need to assess multiple child variables, and suggest interventions directed to the indicators of ISF and input. PMID:23785187

  4. To twist, roll, stroke or poke? A study of input devices for menu navigation in the cockpit.

    PubMed

    Stanton, Neville A; Harvey, Catherine; Plant, Katherine L; Bolton, Luke

    2013-01-01

    Modern interfaces within the aircraft cockpit integrate many flight management system (FMS) functions into a single system. The success of a user's interaction with an interface depends upon the optimisation between the input device, tasks and environment within which the system is used. In this study, four input devices were evaluated using a range of Human Factors methods, in order to assess aspects of usability including task interaction times, error rates, workload, subjective usability and physical discomfort. The performance of the four input devices was compared using a holistic approach and the findings showed that no single input device produced consistently high performance scores across all of the variables evaluated. The touch screen produced the highest number of 'best' scores; however, discomfort ratings for this device were high, suggesting that it is not an ideal solution as both physical and cognitive aspects of performance must be accounted for in design. This study evaluated four input devices for control of a screen-based flight management system. A holistic approach was used to evaluate both cognitive and physical performance. Performance varied across the dependent variables and between the devices; however, the touch screen produced the largest number of 'best' scores.

  5. Interacting with notebook input devices: an analysis of motor performance and users' expertise.

    PubMed

    Sutter, Christine; Ziefle, Martina

    2005-01-01

    In the present study the usability of two different types of notebook input devices was examined. The independent variables were input device (touchpad vs. mini-joystick) and user expertise (expert vs. novice state). There were 30 participants, of whom 15 were touchpad experts and the other 15 were mini-joystick experts. The experimental tasks were a point-click task (Experiment 1) and a point-drag-drop task (Experiment 2). Dependent variables were the time and accuracy of cursor control. To assess carryover effects, we had the participants complete both experiments, using not only the input device for which they were experts but also the device for which they were novices. Results showed the touchpad performance to be clearly superior to mini-joystick performance. Overall, experts showed better performance than did novices. The significant interaction of input device and expertise showed that the use of an unknown device is difficult, but only for touchpad experts, who were remarkably slower and less accurate when using a mini-joystick. Actual and potential applications of this research include an evaluation of current notebook input devices. The outcomes allow ergonomic guidelines to be derived for optimized usage and design of the mini-joystick and touchpad devices.

  6. A comprehensive analysis of earthquake damage patterns using high dimensional model representation feature selection

    NASA Astrophysics Data System (ADS)

    Taşkin Kaya, Gülşen

    2013-10-01

    Recently, earthquake damage assessment using satellite images has been a very popular ongoing research direction. Especially with the availability of very high resolution (VHR) satellite images, quite detailed damage maps at the building scale have been produced, and various studies have been conducted in the literature. As the spatial resolution of satellite images increases, distinguishing damage patterns becomes more difficult when only spectral information is used during classification. In order to overcome this difficulty, textural information needs to be incorporated into the classification to improve the visual quality and reliability of the damage map. Many kinds of textural information can be derived from VHR satellite images, depending on the algorithm used. However, extracting and evaluating textural information is generally a time-consuming process, especially for the large areas affected by an earthquake, due to the size of the VHR image. Therefore, in order to provide a quick damage map, the most useful features describing damage patterns need to be known in advance, as do the redundant features. In this study, a very high resolution satellite image acquired after the Bam, Iran earthquake was used to identify the earthquake damage. Textural information was used during the classification in addition to the spectral information. For textural information, second-order Haralick features were extracted from the panchromatic image for the area of interest using the gray level co-occurrence matrix with different window sizes and directions. In addition to using spatial features in classification, the most useful features representing the damage characteristics were selected with a novel feature selection method based on high dimensional model representation (HDMR), which gives the sensitivity of each feature during classification. HDMR was recently proposed as an efficient tool to capture the input-output relationships of high-dimensional systems for many problems in science and engineering, and is designed to improve the efficiency of deducing high-dimensional behaviors. The method is formed by a particular organization of low-dimensional component functions, in which each function represents the contribution of one or more input variables to the output variables.
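
    The generic HDMR decomposition that such methods build on has the following standard form (the paper's exact truncation and component construction may differ):

    ```latex
    % Generic HDMR expansion of a multivariate response.
    \begin{equation}
      f(x_1,\dots,x_n) = f_0
        + \sum_{i=1}^{n} f_i(x_i)
        + \sum_{1 \le i < j \le n} f_{ij}(x_i, x_j)
        + \cdots
        + f_{12\ldots n}(x_1,\dots,x_n),
    \end{equation}
    % where f_0 is the mean response; in practice the low-order terms usually
    % capture most of the variance, which is what makes sensitivity-based
    % feature ranking computationally cheap.
    ```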

  7. A Web Browsing System by Eye-gaze Input

    NASA Astrophysics Data System (ADS)

    Abe, Kiyohiko; Owada, Kosuke; Ohi, Shoichi; Ohyama, Minoru

    We have developed an eye-gaze input system for people with severe physical disabilities, such as amyotrophic lateral sclerosis (ALS) patients. This system utilizes a personal computer and a home video camera to detect eye-gaze under natural light. The system detects both vertical and horizontal eye-gaze by simple image analysis and does not require special image processing units or sensors. We also developed a platform for eye-gaze input based on our system. In this paper, we propose a new web browsing system for physically disabled computer users as an application of this platform. The proposed web browsing system uses a method of direct indicator selection, in which indicators are categorized by function and organized hierarchically; users select the desired function by switching between indicator groups. The system also analyzes the locations of selectable objects on a web page, such as hyperlinks, radio buttons and edit boxes, and stores these locations so that the mouse cursor can jump directly to a candidate input object. This enables web browsing at a faster pace.

  8. Alternative Computer Access for Young Handicapped Children: A Systematic Selection Procedure.

    ERIC Educational Resources Information Center

    Morris, Karen J.

    The paper describes the type of computer access products appropriate for use by handicapped children and presents a systematic procedure for selection of such input and output devices. Modification of computer input is accomplished by three strategies: modifying the keyboard, adding alternative keyboards, and attaching switches to the keyboard.…

  9. Where do we store the memory representations that guide attention?

    PubMed Central

    Woodman, Geoffrey F.; Carlisle, Nancy B.; Reinhart, Robert M. G.

    2013-01-01

    During the last decade one of the most contentious and heavily studied topics in the attention literature has been the role that working memory representations play in controlling perceptual selection. The hypothesis has been advanced that to have attention select a certain perceptual input from the environment, we only need to represent that item in working memory. Here we summarize the work indicating that the relationship between what representations are maintained in working memory and what perceptual inputs are selected is not so simple. First, it appears that attentional selection is also determined by high-level task goals that mediate the relationship between working memory storage and attentional selection. Second, much of the recent work from our laboratory has focused on the role of long-term memory in controlling attentional selection. We review recent evidence supporting the proposal that working memory representations are critical during the initial configuration of attentional control settings, but that after those settings are established long-term memory representations play an important role in controlling which perceptual inputs are selected by mechanisms of attention. PMID:23444390

  10. Boosting Learning Algorithm for Stock Price Forecasting

    NASA Astrophysics Data System (ADS)

    Wang, Chengzhang; Bai, Xiaoming

    2018-03-01

    To tackle the complexity and uncertainty of stock market behavior, many studies have introduced machine learning algorithms to forecast stock prices. The ANN (artificial neural network) is one of the most successful and promising of these applications. We propose a boosting-ANN model in this paper to predict the stock close price. On the basis of boosting theory, multiple weak prediction machines, i.e. ANNs, are assembled to build a stronger predictor, i.e. the boosting-ANN model. New error criteria for the weak learning machine and new rules for updating the weights are adopted in this study. We select technical factors from financial markets as the forecasting input variables. The final results demonstrate that the boosting-ANN model works better than the alternatives for stock price forecasting.
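
    A simplified AdaBoost.R2-style boosting loop for regression, sketched with small MLPs as the weak learners, illustrates the general assembly idea; the paper's error criteria and weight-update rules differ, the final combination here is a weighted mean rather than the usual weighted median, and the inputs are random placeholders for technical factors.

    ```python
    # Simplified boosting-of-ANNs sketch (AdaBoost.R2-like, not the paper's rules).
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)
    X = rng.normal(size=(400, 6))                 # stand-ins for technical factors
    y = X[:, 0] - 0.5 * X[:, 1] ** 2 + 0.1 * rng.normal(size=400)

    n, T = len(y), 10
    w = np.full(n, 1.0 / n)                       # per-sample weights
    models, alphas = [], []
    for _ in range(T):
        idx = rng.choice(n, n, p=w)               # weighted resampling
        m = MLPRegressor(hidden_layer_sizes=(16,), max_iter=800,
                         random_state=0).fit(X[idx], y[idx])
        err = np.abs(m.predict(X) - y)
        L = err / err.max()                       # normalised per-sample loss
        Lbar = np.sum(w * L)
        if Lbar >= 0.5:                           # weak learner too poor: stop
            break
        beta = Lbar / (1.0 - Lbar)
        w = w * beta ** (1.0 - L)                 # emphasise hard samples
        w = w / w.sum()
        models.append(m)
        alphas.append(np.log(1.0 / beta))

    alphas = np.array(alphas)
    pred = sum(a * m.predict(X) for a, m in zip(alphas, models)) / alphas.sum()
    print("training RMSE:", np.sqrt(np.mean((pred - y) ** 2)))
    ```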

  11. A meteorologically-driven yield reduction model for spring and winter wheat

    NASA Technical Reports Server (NTRS)

    Ravet, F. W.; Cremins, W. J.; Taylor, T. W.; Ashburn, P.; Smika, D.; Aaronson, A. (Principal Investigator)

    1983-01-01

    A yield reduction model for spring and winter wheat was developed for large-area crop condition assessment. Reductions are expressed in percentage from a base yield and are calculated on a daily basis. The algorithm contains two integral components: a two-layer soil water budget model and a crop calendar routine. Yield reductions associated with hot, dry winds (Sukhovey) and soil moisture stress are determined. Input variables include evapotranspiration, maximum temperature and precipitation; subsequently crop-stage, available water holding percentage and stress duration are evaluated. No specific base yield is required and may be selected by the user; however, it may be generally characterized as the maximum likely to be produced commercially at a location.

  12. The performance of diphoton primary vertex reconstruction methods in H → γγ+Met channel of ATLAS experiment

    NASA Astrophysics Data System (ADS)

    Tomiwa, K. G.

    2017-09-01

    The search for new physics in the H → γγ + Met channel relies on how well the missing transverse energy is reconstructed. The Met algorithm used by the ATLAS experiment in turn uses input variables, such as photons and jets, which depend on the reconstruction of the primary vertex. This document presents the performance of two di-photon vertex reconstruction algorithms, the hardest vertex method and the Neural Network method. Comparing the performance of these algorithms on the nominal Standard Model sample and a Beyond the Standard Model sample, we see that the Neural Network method of primary vertex selection performs better overall than the hardest vertex method.

  13. An assessment of support vector machines for land cover classification

    USGS Publications Warehouse

    Huang, C.; Davis, L.S.; Townshend, J.R.G.

    2002-01-01

    The support vector machine (SVM) is a group of theoretically superior machine learning algorithms. It was found competitive with the best available machine learning algorithms in classifying high-dimensional data sets. This paper gives an introduction to the theoretical development of the SVM and an experimental evaluation of its accuracy, stability and training speed in deriving land cover classifications from satellite images. The SVM was compared to three other popular classifiers, including the maximum likelihood classifier (MLC), neural network classifiers (NNC) and decision tree classifiers (DTC). The impacts of kernel configuration on the performance of the SVM and of the selection of training data and input variables on the four classifiers were also evaluated in this experiment.
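
    The kind of comparison described here can be sketched with an RBF-kernel SVM classifying pixel feature vectors against a decision tree; the synthetic band values below are placeholders for multispectral reflectances, and the kernel settings are illustrative of the configuration choices the paper evaluates.

    ```python
    # Sketch of an SVM-vs-decision-tree land cover classification comparison.
    import numpy as np
    from sklearn.svm import SVC
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n_classes, n_bands = 4, 6
    X = np.vstack([rng.normal(c, 1.0, size=(200, n_bands))
                   for c in range(n_classes)])     # synthetic band vectors
    y = np.repeat(np.arange(n_classes), 200)       # land cover class labels
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

    svm = SVC(kernel="rbf", C=10.0, gamma="scale").fit(X_tr, y_tr)
    tree = DecisionTreeClassifier(random_state=0).fit(X_tr, y_tr)
    print("SVM accuracy: ", svm.score(X_te, y_te))
    print("Tree accuracy:", tree.score(X_te, y_te))
    ```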

  14. Stochastic Modeling of the Environmental Impacts of the Mingtang Tunneling Project

    NASA Astrophysics Data System (ADS)

    Li, Xiaojun; Li, Yandong; Chang, Ching-Fu; Chen, Ziyang; Tan, Benjamin Zhi Wen; Sege, Jon; Wang, Changhong; Rubin, Yoram

    2017-04-01

    This paper investigates the environmental impacts of a major tunneling project in China. Of particular interest is the drawdown of the water table, due to its potential impacts on ecosystem health and on agricultural activity. Because of the scarcity of data, the study pursues a Bayesian stochastic approach built around a numerical model. We adopted the Bayesian approach with the goal of deriving the posterior distributions of the dependent variables conditional on local data. The choice of the Bayesian approach for this study is somewhat non-trivial because of the scarcity of in-situ measurements. The thought guiding this selection is that prior distributions for the model input variables are valuable tools when data are scarce, and that the Bayesian approach provides a good starting point for further updates as and if additional data become available. To construct effective priors, a systematic approach was developed and implemented for building informative priors from other, well-documented sites that bear geological and hydrological similarity to the target site (the Mingtang tunneling project). The approach is built around two classes of similarity criteria: a physically-based set and an additional set covering epistemic criteria. The prior construction strategy was implemented for the hydraulic conductivity of the various rock types at the site (granite and gneiss) and for modeling the geometry and conductivity of the fault zones. Additional elements of our strategy include (1) modeling the water table through bounding surfaces representing upper and lower limits, and (2) modeling the effective conductivity as a random variable (varying between realizations, not in space). The approach was tested successfully on its ability to predict the tunnel infiltration fluxes and against observations of drying soils.

  15. Evaluating total inorganic nitrogen in coastal waters through fusion of multi-temporal RADARSAT-2 and optical imagery using random forest algorithm

    NASA Astrophysics Data System (ADS)

    Liu, Meiling; Liu, Xiangnan; Li, Jin; Ding, Chao; Jiang, Jiale

    2014-12-01

    Satellites routinely provide frequent, large-scale, near-surface views of many oceanographic variables pertinent to plankton ecology. However, the nutrient fertility of water can be challenging to detect accurately using remote sensing technology. This research has explored an approach to estimate the nutrient fertility in coastal waters through the fusion of synthetic aperture radar (SAR) images and optical images using the random forest (RF) algorithm. The estimation of total inorganic nitrogen (TIN) in the Hong Kong Sea, China, was used as a case study. In March of 2009 and May and August of 2010, a sequence of multi-temporal in situ data and CCD images from China's HJ-1 satellite and RADARSAT-2 images were acquired. Four sensitive parameters were selected as input variables to evaluate TIN: single-band reflectance, a normalized difference spectral index (NDSI) and HV and VH polarizations. The RF algorithm was used to merge the different input variables from the SAR and optical imagery to generate a new dataset (i.e., the TIN outputs). The results showed the temporal-spatial distribution of TIN. The TIN values decreased from coastal waters to the open water areas, and TIN values in the northeast area were higher than those found in the southwest region of the study area. The maximum TIN values occurred in May. Additionally, the estimation accuracy for estimating TIN was significantly improved when the SAR and optical data were used in combination rather than a single data type alone. This study suggests that this method of estimating nutrient fertility in coastal waters by effectively fusing data from multiple sensors is very promising.
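
    The fusion step described above amounts to stacking optical-derived features (single-band reflectance, an NDSI-like index) with SAR polarization features and regressing TIN with a random forest. The arrays below are synthetic placeholders, not HJ-1 or RADARSAT-2 data.

    ```python
    # Sketch of SAR + optical feature fusion with a random forest regressor.
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    rng = np.random.default_rng(0)
    n = 500
    band = rng.uniform(0.0, 0.3, n)          # single-band reflectance
    ndsi = rng.uniform(-1.0, 1.0, n)         # normalised difference spectral index
    hv = rng.uniform(-25.0, -10.0, n)        # HV backscatter, dB
    vh = rng.uniform(-25.0, -10.0, n)        # VH backscatter, dB
    tin = 2.0 * band - 0.5 * ndsi + 0.05 * (hv + vh) + rng.normal(0, 0.1, n)

    X = np.column_stack([band, ndsi, hv, vh])
    rf = RandomForestRegressor(n_estimators=300, random_state=0).fit(X, tin)
    print("R^2 on training data:", rf.score(X, tin))
    print("feature importances:", rf.feature_importances_)
    ```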

  16. Using a Bayesian network to predict barrier island geomorphologic characteristics

    USGS Publications Warehouse

    Gutierrez, Ben; Plant, Nathaniel G.; Thieler, E. Robert; Turecek, Aaron

    2015-01-01

    Quantifying geomorphic variability of coastal environments is important for understanding and describing the vulnerability of coastal topography, infrastructure, and ecosystems to future storms and sea level rise. Here we use a Bayesian network (BN) to test the importance of multiple interactions between barrier island geomorphic variables. This approach models complex interactions and handles uncertainty, which is intrinsic to future sea level rise, storminess, or anthropogenic processes (e.g., beach nourishment and other forms of coastal management). The BN was developed and tested at Assateague Island, Maryland/Virginia, USA, a barrier island with sufficient geomorphic and temporal variability to evaluate our approach. We tested the ability to predict dune height, beach width, and beach height variables using inputs that included longer-term, larger-scale, or external variables (historical shoreline change rates, distances to inlets, barrier width, mean barrier elevation, and anthropogenic modification). Data sets from three different years spanning nearly a decade sampled substantial temporal variability and serve as a proxy for analysis of future conditions. We show that distinct geomorphic conditions are associated with different long-term shoreline change rates and that the most skillful predictions of dune height, beach width, and beach height depend on including multiple input variables simultaneously. The predictive relationships are robust to variations in the amount of input data and to variations in model complexity. The resulting model can be used to evaluate scenarios related to coastal management plans and/or future scenarios where shoreline change rates may differ from those observed historically.

  17. Symbolic PathFinder: Symbolic Execution of Java Bytecode

    NASA Technical Reports Server (NTRS)

    Pasareanu, Corina S.; Rungta, Neha

    2010-01-01

    Symbolic Pathfinder (SPF) combines symbolic execution with model checking and constraint solving for automated test case generation and error detection in Java programs with unspecified inputs. In this tool, programs are executed on symbolic inputs representing multiple concrete inputs. Values of variables are represented as constraints generated from the analysis of Java bytecode. The constraints are solved using off-the shelf solvers to generate test inputs guaranteed to achieve complex coverage criteria. SPF has been used successfully at NASA, in academia, and in industry.

  18. Role of Updraft Velocity in Temporal Variability of Global Cloud Hydrometeor Number

    NASA Technical Reports Server (NTRS)

    Sullivan, Sylvia C.; Lee, Dong Min; Oreopoulos, Lazaros; Nenes, Athanasios

    2016-01-01

    Understanding how dynamical and aerosol inputs affect the temporal variability of hydrometeor formation in climate models will help to explain sources of model diversity in cloud forcing, to provide robust comparisons with data, and, ultimately, to reduce the uncertainty in estimates of the aerosol indirect effect. This variability attribution can be done at various spatial and temporal resolutions with metrics derived from online adjoint sensitivities of droplet and crystal number to relevant inputs. Such metrics are defined and calculated from simulations using the NASA Goddard Earth Observing System Model, Version 5 (GEOS-5) and the National Center for Atmospheric Research Community Atmosphere Model Version 5.1 (CAM5.1). Input updraft velocity fluctuations can explain as much as 48% of temporal variability in output ice crystal number and 61% in droplet number in GEOS-5 and up to 89% of temporal variability in output ice crystal number in CAM5.1. In both models, this vertical velocity attribution depends strongly on altitude. Despite its importance for hydrometeor formation, simulated vertical velocity distributions are rarely evaluated against observations due to the sparsity of relevant data. Coordinated effort by the atmospheric community to develop more consistent, observationally based updraft treatments will help to close this knowledge gap.

  19. Role of updraft velocity in temporal variability of global cloud hydrometeor number

    DOE PAGES

    Sullivan, Sylvia C.; Lee, Dongmin; Oreopoulos, Lazaros; ...

    2016-05-16

    Understanding how dynamical and aerosol inputs affect the temporal variability of hydrometeor formation in climate models will help to explain sources of model diversity in cloud forcing, to provide robust comparisons with data, and, ultimately, to reduce the uncertainty in estimates of the aerosol indirect effect. This variability attribution can be done at various spatial and temporal resolutions with metrics derived from online adjoint sensitivities of droplet and crystal number to relevant inputs. Such metrics are defined and calculated from simulations using the NASA Goddard Earth Observing System Model, Version 5 (GEOS-5) and the National Center for Atmospheric Research Community Atmosphere Model Version 5.1 (CAM5.1). Input updraft velocity fluctuations can explain as much as 48% of temporal variability in output ice crystal number and 61% in droplet number in GEOS-5 and up to 89% of temporal variability in output ice crystal number in CAM5.1. In both models, this vertical velocity attribution depends strongly on altitude. Despite its importance for hydrometeor formation, simulated vertical velocity distributions are rarely evaluated against observations due to the sparsity of relevant data. Finally, coordinated effort by the atmospheric community to develop more consistent, observationally based updraft treatments will help to close this knowledge gap.

  20. Role of updraft velocity in temporal variability of global cloud hydrometeor number

    NASA Astrophysics Data System (ADS)

    Sullivan, Sylvia C.; Lee, Dongmin; Oreopoulos, Lazaros; Nenes, Athanasios

    2016-05-01

    Understanding how dynamical and aerosol inputs affect the temporal variability of hydrometeor formation in climate models will help to explain sources of model diversity in cloud forcing, to provide robust comparisons with data, and, ultimately, to reduce the uncertainty in estimates of the aerosol indirect effect. This variability attribution can be done at various spatial and temporal resolutions with metrics derived from online adjoint sensitivities of droplet and crystal number to relevant inputs. Such metrics are defined and calculated from simulations using the NASA Goddard Earth Observing System Model, Version 5 (GEOS-5) and the National Center for Atmospheric Research Community Atmosphere Model Version 5.1 (CAM5.1). Input updraft velocity fluctuations can explain as much as 48% of temporal variability in output ice crystal number and 61% in droplet number in GEOS-5 and up to 89% of temporal variability in output ice crystal number in CAM5.1. In both models, this vertical velocity attribution depends strongly on altitude. Despite its importance for hydrometeor formation, simulated vertical velocity distributions are rarely evaluated against observations due to the sparsity of relevant data. Coordinated effort by the atmospheric community to develop more consistent, observationally based updraft treatments will help to close this knowledge gap.

  1. Soft sensor modeling based on variable partition ensemble method for nonlinear batch processes

    NASA Astrophysics Data System (ADS)

    Wang, Li; Chen, Xiangguang; Yang, Kai; Jin, Huaiping

    2017-01-01

    Batch processes are always characterized by nonlinearity and system uncertainty, so a conventional single model may be ill-suited. A soft sensor based on a local learning strategy with a variable partition ensemble method is developed for quality prediction in nonlinear and non-Gaussian batch processes. A set of input variable subsets is obtained by bootstrapping and the PMI criterion. Multiple local GPR models are then developed, one for each local input variable subset. When a new test sample arrives, the posterior probability of each best-performing local model is estimated by Bayesian inference and used to combine the local GPR models into the final prediction. The proposed soft sensor is demonstrated by application to an industrial fed-batch chlortetracycline fermentation process.
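
    A sketch of the variable-partition ensemble idea follows: fit one GPR per bootstrapped input-variable subset, then weight the predictions by each model's fit on held-out data. The exponential weighting below is a simple stand-in for the Bayesian posterior combination, and the data are synthetic.

    ```python
    # Variable-partition GPR ensemble sketch with likelihood-style weighting.
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 8))
    y = X[:, 0] + np.sin(X[:, 2]) + 0.1 * rng.normal(size=200)
    X_tr, y_tr, X_va, y_va = X[:150], y[:150], X[150:], y[150:]

    subsets = [rng.choice(8, size=4, replace=False) for _ in range(5)]
    models = [GaussianProcessRegressor().fit(X_tr[:, s], y_tr) for s in subsets]

    # Weight each local model by exp(-validation MSE), normalised.
    errs = np.array([np.mean((m.predict(X_va[:, s]) - y_va) ** 2)
                     for m, s in zip(models, subsets)])
    w = np.exp(-errs)
    w /= w.sum()

    x_new = rng.normal(size=(1, 8))
    pred = sum(wi * m.predict(x_new[:, s])[0]
               for wi, m, s in zip(w, models, subsets))
    print("combined prediction:", pred)
    ```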

  2. A computer program for simulating geohydrologic systems in three dimensions

    USGS Publications Warehouse

    Posson, D.R.; Hearne, G.A.; Tracy, J.V.; Frenzel, P.F.

    1980-01-01

    This document is directed toward individuals who wish to use a computer program to simulate ground-water flow in three dimensions. The strongly implicit procedure (SIP) numerical method is used to solve the set of simultaneous equations. New data processing techniques and program input and output options are emphasized. The aquifer system to be modeled may be heterogeneous and anisotropic, and may include both artesian and water-table conditions. Systems which consist of well-defined alternating layers of highly permeable and poorly permeable material may be represented by a sequence of equations for two-dimensional flow in each of the highly permeable units. Boundaries where head or flux is user-specified may be irregularly shaped. The program also allows the user to represent streams as limited-source boundaries when the streamflow is small in relation to the hydraulic stress on the system. The data-processing techniques relating to 'cube' input and output, to swapping of layers, to restarting of simulations, to free-format NAMELIST input, to the details of each subroutine's logic, and to the overlay program structure are discussed. The program is capable of processing large models that might overflow computer memories with conventional programs. Detailed instructions for selecting program options, for initializing the data arrays, for defining 'cube' output lists and maps, and for plotting hydrographs of calculated and observed heads and/or drawdowns are provided. Output may be restricted to those nodes of particular interest, thereby reducing the volume of printout for modelers, which may be critical when working at remote terminals. 'Cube' input commands allow the modeler to set aquifer parameters and initialize the model with very few input records. Appendixes provide instructions to compile the program, definitions and cross-references for program variables, a summary of the FLECS structured FORTRAN programming language, listings of the FLECS and FORTRAN source code, and samples of input and output for example simulations. (USGS)

  3. Application of experimental design for the optimization of artificial neural network-based water quality model: a case study of dissolved oxygen prediction.

    PubMed

    Šiljić Tomić, Aleksandra; Antanasijević, Davor; Ristić, Mirjana; Perić-Grujić, Aleksandra; Pocajt, Viktor

    2018-04-01

    This paper presents an application of experimental design for the optimization of an artificial neural network (ANN) for the prediction of dissolved oxygen (DO) content in the Danube River. The aim of this research was to obtain a more reliable ANN model that uses fewer monitoring records, by simultaneous optimization of the following model parameters: number of monitoring sites, number of historical monitoring data (expressed in years), and number of input water quality parameters used. A Box-Behnken three-factor, three-level experimental design was applied for simultaneous spatial, temporal, and input variable optimization of the ANN model. The prediction of DO was performed using a feed-forward back-propagation neural network (BPNN), while the selection of the most important inputs was done off-model using a multi-filter approach that combines a chi-square ranking in the first step with a correlation-based elimination in the second step. The contour plots of absolute and relative error response surfaces were utilized to determine the optimal values of the design factors. From the contour plots, two BPNN models that cover the entire Danube flow through Serbia are proposed: an upstream model (BPNN-UP) that covers 8 monitoring sites upstream of Belgrade and uses 12 inputs measured over a 7-year period, and a downstream model (BPNN-DOWN) that covers 9 monitoring sites and uses 11 input parameters measured over a 6-year period. The main difference between the two models is that BPNN-UP utilizes inputs such as BOD, P, and PO4^3-, which is in accordance with the fact that this model covers the northern part of Serbia (Vojvodina Autonomous Province), which is well known for agricultural production and extensive use of fertilizers. Both models have shown very good agreement between measured and predicted DO (with R^2 ≥ 0.86) and demonstrated that they can effectively forecast DO content in the Danube River.
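
    The two-step multi-filter can be sketched as a chi-square ranking followed by correlation-based elimination. Because a chi-square test needs non-negative features and class labels, the sketch scales the features and bins DO into classes; these steps, the thresholds, and the data are illustrative assumptions, not the paper's exact procedure.

    ```python
    # Sketch of a chi-square ranking + correlation-elimination input filter.
    import numpy as np
    from sklearn.preprocessing import MinMaxScaler
    from sklearn.feature_selection import chi2

    rng = np.random.default_rng(0)
    X = rng.normal(size=(300, 12))                 # candidate water-quality inputs
    do = X[:, 0] - 0.7 * X[:, 3] + 0.1 * rng.normal(size=300)

    X_pos = MinMaxScaler().fit_transform(X)        # chi2 needs non-negative inputs
    do_cls = np.digitize(do, np.quantile(do, [0.33, 0.66]))  # bin DO into 3 classes
    scores, _ = chi2(X_pos, do_cls)
    ranked = np.argsort(scores)[::-1]              # step 1: chi-square ranking

    selected = []
    R = np.corrcoef(X, rowvar=False)
    for i in ranked:                               # step 2: drop correlated inputs
        if all(abs(R[i, j]) < 0.9 for j in selected):
            selected.append(i)
    print("selected feature indices:", selected)
    ```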

  4. EMODnet MedSea Checkpoint for sustainable Blue Growth

    NASA Astrophysics Data System (ADS)

    Moussat, Eric; Pinardi, Nadia; Manzella, Giuseppe; Blanc, Frederique

    2016-04-01

    The EMODnet checkpoint is a basin-wide monitoring-system assessment activity that aims to support sustainable Blue Growth at the scale of the European sea basins by: 1) clarifying the observation landscape across all compartments of the marine environment, including air, water, seabed, biota and human activities, pointing to the existing programs, national, European and international; 2) evaluating fitness-for-use indicators that show the accessibility and usability of observation and modelling data sets and their roles and synergies, based upon applications selected by the European Marine Environment Strategy; 3) prioritizing the needs to optimize the overall monitoring infrastructure (in situ and satellite data collection and assembly, data management and networking, modelling and forecasting, geo-infrastructure) and releasing recommendations for evolutions to better meet application requirements in view of sustainable Blue Growth. The assessment is designed for: institutional stakeholders, for decision making on observation and monitoring systems; data providers and producers, to learn how data collected once for a given purpose could fit other user needs; and end-users interested in a regional status and possible uses of existing monitoring data. The selected end-user applications are of paramount importance for: (i) the blue economy sector (offshore industries, fisheries); (ii) marine environment variability and change (eutrophication, river inputs and ocean climate change impacts); (iii) emergency management (oil spills); and (iv) preservation of natural resources and biodiversity (Marine Protected Areas). The end-user applications generate innovative products based on the existing observation landscape. The fitness-for-use assessment is made by comparing the expected product specifications with the quality of the product derived from the selected data. This involves the development of checkpoint information and indicators based on data quality and metadata standards for geographic information (ISO 19157 and ISO 19115, respectively). The fitness for use of the input datasets is assessed using two categories of criteria to determine how these datasets fit the user requirements that drive users to select one data source rather than another, and to show the performance and gaps of the present monitoring systems: data appropriateness (what is made available to the user?) and data availability (how is it made available to the user?). All information is stored in a GIS platform and made available through two types of interfaces: front-end interfaces with users, to present the input data used by all challenges, the innovative products generated by the challenges, and the assessment indicators; and back-end interfaces with partners, to store the checkpoint descriptors of input data, the specifications used to generate targeted products, and the catalogue information of products with associated checkpoint indicators linked to the input data. The validation of the records is done at three levels: technical (GIS), challenge (use), and sea-basin (synthesis of monitoring data adequacy, including expert comments), ending with the production of a yearly Data Adequacy Report.

  5. Influence of Coastal Submarine Groundwater Discharges on Seagrass Communities in a Subtropical Karstic Environment.

    PubMed

    Kantún-Manzano, C A; Herrera-Silveira, J A; Arcega-Cabrera, F

    2018-01-01

    The influence of coastal submarine groundwater discharges (SGD) on the distribution and abundance of seagrass meadows was investigated. In 2012, hydrological variability, nutrient variability in sediments and the biotic characteristics of two seagrass beds, one with SGD present and one without, were studied. Findings showed that SGD inputs were related to one dominant seagrass species. To further understand this, a generalized additive model (GAM) was used to explore the relationship between seagrass biomass and environmental conditions (water and sediment variables). Salinity (range 21-35.5 PSU) was the most influential variable (85%), explaining why H. wrightii was the sole plant species present at the SGD site. At the site without SGD, a GAM could not be fitted, since the environmental variables could not explain more than 60% of the total variance. This research shows the relevance of monitoring SGD inputs in coastal karstic areas, since they significantly affect the biotic characteristics of seagrass beds.
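
    A minimal sketch of the GAM step, assuming the `pygam` package is available; the covariates and synthetic data below are placeholders, not the study's measurements.

```python
# Fit a generalized additive model of biomass against salinity and a
# hypothetical sediment covariate; smooth terms s(0), s(1) act on columns.
import numpy as np
from pygam import LinearGAM, s

rng = np.random.default_rng(0)
n = 120
salinity = rng.uniform(21, 35.5, n)        # PSU, range from the abstract
sediment_nutrients = rng.uniform(0, 1, n)  # hypothetical covariate

# Hypothetical biomass response dominated by salinity:
biomass = (50 - 0.2 * (salinity - 28) ** 2
           + 5 * sediment_nutrients
           + rng.normal(0, 2, n))

X = np.column_stack([salinity, sediment_nutrients])
gam = LinearGAM(s(0) + s(1)).fit(X, biomass)

# The summary's pseudo-R^2 gives a rough analogue of the explained
# variance the study reports (salinity dominant, ~85%).
gam.summary()
```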

  6. Soft sensor for real-time cement fineness estimation.

    PubMed

    Stanišić, Darko; Jorgovanović, Nikola; Popov, Nikola; Čongradac, Velimir

    2015-03-01

    This paper describes the design and implementation of soft sensors to estimate cement fineness. Soft sensors are mathematical models that use available data to provide real-time information on process variables when that information, for whatever reason, is not available by direct measurement. In this application, soft sensors provide information on a process variable normally obtained from off-line laboratory tests performed at long time intervals. Cement fineness is one of the crucial parameters that define the quality of the produced cement. Providing real-time information on cement fineness using soft sensors can overcome the limitations and problems that originate from the lack of information between two laboratory tests. The model inputs were selected from candidate process variables using an information-theoretic approach. Models based on multi-layer perceptrons were developed, and their ability to estimate the cement fineness of laboratory samples was analyzed. The models with the best performance and the best capacity to adapt to changes in the cement grinding circuit were selected to implement the soft sensors. The soft sensors were tested using data from continuous cement production to demonstrate their use in real-time fineness estimation. Their performance was highly satisfactory, and the sensors proved capable of providing valuable information on cement grinding circuit performance. After successful off-line tests, the soft sensors were implemented and installed in the control room of a cement factory. Results on site confirm the results obtained during soft sensor development. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.
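
    The information-theoretic input selection step can be illustrated with scikit-learn's mutual information estimator; the process variable names below are hypothetical stand-ins for the grinding-circuit candidates, and the data are synthetic.

```python
# Rank candidate process variables by estimated mutual information with
# the lab-measured fineness; highest-MI variables become model inputs.
import numpy as np
from sklearn.feature_selection import mutual_info_regression

rng = np.random.default_rng(1)
n = 500
candidates = {                      # hypothetical process variables
    "separator_speed": rng.normal(size=n),
    "mill_power":      rng.normal(size=n),
    "fresh_feed_rate": rng.normal(size=n),
    "elevator_load":   rng.normal(size=n),
}
X = np.column_stack(list(candidates.values()))

# Synthetic fineness depending on two of the candidates:
fineness = 2.0 * X[:, 0] - 1.0 * X[:, 2] + rng.normal(scale=0.5, size=n)

mi = mutual_info_regression(X, fineness, random_state=0)
for name, score in sorted(zip(candidates, mi), key=lambda t: -t[1]):
    print(f"{name:16s} MI = {score:.3f}")
```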

  7. Method and apparatus for smart battery charging including a plurality of controllers each monitoring input variables

    DOEpatents

    Hammerstrom, Donald J.

    2013-10-15

    A method for managing the charging and discharging of batteries wherein at least one battery is connected to a battery charger, and the battery charger is connected to a power supply. A plurality of controllers in communication with one another are provided, each of the controllers monitoring a subset of input variables. A set of charging constraints may then be generated for each controller as a function of its subset of input variables. A set of objectives for each controller may also be generated. A preferred charge rate for each controller is generated as a function of the set of objectives, the charging constraints, or both. An actual charge rate is determined using an algorithm that accounts for each controller's preferred charge rate and/or does not violate any of the charging constraints. A current flow between the battery and the battery charger is then provided at the actual charge rate.
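
    A purely illustrative sketch (not the patented algorithm) of how per-controller preferences and constraints might be arbitrated into a single actual charge rate:

```python
# Each controller reports a preferred rate plus a hard maximum derived
# from its input variables; the applied rate is the lowest preference,
# further capped so no constraint is violated.
from dataclasses import dataclass

@dataclass
class Controller:
    name: str
    preferred_rate_a: float   # amps this controller would like
    max_rate_a: float         # hard constraint from its input variables

def arbitrate(controllers):
    """Return an actual charge rate that violates no constraint."""
    hard_cap = min(c.max_rate_a for c in controllers)
    preferred = min(c.preferred_rate_a for c in controllers)
    return min(preferred, hard_cap)

controllers = [
    Controller("battery_temp", preferred_rate_a=8.0, max_rate_a=10.0),
    Controller("grid_price",   preferred_rate_a=5.0, max_rate_a=15.0),
    Controller("supply_limit", preferred_rate_a=9.0, max_rate_a=6.0),
]
print(arbitrate(controllers))   # -> 5.0 A: lowest preference, under all caps
```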

  8. Intelligent Fault Diagnosis of HVCB with Feature Space Optimization-Based Random Forest

    PubMed Central

    Ma, Suliang; Wu, Jianwen; Wang, Yuhao; Jia, Bowen; Jiang, Yuan

    2018-01-01

    Mechanical faults of high-voltage circuit breakers (HVCBs) inevitably occur over long-term operation, so extracting fault features and identifying the fault type have become a key issue for ensuring the security and reliability of the power supply. Based on wavelet packet decomposition technology and the random forest algorithm, an effective identification system was developed in this paper. First, because Shannon entropy gives an incomplete description, the wavelet packet time-frequency energy rate (WTFER) was adopted as the input vector for the classifier model in the feature selection procedure. Then, a random forest classifier was used to diagnose the HVCB fault, assess the importance of the feature variables and optimize the feature space. Finally, the approach was verified on actual HVCB vibration signals covering six typical fault classes. The comparative experimental results show that the classification accuracy of the proposed method reached 93.33% with the original feature space and rose to 95.56% with the optimized classifier input feature vector. This indicates that the feature optimization procedure is successful and that the proposed diagnosis algorithm has higher efficiency and robustness than traditional methods. PMID:29659548
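
    The feature-space optimization loop can be approximated with scikit-learn's random forest importances; the wavelet-packet energy features are stubbed with synthetic data, and the six classes follow the abstract.

```python
# Train a random forest, rank features by importance, retrain on the
# top-ranked subset, and compare cross-validated accuracies.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=600, n_features=32, n_informative=10,
                           n_classes=6, random_state=0)

rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
order = np.argsort(rf.feature_importances_)[::-1]   # most important first

full = cross_val_score(rf, X, y, cv=5).mean()
top = order[:12]                                    # optimized feature space
reduced = cross_val_score(
    RandomForestClassifier(n_estimators=200, random_state=0),
    X[:, top], y, cv=5).mean()

print(f"full space: {full:.3f}  optimized space: {reduced:.3f}")
```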

  9. Development of economic and environmental metrics for forest-based biomass harvesting

    NASA Astrophysics Data System (ADS)

    Zhang, F. L.; Wang, J. J.; Liu, S. H.; Zhang, S. M.

    2016-08-01

    An assessment of the economic, energy consumption, and greenhouse gas (GHG) emission dimensions of the forest-based biomass harvest stage in the state of Michigan, U.S., was performed by gathering data from the literature, databases, and other relevant sources. The assessment differentiates the harvesting systems (cut-to-length harvesting, whole tree harvesting, and motor-manual harvesting), harvest types (30%, 70%, and 100% cut) and forest types (hardwoods, softwoods, mixed hardwood/softwood, and softwood plantations) that characterize Michigan's logging industry. Machine rate methods were employed to determine unit harvesting costs. A life cycle inventory was applied to calculate the energy demand and GHG emissions of different harvesting scenarios, considering energy and material inputs (diesel, machinery, etc.) and outputs (emissions) for each process (cutting, forwarding/skidding, etc.). A sensitivity analysis was performed on selected input variables for the harvesting operation in order to explore their relative importance. The results indicated that productivity had the largest impact on harvesting cost, followed by machinery purchase price, yearly scheduled hours, and expected utilization. Productivity and fuel use, as well as fuel factors, are the most influential factors for the environmental impacts of harvesting operations.

  10. Sensory noise predicts divisive reshaping of receptive fields

    PubMed Central

    Deneve, Sophie; Gutkin, Boris

    2017-01-01

    In order to respond reliably to specific features of their environment, sensory neurons need to integrate multiple incoming noisy signals. Crucially, they also need to compete for the interpretation of those signals with other neurons representing similar features. The form that this competition should take depends critically on the noise corrupting these signals. In this study we show that for the type of noise commonly observed in sensory systems, whose variance scales with the mean signal, sensory neurons should selectively divide their input signals by their predictions, suppressing ambiguous cues while amplifying others. Any change in the stimulus context alters which inputs are suppressed, leading to a deep dynamic reshaping of neural receptive fields going far beyond simple surround suppression. Paradoxically, these highly variable receptive fields go alongside and are in fact required for an invariant representation of external sensory features. In addition to offering a normative account of context-dependent changes in sensory responses, perceptual inference in the presence of signal-dependent noise accounts for ubiquitous features of sensory neurons such as divisive normalization, gain control and contrast dependent temporal dynamics. PMID:28622330

  11. A comparison of river discharge calculated by using a regional climate model output with different reanalysis datasets in 1980s and 1990s

    NASA Astrophysics Data System (ADS)

    Ma, X.; Yoshikane, T.; Hara, M.; Adachi, S. A.; Wakazuki, Y.; Kawase, H.; Kimura, F.

    2014-12-01

    To check the influence of boundary input data on modeling results, we performed a numerical investigation of river discharge, using runoff data derived from a regional climate model with 4.5-km resolution as input to a hydrological model. A hindcast experiment to reproduce the current climate was carried out for two decades, the 1980s and 1990s. We used the Advanced Research WRF (ARW, ver. 3.2.1) with a two-way nesting technique and the WRF single-moment 6-class microphysics scheme. Noah-LSM was adopted to simulate the land surface process. The NCEP/NCAR and ERA-Interim 6-hourly reanalysis datasets were used as the lateral boundary conditions for the respective runs. The output variables from the WRF model used for the river discharge simulation were underground runoff and surface runoff. Four rivers (Mogami, Agano, Jinzu and Tone) were selected for this study. The results showed that the seasonal variation of river discharge could be represented, although the simulated discharges were overestimated compared with measurements.

  12. Sensory noise predicts divisive reshaping of receptive fields.

    PubMed

    Chalk, Matthew; Masset, Paul; Deneve, Sophie; Gutkin, Boris

    2017-06-01

    In order to respond reliably to specific features of their environment, sensory neurons need to integrate multiple incoming noisy signals. Crucially, they also need to compete for the interpretation of those signals with other neurons representing similar features. The form that this competition should take depends critically on the noise corrupting these signals. In this study we show that for the type of noise commonly observed in sensory systems, whose variance scales with the mean signal, sensory neurons should selectively divide their input signals by their predictions, suppressing ambiguous cues while amplifying others. Any change in the stimulus context alters which inputs are suppressed, leading to a deep dynamic reshaping of neural receptive fields going far beyond simple surround suppression. Paradoxically, these highly variable receptive fields go alongside and are in fact required for an invariant representation of external sensory features. In addition to offering a normative account of context-dependent changes in sensory responses, perceptual inference in the presence of signal-dependent noise accounts for ubiquitous features of sensory neurons such as divisive normalization, gain control and contrast dependent temporal dynamics.

  13. Method and system for detecting a failure or performance degradation in a dynamic system such as a flight vehicle

    NASA Technical Reports Server (NTRS)

    Miller, Robert H. (Inventor); Ribbens, William B. (Inventor)

    2003-01-01

    A method and system for detecting a failure or performance degradation in a dynamic system having sensors for measuring state variables and providing corresponding output signals in response to one or more system input signals are provided. The method includes calculating estimated gains of a filter and selecting an appropriate linear model for processing the output signals based on the input signals. The step of calculating utilizes one or more models of the dynamic system to obtain estimated signals. The method further includes calculating output error residuals based on the output signals and the estimated signals. The method also includes detecting one or more hypothesized failures or performance degradations of a component or subsystem of the dynamic system based on the error residuals. The step of calculating the estimated values is performed optimally with respect to one or more of: noise, uncertainty of the model parameters, and un-modeled dynamics of the dynamic system, which may be a flight vehicle, a financial market, or a modeled financial system.

  14. A stock market forecasting model combining two-directional two-dimensional principal component analysis and radial basis function neural network.

    PubMed

    Guo, Zhiqiang; Wang, Huaiqing; Yang, Jie; Miller, David J

    2015-01-01

    In this paper, we propose and implement a hybrid model combining two-directional two-dimensional principal component analysis ((2D)2PCA) and a Radial Basis Function Neural Network (RBFNN) to forecast stock market behavior. First, 36 stock market technical variables are selected as the input features, and a sliding window is used to obtain the input data of the model. Next, (2D)2PCA is utilized to reduce the dimension of the data and extract its intrinsic features. Finally, an RBFNN accepts the data processed by (2D)2PCA to forecast the next day's stock price or movement. The proposed model is applied to the Shanghai stock market index, and the experiments show that the model achieves a good level of fit. The proposed model is then compared with models that use the traditional dimension reduction methods of principal component analysis (PCA) and independent component analysis (ICA). The empirical results show that the proposed model outperforms the PCA-based model, as well as the alternative models based on ICA and on the multilayer perceptron.
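
    A rough sketch of the pipeline's shape, with plain PCA and RBF kernel ridge regression standing in for (2D)2PCA and the RBFNN (an assumption for illustration, not the paper's exact components), on a synthetic price series:

```python
# Sliding windows of trailing prices -> dimension reduction -> RBF model
# predicting the next day's price.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.kernel_ridge import KernelRidge

rng = np.random.default_rng(2)
prices = np.cumsum(rng.normal(size=400)) + 100.0   # synthetic random walk
window = 20

# Rows are trailing-price windows; targets are the following day's price.
X = np.array([prices[t - window:t] for t in range(window, len(prices) - 1)])
y = prices[window + 1:]

X_red = PCA(n_components=5).fit_transform(X)       # stand-in for (2D)2PCA
model = KernelRidge(kernel="rbf", alpha=1.0, gamma=0.1)
model.fit(X_red[:-50], y[:-50])                    # hold out the last 50 days

pred = model.predict(X_red[-50:])
print("test RMSE:", np.sqrt(np.mean((pred - y[-50:]) ** 2)))
```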

  15. A Stock Market Forecasting Model Combining Two-Directional Two-Dimensional Principal Component Analysis and Radial Basis Function Neural Network

    PubMed Central

    Guo, Zhiqiang; Wang, Huaiqing; Yang, Jie; Miller, David J.

    2015-01-01

    In this paper, we propose and implement a hybrid model combining two-directional two-dimensional principal component analysis ((2D)2PCA) and a Radial Basis Function Neural Network (RBFNN) to forecast stock market behavior. First, 36 stock market technical variables are selected as the input features, and a sliding window is used to obtain the input data of the model. Next, (2D)2PCA is utilized to reduce the dimension of the data and extract its intrinsic features. Finally, an RBFNN accepts the data processed by (2D)2PCA to forecast the next day's stock price or movement. The proposed model is applied to the Shanghai stock market index, and the experiments show that the model achieves a good level of fit. The proposed model is then compared with models that use the traditional dimension reduction methods of principal component analysis (PCA) and independent component analysis (ICA). The empirical results show that the proposed model outperforms the PCA-based model, as well as the alternative models based on ICA and on the multilayer perceptron. PMID:25849483

  16. Is the place cell a "supple" engram?

    PubMed

    Routtenberg, Aryeh

    2015-06-01

    This short note, which honors Nobelists O'Keefe and the Mosers, asks how the patterning of inputs to a single place cell regulates its firing. Because the number of possible input combinations to a single CA1 place cell is very large, the generally accepted view, that inputs to a place cell are relatively restricted and repeat near-identically upon re-presentation of the environment, is rejected. The alternative proposed here is that the firing of any 100 excitatory inputs, a subset combination selected from the 30,000 synapses, leads to CA1 cell firing. Since the number of such combinations is very large, it follows that even though the same cell dutifully fires when the animal is in an identical location, the inputs that fire the place cell are obligatorily non-identical. This CA1 input combinatorial proposal may help us understand the physiological underpinnings of the memory mechanism arising from supple synapses (Routtenberg (2013), Hippocampus 23:202-206). © 2015 Wiley Periodicals, Inc.
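
    The scale of the combinatorial claim is easy to check (Python 3.8+ for `math.comb`):

```python
# Number of ways to choose 100 active inputs out of 30,000 synapses.
import math

n_combinations = math.comb(30_000, 100)
print(f"~10^{math.log10(n_combinations):.0f}")   # on the order of 10^290
```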

  17. Quantity and Quality of Caregivers' Linguistic Input to 18-Month and 3-Year-Old Children Who Are Hard of Hearing.

    PubMed

    Ambrose, Sophie E; Walker, Elizabeth A; Unflat-Berry, Lauren M; Oleson, Jacob J; Moeller, Mary Pat

    2015-01-01

    The primary objective of this study was to examine the quantity and quality of caregiver talk directed to children who are hard of hearing (CHH) compared with children with normal hearing (CNH). For the CHH only, the study explored how caregiver input changed as a function of child age (18 months versus 3 years), which child and family factors contributed to variance in caregiver linguistic input at 18 months and 3 years, and how caregiver talk at 18 months related to child language outcomes at 3 years. Participants were 59 CNH and 156 children with bilateral, mild-to-severe hearing loss. When children were approximately 18 months and/or 3 years of age, caregivers and children participated in a 5-min semistructured, conversational interaction. Interactions were transcribed and coded for two features of caregiver input representing quantity (number of total utterances and number of total words) and four features representing quality (number of different words, mean length of utterance in morphemes, proportion of utterances that were high level, and proportion of utterances that were directing). In addition, at the 18-month visit, parents completed a standardized questionnaire regarding their child's communication development. At the 3-year visit, a clinician administered a standardized language measure. At the 18-month visit, the CHH were exposed to a greater proportion of directing utterances than the CNH. At the 3-year visit, there were significant differences between the CNH and CHH for number of total words and all four of the quality variables, with the CHH being exposed to fewer words and lower quality input. Caregivers generally provided higher quality input to CHH at the 3-year visit compared with the 18-month visit. At the 18-month visit, quantity variables, but not quality variables, were related to several child and family factors. At the 3-year visit, the variable most strongly related to caregiver input was child language. Longitudinal analyses indicated that quality, but not quantity, of caregiver linguistic input at 18 months was related to child language abilities at 3 years, with directing utterances accounting for significant unique variance in child language outcomes. Although caregivers of CHH increased their use of quality features of linguistic input over time, the differences when compared with CNH suggest that some caregivers may need additional support to provide their children with optimal language learning environments. This is particularly important given the relationships that were identified between quality features of caregivers' linguistic input and children's language abilities. Family supports should include a focus on developing a style that is conversational eliciting as opposed to directive.

  18. Deep convolutional neural network based antenna selection in multiple-input multiple-output system

    NASA Astrophysics Data System (ADS)

    Cai, Jiaxin; Li, Yan; Hu, Ying

    2018-03-01

    Antenna selection in wireless communication systems has attracted increasing attention due to the challenge of balancing communication performance against computational complexity in large-scale Multiple-Input Multiple-Output (MIMO) antenna systems. Recently, deep learning based methods have achieved promising performance for large-scale data processing and analysis in many application fields. This paper is the first attempt to introduce deep learning techniques into the field of MIMO antenna selection in wireless communications. First, the label for each attenuation-coefficient channel matrix is generated by minimizing the key performance indicator over the training antenna systems. Then, a deep convolutional neural network that explicitly exploits the massive latent cues in the attenuation coefficients is learned on the training antenna systems. Finally, we use the trained deep convolutional neural network to classify the channel matrix labels of the test antennas and select the optimal antenna subset. Simulation results demonstrate that our method achieves better performance than the state-of-the-art baselines for data-driven wireless antenna selection.
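
    A schematic of the classification step, not the paper's architecture: a small convolutional network maps channel matrices (real and imaginary planes) to labels indexing precomputed antenna subsets. The system sizes below are hypothetical, and the data are random placeholders.

```python
# Classify channel realizations H into one of n_subsets antenna-subset
# labels with a tiny CNN; one illustrative training step on random data.
import torch
import torch.nn as nn

n_rx, n_tx, n_subsets = 8, 8, 10          # hypothetical system sizes

model = nn.Sequential(
    nn.Conv2d(2, 16, kernel_size=3, padding=1),   # 2 planes: Re(H), Im(H)
    nn.ReLU(),
    nn.Conv2d(16, 32, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.Flatten(),
    nn.Linear(32 * n_rx * n_tx, n_subsets),       # one logit per subset label
)

H = torch.randn(64, 2, n_rx, n_tx)        # a batch of channel realizations
labels = torch.randint(0, n_subsets, (64,))
loss = nn.CrossEntropyLoss()(model(H), labels)
loss.backward()                           # one illustrative gradient step
print(float(loss))
```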

  19. Constraint programming based biomarker optimization.

    PubMed

    Zhou, Manli; Luo, Youxi; Sun, Guoquan; Mai, Guoqin; Zhou, Fengfeng

    2015-01-01

    Efficient and intuitive characterization of biological big data is becoming a major challenge for modern bio-OMIC based scientists. Interactive visualization and exploration of big data has proven to be one of the successful solutions. Most of the existing feature selection algorithms do not allow interactive inputs from users in the optimization process of feature selection. This study investigates this question by fixing a few user-input features in the finally selected feature subset and formulating these user-input features as constraints for a programming model. The proposed algorithm, fsCoP (feature selection based on constrained programming), performs similarly to or much better than the existing feature selection algorithms, even with constraints from both the literature and the existing algorithms. An fsCoP biomarker may be intriguing for further wet-lab validation, since it satisfies both the classification optimization function and biomedical knowledge. fsCoP may also be used for the interactive exploration of bio-OMIC big data by interactively adding user-defined constraints for modeling.
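
    The constraint idea can be mimicked with a greedy wrapper that pins user-chosen features into the subset; this is an illustrative stand-in, not the fsCoP constraint-programming formulation itself.

```python
# User-input features are hard constraints; a greedy search fills the
# remaining slots by cross-validated accuracy.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=300, n_features=30, random_state=0)
forced = [3, 17]            # user-input features kept in the final subset
budget = 6                  # total subset size

selected = list(forced)
while len(selected) < budget:
    best_f, best_score = None, -np.inf
    for f in range(X.shape[1]):
        if f in selected:
            continue
        score = cross_val_score(LogisticRegression(max_iter=1000),
                                X[:, selected + [f]], y, cv=3).mean()
        if score > best_score:
            best_f, best_score = f, score
    selected.append(best_f)

print("selected subset:", sorted(selected))
```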

  20. Machinability assessment of commercially pure titanium (CP-Ti) during turning operation: Application potential of GRA method

    NASA Astrophysics Data System (ADS)

    Khan, Akhtar; Maity, Kalipada

    2018-03-01

    This paper explores some of the vital machinability characteristics of commercially pure titanium (CP-Ti) grade 2. Experiments were conducted based on Taguchi's L9 orthogonal array. The selected material was machined on a heavy duty lathe (Model: HMT NH26) using uncoated carbide inserts in a dry cutting environment. The selected inserts were designated by ISO as SNMG 120408 (Model: K313) and manufactured by Kennametal. These inserts were rigidly mounted on a right-handed tool holder PSBNR 2020K12. Cutting speed, feed rate and depth of cut were selected as the three input variables, whereas tool wear (VBc) and surface roughness (Ra) were the main responses of interest. In order to confirm appreciable machinability of the workpiece, an optimal parametric combination was obtained with the help of the grey relational analysis (GRA) approach. Finally, a mathematical model was developed using multiple regression equations to demonstrate the accuracy and acceptability of the proposed methodology. The results indicated that the suggested model is capable of predicting the overall grey relational grade within an acceptable range.
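
    The GRA computation itself is compact. The sketch below applies the standard smaller-the-better normalization and grey relational coefficient (distinguishing coefficient 0.5) to made-up VBc and Ra values over nine runs; the numbers are illustrative, not the paper's measurements.

```python
# Grey relational analysis for two smaller-the-better responses.
import numpy as np

responses = np.array([   # columns: tool wear VBc (mm), roughness Ra (um)
    [0.21, 1.9], [0.18, 1.6], [0.25, 2.2],
    [0.16, 1.4], [0.23, 2.0], [0.19, 1.7],
    [0.27, 2.4], [0.17, 1.5], [0.22, 1.8],
])

# 1) Normalize, smaller-the-better: best value -> 1, worst -> 0.
norm = (responses.max(axis=0) - responses) / (
        responses.max(axis=0) - responses.min(axis=0))

# 2) Grey relational coefficients, distinguishing coefficient zeta = 0.5.
delta = np.abs(1.0 - norm)
zeta = 0.5
grc = (delta.min() + zeta * delta.max()) / (delta + zeta * delta.max())

# 3) Grey relational grade: equal-weight mean across the responses.
grade = grc.mean(axis=1)
print("best L9 run:", int(np.argmax(grade)) + 1, "grades:", grade.round(3))
```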

  1. A Framework to Guide the Assessment of Human-Machine Systems.

    PubMed

    Stowers, Kimberly; Oglesby, James; Sonesh, Shirley; Leyva, Kevin; Iwig, Chelsea; Salas, Eduardo

    2017-03-01

    We have developed a framework for guiding measurement in human-machine systems. The assessment of safety and performance in human-machine systems often relies on direct measurement, such as tracking reaction time and accidents. However, safety and performance emerge from the combination of several variables. The assessment of precursors to safety and performance is thus an important part of predicting and improving outcomes in human-machine systems. As part of an in-depth literature analysis involving peer-reviewed, empirical articles, we located and classified variables important to human-machine systems, giving a snapshot of the state of the science on human-machine system safety and performance. Using this information, we created a framework of safety and performance in human-machine systems. This framework details several inputs and processes that collectively influence safety and performance. Inputs are divided into human, machine, and environmental inputs. Processes are divided into attitudes, behaviors, and cognitive variables. Each class of inputs influences the processes and, subsequently, the outcomes that emerge in human-machine systems. This framework offers a useful starting point for understanding the current state of the science and for measuring many of the complex variables relating to safety and performance in human-machine systems. It can be applied to the design, development, and implementation of automated machines in spaceflight, military, and health care settings. We present a hypothetical example of how the framework can be used to aid project success.

  2. Regulation of spatial selectivity by crossover inhibition.

    PubMed

    Cafaro, Jon; Rieke, Fred

    2013-04-10

    Signals throughout the nervous system diverge into parallel excitatory and inhibitory pathways that later converge on downstream neurons to control their spike output. Converging excitatory and inhibitory synaptic inputs can exhibit a variety of temporal relationships. A common motif is feedforward inhibition, in which an increase (decrease) in excitatory input precedes a corresponding increase (decrease) in inhibitory input. The delay of inhibitory input relative to excitatory input originates from an extra synapse in the circuit shaping inhibitory input. Another common motif is push-pull or "crossover" inhibition, in which increases (decreases) in excitatory input occur together with decreases (increases) in inhibitory input. Primate On midget ganglion cells receive primarily feedforward inhibition and On parasol cells receive primarily crossover inhibition; this difference provides an opportunity to study how each motif shapes the light responses of cell types that play a key role in visual perception. For full-field stimuli, feedforward inhibition abbreviated and attenuated responses of On midget cells, while crossover inhibition, though plentiful, had surprisingly little impact on the responses of On parasol cells. Spatially structured stimuli, however, could cause excitatory and inhibitory inputs to On parasol cells to increase together, adopting a temporal relation very much like that for feedforward inhibition. In this case, inhibitory inputs substantially abbreviated a cell's spike output. Thus inhibitory input shapes the temporal stimulus selectivity of both midget and parasol ganglion cells, but its impact on responses of parasol cells depends strongly on the spatial structure of the light inputs.

  3. Method of Controlling Steering of a Ground Vehicle

    NASA Technical Reports Server (NTRS)

    Guo, Raymond (Inventor); Atluri, Venkata Prasad (Inventor); Bluethmann, William J. (Inventor); Lee, Chunhao J. (Inventor); Vitale, Robert L. (Inventor); Dawson, Andrew D. (Inventor)

    2016-01-01

    A method of controlling steering of a vehicle by setting wheel angles of a plurality of modular electronic corner assemblies (eModules) is provided. The method includes receiving a driving mode selected from a mode selection menu. A position of a steering input device is determined in a master controller. A velocity of the vehicle is determined, in the master controller, when the determined position of the steering input device is near center. A drive mode request corresponding to the selected driving mode is transmitted from the master controller to the plurality of steering controllers. A required steering angle of each of the plurality of eModules is determined, in the master controller, as a function of the determined position of the steering input device, the determined velocity of the vehicle, and the selected driving mode. The eModules are then set to the respective determined steering angles.

  4. Sobol' sensitivity analysis for stressor impacts on honeybee ...

    EPA Pesticide Factsheets

    We employ Monte Carlo simulation and nonlinear sensitivity analysis techniques to describe the dynamics of a bee exposure model, VarroaPop. Daily simulations are performed of hive population trajectories, taking into account queen strength, foraging success, mite impacts, weather, colony resources, population structure, and other important variables. This allows us to test the effects of defined pesticide exposure scenarios versus controlled simulations that lack pesticide exposure. The daily resolution of the model also allows us to conditionally identify sensitivity metrics. We use the variance-based global decomposition sensitivity analysis method, Sobol', to assess first- and second-order parameter sensitivities within VarroaPop, allowing us to determine how variance in the output is attributed to each of the input variables across different exposure scenarios. Simulations with VarroaPop indicate queen strength, forager life span and pesticide toxicity parameters are consistent, critical inputs for colony dynamics. Further analysis also reveals that the relative importance of these parameters fluctuates throughout the simulation period according to the status of other inputs. Our preliminary results show that model variability is conditional and can be attributed to different parameters depending on different timescales. By using sensitivity analysis to assess model output and variability, calibrations of simulation models can be better informed to yield more
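
    A first/second-order Sobol' analysis of this kind can be set up with the SALib package; the parameter names, bounds, and toy response below are stand-ins for VarroaPop, not its actual inputs or dynamics.

```python
# Saltelli sampling followed by Sobol' decomposition of a toy response.
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

problem = {
    "num_vars": 3,
    "names": ["queen_strength", "forager_lifespan", "pesticide_toxicity"],
    "bounds": [[1, 5], [4, 16], [0, 1]],   # hypothetical ranges
}

X = saltelli.sample(problem, 1024, calc_second_order=True)

def hive_model(x):                # toy stand-in for the hive simulation
    q, f, t = x
    return q * f - 10.0 * t * q

Y = np.apply_along_axis(hive_model, 1, X)
Si = sobol.analyze(problem, Y, calc_second_order=True)
print(Si["S1"])                   # first-order indices per parameter
print(Si["S2"])                   # second-order interaction indices
```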

  5. How Methodologic Differences Affect Results of Economic Analyses: A Systematic Review of Interferon Gamma Release Assays for the Diagnosis of LTBI

    PubMed Central

    Oxlade, Olivia; Pinto, Marcia; Trajman, Anete; Menzies, Dick

    2013-01-01

    Introduction: Cost-effectiveness analyses (CEA) can provide useful information on how to invest limited funds; however, they are less useful if different analyses of the same intervention provide unclear or contradictory results. The objective of our study was to conduct a systematic review of methodologic aspects of CEA that evaluate Interferon Gamma Release Assays (IGRA) for the detection of Latent Tuberculosis Infection (LTBI), in order to understand how differences affect study results. Methods: A systematic review of studies was conducted with particular focus on study quality and the variability of the inputs used in the cost-effectiveness models. A common decision analysis model of the IGRA versus Tuberculin Skin Test (TST) screening strategies was developed and used to quantify the impact on predicted results of the observed differences in model inputs taken from the studies identified. Results: Thirteen studies were ultimately included in the review. Several methodologic issues were identified across studies, including how study inputs were selected, inconsistencies in the costing approach, the utility of the QALY (Quality Adjusted Life Year) as the effectiveness outcome, and how authors choose to present and interpret study results. When the IGRA and TST strategies were compared using our common decision analysis model, predicted effectiveness largely overlapped. Implications: Many methodologic issues that contribute to inconsistent results and reduced study quality were identified in studies assessing the cost-effectiveness of the IGRA test. More specific and relevant guidelines are needed to help authors standardize modelling approaches, inputs, assumptions, and how results are presented and interpreted. PMID:23505412

  6. Genomic regions associated with the nitrogen limitation response revealed in a global wheat core collection.

    PubMed

    Bordes, Jacques; Ravel, C; Jaubertie, J P; Duperrier, B; Gardet, O; Heumez, E; Pissavy, A L; Charmet, G; Le Gouis, J; Balfourier, F

    2013-03-01

    Modern wheat (Triticum aestivum L.) varieties in Western Europe have mainly been bred and selected under conditions where high levels of nitrogen-rich fertilizer are applied. However, high-input crop management has greatly increased the risk of nitrates leaching into groundwater, with negative impacts on the environment. To investigate wheat nitrogen tolerance characteristics that could be adapted to low-input crop management, we supplied 196 accessions of a wheat core collection of old and modern cultivars with high or moderate amounts of nitrogen fertilizer in an experimental network consisting of three sites and 2 years. The main breeding traits were assessed, including grain yield and grain protein content. The response to nitrogen level was estimated for grain yield and grain number per m² using both the difference and the ratio between performance at the two input levels and the slope of a joint regression. Large variability was observed for all the traits studied and for the response to nitrogen level. Whole-genome association mapping was carried out using 899 molecular markers, taking into account the five-ancestral-group structure of the collection. We identified 54 main regions, involving almost all chromosomes, that influence yield and its components, plant height, heading date and grain protein concentration. Twenty-three regions spread over 16 chromosomes, including several genes, were involved in the response to nitrogen level. These chromosomal regions may be good candidates for use in breeding programs to improve the performance of wheat varieties at moderate nitrogen input levels.

  7. Design and Testing of a Variable Pressure Regulator for the Constellation Space Suit

    NASA Technical Reports Server (NTRS)

    Gill, Larry; Campbell, Colin

    2008-01-01

    The next-generation space suit requires additional capabilities for controlling and adjusting internal pressure compared with previous suit designs. Next-generation suit pressures will range from slight pressure, for astronaut prebreathe comfort, to hyperbaric pressure levels for emergency medical treatment. Carleton was awarded a contract in 2008 to design and build a proof-of-concept benchtop demonstrator regulator having five setpoints which are selectable using electronic input signaling. Although the basic regulator architecture is very similar to the existing SOP regulator used in the current EMU, the major difference is the electrical selectability of multiple setpoints rather than the mechanical On/Off feature found on the SOP regulator. The concept regulator employs a linear actuator and stepper motor combination to provide variable compression of a custom-designed main regulator spring. This concept allows for a continuously adjustable outlet pressure from 8.2 psid (maximum) down to a "firm" zero, effectively allowing the regulator to serve as a shutoff valve. This paper details the regulator design and presents test results on regulation bandwidth, command set point accuracy, slew rate and regulation stability, particularly when the set point is being slewed. Projections for a flight-configuration version are also offered for performance, architectural layout and weight.

  8. Response-based selection of barley cultivars and legume species for complementarity: Root morphology and exudation in relation to nutrient source.

    PubMed

    Giles, Courtney D; Brown, Lawrie K; Adu, Michael O; Mezeli, Malika M; Sandral, Graeme A; Simpson, Richard J; Wendler, Renate; Shand, Charles A; Menezes-Blackburn, Daniel; Darch, Tegan; Stutter, Marc I; Lumsdon, David G; Zhang, Hao; Blackwell, Martin S A; Wearing, Catherine; Cooper, Patricia; Haygarth, Philip M; George, Timothy S

    2017-02-01

    Phosphorus (P) and nitrogen (N) use efficiency may be improved through increased biodiversity in agroecosystems. Phenotypic variation in plants' response to nutrient deficiency may influence positive complementarity in intercropping systems. A multicomponent screening approach was used to assess the influence of P supply and N source on the phenotypic plasticity of nutrient foraging traits in barley (H. vulgare L.) and legume species. Root morphology and exudation were determined in six plant nutrient treatments. A clear divergence in the response of barley and legumes to the nutrient treatments was observed. Root morphology varied most among legumes, whereas exudate citrate and phytase activity were most variable in barley. Changes in root morphology were minimized in plants provided with ammonium in comparison to nitrate but increased under P deficiency. Exudate phytase activity and pH varied with legume species, whereas citrate efflux, specific root length, and root diameter lengths were more variable among barley cultivars. Three legume species and four barley cultivars were identified as the most responsive to P deficiency and the most contrasting of the cultivars and species tested. Phenotypic response to nutrient availability may be a promising approach for the selection of plant combinations for minimal input cropping systems. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  9. Variable is better than invariable: sparse VSS-NLMS algorithms with application to adaptive MIMO channel estimation.

    PubMed

    Gui, Guan; Chen, Zhang-xin; Xu, Li; Wan, Qun; Huang, Jiyan; Adachi, Fumiyuki

    2014-01-01

    The channel estimation problem is one of the key technical issues in sparse frequency-selective fading multiple-input multiple-output (MIMO) communication systems using the orthogonal frequency division multiplexing (OFDM) scheme. To estimate sparse MIMO channels, sparse invariable step-size normalized least mean square (ISS-NLMS) algorithms have been applied to adaptive sparse channel estimation (ASCE). It is well known that the step size is a critical parameter which controls three aspects: algorithm stability, estimation performance, and computational cost. However, traditional methods are prone to estimation performance loss because an ISS cannot balance the three aspects simultaneously. In this paper, we propose two stable sparse variable step-size NLMS (VSS-NLMS) algorithms to improve the accuracy of MIMO channel estimators. First, ASCE is formulated for MIMO-OFDM systems. Second, different sparse penalties are introduced into the VSS-NLMS algorithm for ASCE. In addition, the difference between sparse ISS-NLMS algorithms and sparse VSS-NLMS ones is explained and their lower bounds are derived. Finally, to verify the effectiveness of the proposed algorithms for ASCE, selected simulation results are shown to demonstrate that the proposed sparse VSS-NLMS algorithms can achieve better estimation performance than the conventional methods in terms of mean square error (MSE) and bit error rate (BER) metrics.
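
    As a concrete reference point, here is a toy NLMS identification of a sparse channel with a simple variable step-size rule; the authors' sparse VSS variants add sparsity penalties and specific update rules not reproduced here.

```python
# NLMS identification of a sparse FIR channel; the step size shrinks as
# the error power decays (an illustrative VSS rule, not the paper's).
import numpy as np

rng = np.random.default_rng(3)
N, taps = 4000, 16
h_true = np.zeros(taps)
h_true[[2, 9]] = [1.0, -0.5]                 # a sparse channel

x = rng.normal(size=N)
d = np.convolve(x, h_true)[:N] + 0.01 * rng.normal(size=N)

w = np.zeros(taps)
mu, eps = 1.0, 1e-6
for n in range(taps - 1, N):
    u = x[n - taps + 1:n + 1][::-1]          # regressor: x[n], ..., x[n-taps+1]
    e = d[n] - w @ u                         # a priori error
    mu = 0.97 * mu + 0.03 * min(1.0, e * e)  # variable step-size update
    w += (mu / (eps + u @ u)) * e * u        # normalized LMS step

print("channel estimate MSE:", np.mean((w - h_true) ** 2))
```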

  10. Variable Is Better Than Invariable: Sparse VSS-NLMS Algorithms with Application to Adaptive MIMO Channel Estimation

    PubMed Central

    Gui, Guan; Chen, Zhang-xin; Xu, Li; Wan, Qun; Huang, Jiyan; Adachi, Fumiyuki

    2014-01-01

    The channel estimation problem is one of the key technical issues in sparse frequency-selective fading multiple-input multiple-output (MIMO) communication systems using the orthogonal frequency division multiplexing (OFDM) scheme. To estimate sparse MIMO channels, sparse invariable step-size normalized least mean square (ISS-NLMS) algorithms have been applied to adaptive sparse channel estimation (ASCE). It is well known that the step size is a critical parameter which controls three aspects: algorithm stability, estimation performance, and computational cost. However, traditional methods are prone to estimation performance loss because an ISS cannot balance the three aspects simultaneously. In this paper, we propose two stable sparse variable step-size NLMS (VSS-NLMS) algorithms to improve the accuracy of MIMO channel estimators. First, ASCE is formulated for MIMO-OFDM systems. Second, different sparse penalties are introduced into the VSS-NLMS algorithm for ASCE. In addition, the difference between sparse ISS-NLMS algorithms and sparse VSS-NLMS ones is explained and their lower bounds are derived. Finally, to verify the effectiveness of the proposed algorithms for ASCE, selected simulation results are shown to demonstrate that the proposed sparse VSS-NLMS algorithms can achieve better estimation performance than the conventional methods in terms of mean square error (MSE) and bit error rate (BER) metrics. PMID:25089286

  11. The Performance of Short-Term Heart Rate Variability in the Detection of Congestive Heart Failure

    PubMed Central

    Barros, Allan Kardec; Ohnishi, Noboru

    2016-01-01

    Congestive heart failure (CHF) is a cardiac disease associated with a decreasing capacity of the cardiac output. It has been shown that CHF is a main cause of cardiac death around the world. Some works have proposed to discriminate CHF subjects from healthy subjects using either the electrocardiogram (ECG) or heart rate variability (HRV) from long-term recordings. In this work, we propose an alternative framework to discriminate CHF subjects from healthy subjects by using short-term HRV intervals based on 256 consecutive RR samples. Our framework uses a matching pursuit algorithm based on Gabor functions. From the selected Gabor functions, we derived a set of features that are input into a hybrid framework which uses a genetic algorithm and a k-nearest-neighbour classifier to select the subset of features with the best classification performance. The performance of the framework is analyzed using both the Fantasia and the CHF databases from the Physionet archives, which are composed of 40 healthy volunteers and 29 CHF subjects, respectively. From a set of 16 nonstandard features, the proposed framework reaches an overall accuracy of 100% with five features. Our results suggest that hybrid frameworks whose classifier algorithms are based on genetic algorithms outperform well-known classifier methods. PMID:27891509

  12. Quantifying Uncertainties from Presence Data Sampling Methods for Species Distribution Modeling: Focused on Vegetation.

    NASA Astrophysics Data System (ADS)

    Sung, S.; Kim, H. G.; Lee, D. K.; Park, J. H.; Mo, Y.; Kil, S.; Park, C.

    2016-12-01

    The impact of climate change has been observed throughout the globe. Ecosystems experience rapid changes such as vegetation shifts and species extinction. In this context, species distribution modeling (SDM) is one of the popular methods to project the impact of climate change on ecosystems. SDM is fundamentally based on the niche of a species, which means that presence point data are essential for finding the biological niche of the species. To run an SDM for plants, certain characteristics of vegetation must be considered. Normally, remote sensing techniques are used to map vegetation over large areas. As a consequence, the exact locations in presence data carry high uncertainties, since presence data sets are selected from polygon and raster datasets. Thus, sampling methods for generating vegetation presence data should be carefully selected. In this study, we used three different sampling methods for the selection of vegetation presence data: random sampling, stratified sampling and site-index-based sampling. We used the R package BIOMOD2 to assess uncertainty from the modeling. At the same time, we included BioCLIM variables and other environmental variables as input data. As a result, despite differences among the 10 SDMs, the sampling methods showed differences in ROC values: random sampling showed the lowest ROC value, while site-index-based sampling showed the highest. In this way, the uncertainties arising from presence data sampling methods and SDM can be quantified.

  13. How model and input uncertainty impact maize yield simulations in West Africa

    NASA Astrophysics Data System (ADS)

    Waha, Katharina; Huth, Neil; Carberry, Peter; Wang, Enli

    2015-02-01

    Crop models are common tools for simulating crop yields and crop production in studies on food security and global change. Various uncertainties exist, however, not only in the model design and model parameters, but also, and perhaps even more importantly, in the soil, climate and management input data. We analyze the performance of the point-scale crop model APSIM and the global-scale crop model LPJmL with different climate and soil conditions under different agricultural management in the low-input maize-growing areas of Burkina Faso, West Africa. We test the models' response to different levels of input information, from little to detailed information on soil, climate (1961-2000) and agricultural management, and compare the models' ability to represent the observed spatial variability (between locations) and temporal variability (between years) in crop yields. We found that the resolution of the soil, climate and management information influences the simulated crop yields in both models. However, the difference between the models is larger than that between input datasets, and the difference between simulations with different climate and management information is larger than that between simulations with different soil information. The observed spatial variability can be represented well by both models, even with little information on soils and management, but APSIM simulates a higher variation between single locations than LPJmL. The agreement of simulated and observed temporal variability is lower, due to non-climatic factors, e.g. investment in agricultural research and development between 1987 and 1991 in Burkina Faso, which resulted in a doubling of maize yields. The findings of our study highlight the importance of scale and model choice and show that the most detailed input data do not necessarily improve model performance.

  14. Simplex-based optimization of numerical and categorical inputs in early bioprocess development: Case studies in HT chromatography.

    PubMed

    Konstantinidis, Spyridon; Titchener-Hooker, Nigel; Velayudhan, Ajoy

    2017-08-01

    Bioprocess development studies often involve the investigation of numerical and categorical inputs via the adoption of Design of Experiments (DoE) techniques. An attractive alternative is the deployment of a grid-compatible Simplex variant, which has been shown to yield optima rapidly and consistently. In this work, the method is combined with dummy variables and deployed in three case studies whose search spaces comprise both categorical and numerical inputs, a situation intractable for traditional Simplex methods. The first study employs in silico data and lays out the dummy variable methodology. The latter two employ experimental data from chromatography-based studies performed with the filter-plate and miniature-column High Throughput (HT) techniques. The solute of interest in the former case study was a monoclonal antibody, whereas the latter dealt with the separation of a binary system of model proteins. The implemented approach prevented the Simplex method from becoming stranded at local optima due to arbitrary handling of the categorical inputs, and allowed the concurrent optimization of numerical and categorical, multilevel and/or dichotomous, inputs. The deployment of the Simplex method, combined with dummy variables, was therefore entirely successful in identifying and characterizing global optima in all three case studies. The Simplex-based method was further shown to be of equivalent efficiency to a DoE-based approach, represented here by D-Optimal designs. Such an approach failed, however, both to capture trends and to identify optima, and led to poor operating conditions. It is suggested that the Simplex variant is suited to development activities involving numerical and categorical inputs in early bioprocess development. © 2017 The Authors. Biotechnology Journal published by WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  15. Building accurate historic and future climate MEPDG input files for Louisiana DOTD : tech summary.

    DOT National Transportation Integrated Search

    2017-02-01

    The new pavement design process (originally MEPDG, then DARWin-ME, and now Pavement ME Design) requires two types of inputs to influence the prediction of pavement distress for a selected set of pavement materials and structure. One input is: tra...

  16. Bacterial Microcolonies in Gel Beads for High-Throughput Screening of Libraries in Synthetic Biology.

    PubMed

    Duarte, José M; Barbier, Içvara; Schaerli, Yolanda

    2017-11-17

    Synthetic biologists increasingly rely on directed evolution to optimize engineered biological systems. Applying an appropriate screening or selection method to identify the potentially rare library members with the desired properties is a crucial step for success in these experiments. Special challenges include substantial cell-to-cell variability and the requirement to check multiple states (e.g., being ON or OFF depending on the input). Here, we present a high-throughput screening method that addresses these challenges. First, we encapsulate single bacteria into microfluidic agarose gel beads. After incubation, the beads harbor monoclonal bacterial microcolonies (e.g., expressing a synthetic construct) and can be sorted according to their fluorescence by fluorescence-activated cell sorting (FACS). We determine enrichment rates and demonstrate that we can measure the average fluorescence signals of microcolonies containing phenotypically heterogeneous cells, obviating the problem of cell-to-cell variability. Finally, we apply this method to sort a pBAD promoter library at ON and OFF states.

  17. Kubios HRV--heart rate variability analysis software.

    PubMed

    Tarvainen, Mika P; Niskanen, Juha-Pekka; Lipponen, Jukka A; Ranta-Aho, Perttu O; Karjalainen, Pasi A

    2014-01-01

    Kubios HRV is an advanced and easy-to-use software package for heart rate variability (HRV) analysis. The software supports several input data formats for electrocardiogram (ECG) data and beat-to-beat RR interval data. It includes an adaptive QRS detection algorithm and tools for artifact correction, trend removal and analysis sample selection. The software computes all the commonly used time-domain and frequency-domain HRV parameters and several nonlinear parameters. There are several adjustable analysis settings through which the analysis methods can be optimized for different data. The ECG-derived respiratory frequency is also computed, which is important for reliable interpretation of the analysis results. The analysis results can be saved as an ASCII text file (easy to import into MS Excel or SPSS), a Matlab MAT-file, or a PDF report. The software is easy to use through its compact graphical user interface. The software is available free of charge for Windows and Linux operating systems at http://kubios.uef.fi. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  18. State funding for local public health: observations from six case studies.

    PubMed

    Potter, Margaret A; Fitzpatrick, Tiffany

    2007-01-01

    The purpose of this study is to describe state funding of local public health within the context of state public health system types. These types are based on administrative relationships, legal structures, and relative proportion of state funding in local public health budgets. We selected six states representing various types and geographic regions. A case study for each state summarized available information and was validated by state public health officials. An analysis of the case studies reveals that the variability of state public health systems--even within a given type--is matched by variability in approaches to funding local public health. Nevertheless, some meaningful associations appear. For example, higher proportions of state funding occur along with higher levels of state oversight and the existence of local service mandates in state law. These associations suggest topics for future research on public health financing in relation to local accountability, local input to state priority-setting, mandated local services, and the absence of state funds for public health services in some local jurisdictions.

  19. Finding stability regions for preserving efficiency classification of variable returns to scale technology in data envelopment analysis

    NASA Astrophysics Data System (ADS)

    Zamani, P.; Borzouei, M.

    2016-12-01

    This paper addresses the issue of the sensitivity of the efficiency classification under variable returns to scale (VRS) technology, with the aim of enhancing the credibility of data envelopment analysis (DEA) results in practical applications when an additional decision making unit (DMU) needs to be added to the set being considered. It also develops a structured approach to assist practitioners in selecting an appropriate variation range for the inputs and outputs of the additional DMU so that this DMU is efficient and the efficiency classification of the VRS technology remains unchanged. This stability region is specified simply by means of the defining hyperplanes of the production possibility set of the VRS technology and the corresponding half-spaces. Furthermore, this study determines a stability region for the additional DMU within which, in addition to the efficiency classification, the efficiency score of a specific inefficient DMU is preserved; using a simulation method, a region in which some specific efficient DMUs become inefficient is also provided.
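
    For context, VRS (BCC) efficiency scores of the kind whose classification is being preserved can be computed with a small linear program; this is a standard textbook formulation, not this paper's contribution.

```python
# Input-oriented BCC (VRS) efficiency for each DMU via scipy's linprog.
import numpy as np
from scipy.optimize import linprog

X = np.array([[2.0, 4.0], [4.0, 2.0], [5.0, 5.0], [3.0, 3.0]])  # inputs
Y = np.array([[1.0], [1.0], [1.0], [1.0]])                      # outputs
n, m, s = X.shape[0], X.shape[1], Y.shape[1]

def bcc_efficiency(o):
    """min theta s.t. X'lam <= theta*x_o, Y'lam >= y_o, sum(lam) = 1."""
    c = np.r_[1.0, np.zeros(n)]                 # variables: theta, lam_1..lam_n
    A_ub = np.vstack([
        np.c_[-X[o], X.T],                      # X lam - theta * x_o <= 0
        np.c_[np.zeros(s), -Y.T],               # -Y lam <= -y_o
    ])
    b_ub = np.r_[np.zeros(m), -Y[o]]
    A_eq = np.r_[0.0, np.ones(n)][None, :]      # convexity: sum(lam) = 1
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(None, None)] + [(0, None)] * n, method="highs")
    return res.fun

for o in range(n):
    print(f"DMU {o}: theta = {bcc_efficiency(o):.3f}")
```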

  20. Optimizing landslide susceptibility zonation: Effects of DEM spatial resolution and slope unit delineation on logistic regression models

    NASA Astrophysics Data System (ADS)

    Schlögel, R.; Marchesini, I.; Alvioli, M.; Reichenbach, P.; Rossi, M.; Malet, J.-P.

    2018-01-01

    We perform landslide susceptibility zonation with slope units using three digital elevation models (DEMs) of varying spatial resolution of the Ubaye Valley (South French Alps). In so doing, we applied a recently developed algorithm automating slope unit delineation, given a number of parameters, in order to optimize simultaneously the partitioning of the terrain and the performance of a logistic regression susceptibility model. The method allowed us to obtain optimal slope units for each available DEM spatial resolution. For each resolution, we studied the susceptibility model performance by analyzing in detail the relevance of the conditioning variables. The analysis is based on landslide morphology data, considering either the whole landslide or only the source area outline as inputs. The procedure allowed us to select the most useful information, in terms of DEM spatial resolution, thematic variables and landslide inventory, in order to obtain the most reliable slope unit-based landslide susceptibility assessment.

  1. Biome-specific scaling of ocean productivity, temperature, and carbon export efficiency

    NASA Astrophysics Data System (ADS)

    Britten, Gregory L.; Primeau, François W.

    2016-05-01

    Mass conservation and metabolic theory place constraints on how marine export production (EP) scales with net primary productivity (NPP) and sea surface temperature (SST); however, little is empirically known about how these relationships vary across ecologically distinct ocean biomes. Here we compiled in situ observations of EP, NPP, and SST and used statistical model selection theory to demonstrate significant biome-specific scaling relationships among these variables. Multiple statistically similar models yield a threefold variation in the globally integrated carbon flux (~4-12 Pg C yr⁻¹) when applied to climatological satellite-derived NPP and SST. Simulated NPP and SST input variables from a 4×CO2 climate model experiment further show that biome-specific scaling alters the predicted response of EP to simulated increases of atmospheric CO2. These results highlight the need to better understand distinct pathways of carbon export across unique ecological biomes and may help guide proposed efforts for in situ observations of the ocean carbon cycle.
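
    One commonly used scaling form that is linear in log space, EP = a·NPP^b·exp(c·SST), can be fit per biome by ordinary least squares; the functional form and the data below are assumptions for illustration, not the authors' exact model.

```python
# Fit log(EP) = log(a) + b*log(NPP) + c*SST for one biome's observations.
import numpy as np

rng = np.random.default_rng(4)

def fit_biome(npp, sst, ep):
    """Least-squares fit of the log-linear scaling model; returns [log a, b, c]."""
    A = np.column_stack([np.ones_like(npp), np.log(npp), sst])
    coef, *_ = np.linalg.lstsq(A, np.log(ep), rcond=None)
    return coef

# Synthetic stand-in observations for a single biome:
npp = rng.uniform(100, 1000, 80)          # e.g. mg C m^-2 d^-1
sst = rng.uniform(5, 28, 80)              # deg C
ep = 0.5 * npp ** 0.8 * np.exp(-0.05 * sst) * np.exp(rng.normal(0, 0.2, 80))

log_a, b, c = fit_biome(npp, sst, ep)
print(f"a = {np.exp(log_a):.2f}, b = {b:.2f}, c = {c:.3f}")
```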

  2. Design of state-feedback controllers including sensitivity reduction, with applications to precision pointing

    NASA Technical Reports Server (NTRS)

    Hadass, Z.

    1974-01-01

    The design procedure for state-feedback controllers is described and the considerations for the selection of the design parameters are given. The frequency-domain properties of single-input single-output systems using state-feedback controllers are analyzed, and desirable phase and gain margin properties are demonstrated. Special consideration is given to the design of controllers for tracking systems, especially those designed to track polynomial commands. As an example, a controller was designed for a tracking telescope with a polynomial tracking requirement and some special features such as actuator saturation and multiple measurements, one of which is sampled. The resulting system has a tracking performance that compares favorably with that of a much more complicated digitally aided tracker. Parameter sensitivity reduction was treated by considering the variable parameters as random variables. A performance index is defined as a weighted sum of the state and control covariances that result from both the random system disturbances and the parameter uncertainties, and is minimized numerically by adjusting a set of free parameters.

  3. A Comparative Analysis of Machine Learning with WorldView-2 Pan-Sharpened Imagery for Tea Crop Mapping

    PubMed Central

    Chuang, Yung-Chung Matt; Shiu, Yi-Shiang

    2016-01-01

    Tea is an important but vulnerable economic crop in East Asia, highly impacted by climate change. This study attempts to interpret tea land use/land cover (LULC) using very high resolution WorldView-2 imagery of central Taiwan with both pixel and object-based approaches. A total of 80 variables derived from each WorldView-2 band with pan-sharpening, standardization, principal components and gray level co-occurrence matrix (GLCM) texture indices transformation, were set as the input variables. For pixel-based image analysis (PBIA), 34 variables were selected, including seven principal components, 21 GLCM texture indices and six original WorldView-2 bands. Results showed that support vector machine (SVM) had the highest tea crop classification accuracy (OA = 84.70% and KIA = 0.690), followed by random forest (RF), maximum likelihood algorithm (ML), and logistic regression analysis (LR). However, the ML classifier achieved the highest classification accuracy (OA = 96.04% and KIA = 0.887) in object-based image analysis (OBIA) using only six variables. The contribution of this study is to create a new framework for accurately identifying tea crops in a subtropical region with real-time high-resolution WorldView-2 imagery without field survey, which could further aid agriculture land management and a sustainable agricultural product supply. PMID:27128915

  4. A Comparative Analysis of Machine Learning with WorldView-2 Pan-Sharpened Imagery for Tea Crop Mapping.

    PubMed

    Chuang, Yung-Chung Matt; Shiu, Yi-Shiang

    2016-04-26

    Tea is an important but vulnerable economic crop in East Asia, highly impacted by climate change. This study attempts to interpret tea land use/land cover (LULC) using very high resolution WorldView-2 imagery of central Taiwan with both pixel and object-based approaches. A total of 80 variables derived from each WorldView-2 band with pan-sharpening, standardization, principal components and gray level co-occurrence matrix (GLCM) texture indices transformation, were set as the input variables. For pixel-based image analysis (PBIA), 34 variables were selected, including seven principal components, 21 GLCM texture indices and six original WorldView-2 bands. Results showed that support vector machine (SVM) had the highest tea crop classification accuracy (OA = 84.70% and KIA = 0.690), followed by random forest (RF), maximum likelihood algorithm (ML), and logistic regression analysis (LR). However, the ML classifier achieved the highest classification accuracy (OA = 96.04% and KIA = 0.887) in object-based image analysis (OBIA) using only six variables. The contribution of this study is to create a new framework for accurately identifying tea crops in a subtropical region with real-time high-resolution WorldView-2 imagery without field survey, which could further aid agriculture land management and a sustainable agricultural product supply.

  5. Statistical Study to Evaluate the Effect of Processing Variables on Shrinkage Incidence During Solidification of Nodular Cast Irons

    NASA Astrophysics Data System (ADS)

    Gutiérrez, J. M.; Natxiondo, A.; Nieves, J.; Zabala, A.; Sertucha, J.

    2017-04-01

    The study of shrinkage incidence variations in nodular cast irons is an important aspect of manufacturing processes. These variations change the feeding requirements on castings, and the optimization of risers' size is consequently affected when avoiding the formation of shrinkage defects. The effect of a number of processing variables on the shrinkage size has been studied using a layout specifically designed for this purpose. The β parameter has been defined as the relative volume reduction from the pouring temperature down to room temperature. It is observed that shrinkage size and β decrease as effective carbon content increases and when inoculant is added in the pouring stream. A similar effect is found when the parameters selected from cooling curves show high graphite nucleation during solidification of cast irons for a given inoculation level. Pearson statistical analysis has been used to analyze the correlations among all involved variables, and a group of Bayesian networks has been subsequently built so as to obtain the most accurate model for predicting β as a function of the input processing variables. The developed models can be used in foundry plants to study shrinkage incidence variations in the manufacturing process and to optimize the related costs.

  6. Nonpolluting automobiles

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hoolboom, G.J.; Szabados, B.

    The advantages/disadvantages of energy storage devices, which can provide nonpolluting automobile systems, are discussed. Four types of storage devices are identified: electrochemical (batteries); hydrogen; electromechanical (flywheels); and molten salt heat storage. A high-speed flywheel with a small permanent magnet motor/generator has more advantages than any of the other systems and might become a real competitor to the internal combustion engine. A flywheel/motor/generator system for automobiles now becomes practical because of the technological advances in materials, bearings and solid state control circuits. The motor of choice is the squirrel cage induction motor, specially designed for automobile applications. The preferred controller for the induction motor is a forced commutated cycloconverter, which transforms a variable voltage/variable frequency source into a controlled variable-voltage/variable-frequency supply. A modulation strategy of the cycloconverter elements is selected to maintain a unity input displacement factor (power factor) under all conditions of load, voltage and frequency. The system is similar to that of the existing automobile, if only one motor is used: master controller-controller-motor-gears (fixed)-differential-wheels. In the case of two motors, the mechanical differential is replaced by an electric one: master controller-controller-motor-gears (fixed)-wheel. A four-wheel drive vehicle is obtained when four motors with their own controllers are used. 24 refs.

  7. Bearing Fault Diagnosis under Variable Speed Using Convolutional Neural Networks and the Stochastic Diagonal Levenberg-Marquardt Algorithm

    PubMed Central

    Tra, Viet; Kim, Jaeyoung; Kim, Jong-Myon

    2017-01-01

    This paper presents a novel method for diagnosing incipient bearing defects under variable operating speeds using convolutional neural networks (CNNs) trained via the stochastic diagonal Levenberg-Marquardt (S-DLM) algorithm. The CNNs utilize the spectral energy maps (SEMs) of the acoustic emission (AE) signals as inputs and automatically learn the optimal features, which yield the best discriminative models for diagnosing incipient bearing defects under variable operating speeds. The SEMs are two-dimensional maps that show the distribution of energy across different bands of the AE spectrum. It is hypothesized that the variation of a bearing’s speed would not alter the overall shape of the AE spectrum; rather, it may only scale and translate it. Thus, at different speeds, the same defect would yield SEMs that are scaled and shifted versions of each other. This hypothesis is confirmed by the experimental results, where CNNs trained using the S-DLM algorithm yield significantly better diagnostic performance under variable operating speeds compared to existing methods. In this work, the performance of different training algorithms is also evaluated to select the best training algorithm for the CNNs. The proposed method is used to diagnose both single and compound defects at six different operating speeds. PMID:29211025
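
    A minimal sketch of a CNN classifier over SEM-like inputs, assuming each SEM is a one-channel 32×32 grid of AE band energies. The layer sizes, class count, and training step are placeholders, and plain SGD is used in place of the paper's S-DLM algorithm.

    ```python
    # Toy CNN over spectral energy maps (SEMs); dimensions and classes are
    # invented. One SGD update on random data, just to show the wiring.
    import torch
    import torch.nn as nn

    class SemCnn(nn.Module):
        def __init__(self, n_classes: int = 7):   # e.g. healthy + 6 defects
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 8, kernel_size=5), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(8, 16, kernel_size=5), nn.ReLU(), nn.MaxPool2d(2),
            )
            self.classifier = nn.Linear(16 * 5 * 5, n_classes)

        def forward(self, x):
            x = self.features(x)                   # (B, 16, 5, 5) for 32x32 input
            return self.classifier(x.flatten(1))

    model = SemCnn()
    opt = torch.optim.SGD(model.parameters(), lr=1e-2)
    loss_fn = nn.CrossEntropyLoss()

    sems = torch.randn(8, 1, 32, 32)               # dummy mini-batch of SEMs
    labels = torch.randint(0, 7, (8,))
    loss = loss_fn(model(sems), labels)
    loss.backward(); opt.step()                    # one illustrative update
    print(float(loss))
    ```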

  8. Investigation of Spray Cooling Schemes for Dynamic Thermal Management

    NASA Astrophysics Data System (ADS)

    Yata, Vishnu Vardhan Reddy

    This study aims to investigate variable flow and intermittent flow spray cooling characteristics for efficiency improvement in active two-phase thermal management systems. Variable flow spray cooling scheme requires control of pump input voltage (or speed), while intermittent flow spray cooling scheme requires control of solenoid valve duty cycle and frequency. Several testing scenarios representing dynamic heat load conditions are implemented to characterize the overall performance of variable flow and intermittent flow spray cooling cases in comparison with the reference, steady flow spray cooling case with constant flowrate, continuous spray cooling. Tests are conducted on a small-scale, closed loop spray cooling system featuring a pressure atomized spray nozzle. HFE-7100 dielectric liquid is selected as the working fluid. Two types of test samples are prepared on 10 mm x 10 mm x 2 mm copper substrates with matching size thick film resistors attached onto the opposite side, to generate heat and simulate high heat flux electronic devices. The test samples include: (i) plain, smooth surface, and (ii) microporous surface featuring 100 μm thick copper-based coating prepared by dual stage electroplating technique. Experimental conditions involve HFE-7100 at atmospheric pressure and 30°C and 10°C subcooling. Steady flow spray cooling tests are conducted at flow rates of 2-5 ml/cm2.s, by controlling the heat flux in increasing steps, and recording the corresponding steady-state temperatures to obtain cooling curves in the form of surface superheat vs. heat flux. Variable flow and intermittent flow spray cooling tests are done at selected flowrate and subcooling conditions to investigate the effects of dynamic flow conditions on maintaining the target surface temperatures defined based on reference steady flow spray cooling performance.

  9. A novel approach for prediction of tacrolimus blood concentration in liver transplantation patients in the intensive care unit through support vector regression.

    PubMed

    Van Looy, Stijn; Verplancke, Thierry; Benoit, Dominique; Hoste, Eric; Van Maele, Georges; De Turck, Filip; Decruyenaere, Johan

    2007-01-01

    Tacrolimus is an important immunosuppressive drug for organ transplantation patients. It has a narrow therapeutic range, toxic side effects, and a blood concentration with wide intra- and interindividual variability. Hence, it is of the utmost importance to monitor tacrolimus blood concentration, thereby ensuring clinical effect and avoiding toxic side effects. Prediction models for tacrolimus blood concentration can improve clinical care by optimizing monitoring of these concentrations, especially in the initial phase after transplantation during intensive care unit (ICU) stay. This is the first study in the ICU in which support vector machines, as a new data modeling technique, are investigated and tested in their prediction capabilities of tacrolimus blood concentration. Linear support vector regression (SVR) and nonlinear radial basis function (RBF) SVR are compared with multiple linear regression (MLR). Tacrolimus blood concentrations, together with 35 other relevant variables from 50 liver transplantation patients, were extracted from our ICU database. This resulted in a dataset of 457 blood samples, on average between 9 and 10 samples per patient, finally resulting in a database of more than 16,000 data values. Nonlinear RBF SVR, linear SVR, and MLR were performed after selection of clinically relevant input variables and model parameters. Differences between observed and predicted tacrolimus blood concentrations were calculated. Prediction accuracy of the three methods was compared after fivefold cross-validation (Friedman test and Wilcoxon signed rank analysis). Linear SVR and nonlinear RBF SVR had mean absolute differences between observed and predicted tacrolimus blood concentrations of 2.31 ng/ml (standard deviation [SD] 2.47) and 2.38 ng/ml (SD 2.49), respectively. MLR had a mean absolute difference of 2.73 ng/ml (SD 3.79). The difference between linear SVR and MLR was statistically significant (p < 0.001). RBF SVR had the advantage of requiring only 2 input variables to perform this prediction in comparison to 15 and 16 variables needed by linear SVR and MLR, respectively. This is an indication of the superior prediction capability of nonlinear SVR. Prediction of tacrolimus blood concentration with linear and nonlinear SVR was excellent, and accuracy was superior in comparison with an MLR model.
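
    A sketch of the model comparison described above, using scikit-learn's SVR and linear regression with 5-fold cross-validation. The data are synthetic stand-ins mirroring only the sample size (457 observations, 15 predictors); scales and the nonlinear target are invented.

    ```python
    # Compare MLR, linear SVR and RBF SVR by cross-validated mean absolute
    # error, echoing the study's comparison on synthetic data.
    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVR

    rng = np.random.default_rng(1)
    X = rng.normal(size=(457, 15))                 # 457 samples, 15 predictors
    y = X[:, 0] - 0.5 * X[:, 1] ** 2 + rng.normal(0, 0.3, 457)  # nonlinear target

    models = {
        "MLR":        LinearRegression(),
        "linear SVR": make_pipeline(StandardScaler(), SVR(kernel="linear")),
        "RBF SVR":    make_pipeline(StandardScaler(), SVR(kernel="rbf")),
    }
    for name, model in models.items():
        mae = -cross_val_score(model, X, y, cv=5,
                               scoring="neg_mean_absolute_error")
        print(f"{name}: mean absolute error {mae.mean():.3f}")
    ```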

  10. Application of SPARROW modeling to understanding contaminant fate and transport from uplands to streams

    USGS Publications Warehouse

    Ator, Scott; Garcia, Ana Maria.

    2016-01-01

    Understanding spatial variability in contaminant fate and transport is critical to efficient regional water-quality restoration. An approach to capitalize on previously calibrated spatially referenced regression (SPARROW) models to improve the understanding of contaminant fate and transport was developed and applied to the case of nitrogen in the 166,000 km2 Chesapeake Bay watershed. A continuous function of four hydrogeologic, soil, and other landscape properties significant (α = 0.10) to nitrogen transport from uplands to streams was evaluated and compared among each of the more than 80,000 individual catchments (mean area, 2.1 km2) in the watershed. Budgets (including inputs, losses or net change in storage in uplands and stream corridors, and delivery to tidal waters) were also estimated for nitrogen applied to these catchments from selected upland sources. Most (81%) of such inputs are removed, retained, or otherwise processed in uplands rather than transported to surface waters. Combining SPARROW results with previous budget estimates suggests 55% of this processing is attributable to denitrification, 23% to crop or timber harvest, and 6% to volatilization. Remaining upland inputs represent a net annual increase in landscape storage in soils or biomass exceeding 10 kg per hectare in some areas. Such insights are important for planning watershed restoration and for improving future watershed models.

  11. Sympathovagal imbalance in hyperthyroidism.

    PubMed

    Burggraaf, J; Tulen, J H; Lalezari, S; Schoemaker, R C; De Meyer, P H; Meinders, A E; Cohen, A F; Pijl, H

    2001-07-01

    We assessed sympathovagal balance in thyrotoxicosis. Fourteen patients with Graves' hyperthyroidism were studied before and after 7 days of treatment with propranolol (40 mg 3 times a day) and in the euthyroid state. Data were compared with those obtained in a group of age-, sex-, and weight-matched controls. Autonomic inputs to the heart were assessed by power spectral analysis of heart rate variability. Systemic exposure to sympathetic neurohormones was estimated on the basis of 24-h urinary catecholamine excretion. The spectral power in the high-frequency domain was considerably reduced in hyperthyroid patients, indicating diminished vagal inputs to the heart. Increased heart rate and mid-frequency/high-frequency power ratio in the presence of reduced total spectral power and increased urinary catecholamine excretion strongly suggest enhanced sympathetic inputs in thyrotoxicosis. All abnormal features of autonomic balance were completely restored to normal in the euthyroid state. beta-Adrenoceptor antagonism reduced heart rate in hyperthyroid patients but did not significantly affect heart rate variability or catecholamine excretion. This is in keeping with the concept of a joint disruption of sympathetic and vagal inputs to the heart underlying changes in heart rate variability. Thus thyrotoxicosis is characterized by profound sympathovagal imbalance, brought about by increased sympathetic activity in the presence of diminished vagal tone.
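
    A sketch of the power-spectral decomposition behind such heart rate variability measures: a Welch periodogram of an evenly resampled RR-interval series, integrated over frequency bands. The RR series is simulated, and the band edges follow common HRV practice rather than this paper's exact mid/high-frequency bands.

    ```python
    # Welch PSD of a simulated RR-interval series with band powers.
    import numpy as np
    from scipy.signal import welch

    fs = 4.0                                      # resampled RR series, Hz
    t = np.arange(0, 300, 1 / fs)                 # 5 minutes
    rr = (0.8 + 0.02 * np.sin(2 * np.pi * 0.10 * t)   # ~0.1 Hz component
              + 0.03 * np.sin(2 * np.pi * 0.25 * t)   # ~0.25 Hz respiratory (vagal)
              + 0.005 * np.random.default_rng(6).normal(size=t.size))

    f, pxx = welch(rr - rr.mean(), fs=fs, nperseg=512)

    def band_power(lo, hi):
        m = (f >= lo) & (f < hi)
        return np.trapz(pxx[m], f[m])

    lf, hf = band_power(0.04, 0.15), band_power(0.15, 0.40)
    print(f"LF {lf:.2e}, HF {hf:.2e}, LF/HF {lf / hf:.2f}")
    ```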

  12. Research and Development in the Computer and Information Sciences. Volume 1, Information Acquisition, Sensing, and Input: A Selective Literature Review.

    ERIC Educational Resources Information Center

    Stevens, Mary Elizabeth

    The series, of which this is the initial report, is intended to give a selective overview of research and development efforts and requirements in the computer and information sciences. The operations of information acquisition, sensing, and input to information processing systems are considered in generalized terms. Specific topics include but are…

  13. The Interaction of Lexical Semantics and Cohort Competition in Spoken Word Recognition: An fMRI Study

    ERIC Educational Resources Information Center

    Zhuang, Jie; Randall, Billi; Stamatakis, Emmanuel A.; Marslen-Wilson, William D.; Tyler, Lorraine K.

    2011-01-01

    Spoken word recognition involves the activation of multiple word candidates on the basis of the initial speech input--the "cohort"--and selection among these competitors. Selection may be driven primarily by bottom-up acoustic-phonetic inputs or it may be modulated by other aspects of lexical representation, such as a word's meaning…

  14. Sensitivity and uncertainty of input sensor accuracy for grass-based reference evapotranspiration

    USDA-ARS?s Scientific Manuscript database

    Quantification of evapotranspiration (ET) in agricultural environments is becoming of increasing importance throughout the world, thus understanding input variability of relevant sensors is of paramount importance as well. The Colorado Agricultural and Meteorological Network (CoAgMet) and the Florid...

  15. Assessment of input uncertainty by seasonally categorized latent variables using SWAT

    USDA-ARS?s Scientific Manuscript database

    Watershed processes have been explored with sophisticated simulation models for the past few decades. It has been stated that uncertainty attributed to alternative sources such as model parameters, forcing inputs, and measured data should be incorporated during the simulation process. Among varyin...

  16. Speaker Invariance for Phonetic Information: an fMRI Investigation

    PubMed Central

    Salvata, Caden; Blumstein, Sheila E.; Myers, Emily B.

    2012-01-01

    The current study explored how listeners map the variable acoustic input onto a common sound structure representation while being able to retain phonetic detail to distinguish among the identity of talkers. An adaptation paradigm was utilized to examine areas which showed an equal neural response (equal release from adaptation) to phonetic change when spoken by the same speaker and when spoken by two different speakers, and insensitivity (failure to show release from adaptation) when the same phonetic input was spoken by a different speaker. Neural areas which showed speaker invariance were located in the anterior portion of the middle superior temporal gyrus bilaterally. These findings provide support for the view that speaker normalization processes allow for the translation of a variable speech input to a common abstract sound structure. That this process appears to occur early in the processing stream, recruiting temporal structures, suggests that this mapping takes place prelexically, before sound structure input is mapped on to lexical representations. PMID:23264714

  17. The input and output management of solid waste using DEA models: A case study at Jengka, Pahang

    NASA Astrophysics Data System (ADS)

    Mohamed, Siti Rosiah; Ghazali, Nur Fadzrina Mohd; Mohd, Ainun Hafizah

    2017-08-01

    Data Envelopment Analysis (DEA) as a tool for obtaining performance indices has been used extensively in many organizational sectors. Improving the efficiency of Decision Making Units (DMUs) can be impractical because some inputs and outputs are uncontrollable, and in certain situations they produce weak efficiency scores that often reflect the impact of the operating environment. Based on data from Alam Flora Sdn. Bhd. Jengka, this study determines the efficiency of solid waste management (SWM) in the town of Jengka, Pahang using the input- and output-oriented CCR models (CCR-I and CCR-O) of DEA and the duality formulation with average input and output vectors. Three input variables (collection length in meters, weekly collection time in hours, and number of garbage trucks) and two output variables (collection frequency and total solid waste collected in kilograms) are analyzed. In conclusion, only 3 of the 23 roads are efficient, achieving an efficiency score of 1, while the remaining 20 roads are managed inefficiently.
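
    A minimal sketch of the input-oriented CCR computation in its LP envelopment form, solved with scipy. The road data are made up; in the study each DMU is a collection road with three inputs and two outputs.

    ```python
    # Input-oriented CCR efficiency: min theta s.t. X'lam <= theta*x_k,
    # Y'lam >= y_k, lam >= 0, solved per DMU as a linear program.
    import numpy as np
    from scipy.optimize import linprog

    X = np.array([[1200.0, 6.0, 2.0],   # inputs per DMU: length, hours, trucks
                  [2000.0, 8.0, 3.0],
                  [1500.0, 5.0, 2.0],
                  [1800.0, 9.0, 4.0]])
    Y = np.array([[3.0,  950.0],        # outputs per DMU: collections, kg
                  [4.0, 1400.0],
                  [4.0, 1300.0],
                  [3.0, 1000.0]])
    n = X.shape[0]

    def ccr_input_efficiency(k):
        c = np.r_[np.zeros(n), 1.0]                 # variables: (lambda, theta)
        A_in = np.c_[X.T, -X[k]]                    # X'lam - theta*x_k <= 0
        A_out = np.c_[-Y.T, np.zeros(Y.shape[1])]   # -Y'lam <= -y_k
        res = linprog(c,
                      A_ub=np.vstack([A_in, A_out]),
                      b_ub=np.r_[np.zeros(X.shape[1]), -Y[k]],
                      bounds=[(0, None)] * n + [(None, None)])
        return res.fun

    for k in range(n):
        print(f"DMU {k}: efficiency {ccr_input_efficiency(k):.3f}")
    ```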

  18. A mathematical model for Vertical Attitude Takeoff and Landing (VATOL) aircraft simulation. Volume 2: Model equations and base aircraft data

    NASA Technical Reports Server (NTRS)

    Fortenbaugh, R. L.

    1980-01-01

    Equations incorporated in a VATOL six degree of freedom off-line digital simulation program and data for the Vought SF-121 VATOL aircraft concept which served as the baseline for the development of this program are presented. The equations and data are intended to facilitate the development of a piloted VATOL simulation. The equation presentation format is to state the equations which define a particular model segment. Listings of constants required to quantify the model segment, input variables required to exercise the model segment, and output variables required by other model segments are included. In several instances a series of input or output variables are followed by a section number in parentheses which identifies the model segment of origination or termination of those variables.

  19. Mathematical models of the simplest fuzzy PI/PD controllers with skewed input and output fuzzy sets.

    PubMed

    Mohan, B M; Sinha, Arpita

    2008-07-01

    This paper unveils mathematical models for fuzzy PI/PD controllers which employ two skewed fuzzy sets for each of the two input variables and three skewed fuzzy sets for the output variable. The basic constituents of these models are Gamma-type and L-type membership functions for each input, trapezoidal/triangular membership functions for output, intersection/algebraic product triangular norm, maximum/drastic sum triangular conorm, Mamdani minimum/Larsen product/drastic product inference method, and center of sums defuzzification method. The existing simplest fuzzy PI/PD controller structures derived via symmetrical fuzzy sets become special cases of the mathematical models revealed in this paper. Finally, a numerical example along with its simulation results are included to demonstrate the effectiveness of the simplest fuzzy PI controllers.
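
    A rough sketch of such a two-input fuzzy PI-type control law: a Gamma-type (increasing ramp) and an L-type (decreasing ramp) set on each of error and change of error, four Mamdani-min rules, and a simplified defuzzification that weights output-set centroids by firing strength as a stand-in for full center of sums. Breakpoints and centroids are illustrative, not the paper's skewed sets.

    ```python
    # Toy fuzzy PI increment: min t-norm, centroid-weighted defuzzification.

    def l_type(x, a=-1.0, b=1.0):
        """Decreasing ramp: 1 below a, 0 above b."""
        return max(0.0, min(1.0, (b - x) / (b - a)))

    def gamma_type(x, a=-1.0, b=1.0):
        """Increasing ramp: 0 below a, 1 above b."""
        return max(0.0, min(1.0, (x - a) / (b - a)))

    def fuzzy_pi_increment(e, de, u_m=1.0):
        """Incremental control output du from error and change of error."""
        neg_e, pos_e = l_type(e), gamma_type(e)
        neg_de, pos_de = l_type(de), gamma_type(de)
        # Rule strengths are the min of the antecedents; output sets
        # {Negative, Zero, Positive} have centroids -u_m, 0, +u_m.
        rules = [
            (min(neg_e, neg_de), -u_m),   # both negative -> decrease output
            (min(neg_e, pos_de), 0.0),    # conflicting   -> hold
            (min(pos_e, neg_de), 0.0),    # conflicting   -> hold
            (min(pos_e, pos_de), +u_m),   # both positive -> increase output
        ]
        num = sum(w * c for w, c in rules)
        den = sum(w for w, _ in rules)
        return num / den if den else 0.0

    print(fuzzy_pi_increment(0.4, -0.1))  # small positive correction
    ```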

  20. The potential of different artificial neural network (ANN) techniques in daily global solar radiation modeling based on meteorological data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Behrang, M.A.; Assareh, E.; Ghanbarzadeh, A.

    2010-08-15

    The main objective of the present study is to predict daily global solar radiation (GSR) on a horizontal surface, based on meteorological variables, using different artificial neural network (ANN) techniques. Daily mean air temperature, relative humidity, sunshine hours, evaporation, and wind speed values between 2002 and 2006 for Dezful city in Iran (32°16'N, 48°25'E) are used in this study. In order to consider the effect of each meteorological variable on daily GSR prediction, the six following combinations of input variables are considered: (I) day of the year, daily mean air temperature and relative humidity as inputs and daily GSR as output; (II) day of the year, daily mean air temperature and sunshine hours as inputs and daily GSR as output; (III) day of the year, daily mean air temperature, relative humidity and sunshine hours as inputs and daily GSR as output; (IV) day of the year, daily mean air temperature, relative humidity, sunshine hours and evaporation as inputs and daily GSR as output; (V) day of the year, daily mean air temperature, relative humidity, sunshine hours and wind speed as inputs and daily GSR as output; (VI) day of the year, daily mean air temperature, relative humidity, sunshine hours, evaporation and wind speed as inputs and daily GSR as output. Multi-layer perceptron (MLP) and radial basis function (RBF) neural networks are applied for daily GSR modeling based on the six proposed combinations. The measured data between 2002 and 2005 are used to train the neural networks while the data for 214 days from 2006 are used as testing data. The comparison of obtained results from ANNs and different conventional GSR prediction (CGSRP) models shows very good improvements (i.e., the predicted values of the best ANN model (MLP-V) have a mean absolute percentage error (MAPE) of about 5.21% versus 10.02% for the best CGSRP model (CGSRP 5)). (author)
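
    A sketch of input combination (III) above with a small MLP. Scikit-learn's MLPRegressor stands in for the paper's MLP, and the data generator is an invented placeholder for the Dezful measurements.

    ```python
    # MLP on (day of year, temperature, relative humidity, sunshine hours)
    # predicting daily GSR; all data synthetic.
    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(2)
    doy = rng.integers(1, 366, 1200)               # day of year
    temp = 25 + 12 * np.sin(2 * np.pi * doy / 365) + rng.normal(0, 2, 1200)
    rh = np.clip(50 - 0.5 * (temp - 25) + rng.normal(0, 5, 1200), 5, 100)
    sun = np.clip(8 + 3 * np.sin(2 * np.pi * doy / 365) + rng.normal(0, 1, 1200), 0, 14)
    gsr = 5 + 1.5 * sun + 0.05 * temp - 0.01 * rh + rng.normal(0, 0.5, 1200)

    X = np.column_stack([doy, temp, rh, sun])
    model = make_pipeline(StandardScaler(),
                          MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000,
                                       random_state=0))
    model.fit(X[:1000], gsr[:1000])                # train on the earlier days
    pred = model.predict(X[1000:])                 # hold out the rest
    mape = 100 * np.mean(np.abs((gsr[1000:] - pred) / gsr[1000:]))
    print(f"MAPE on held-out days: {mape:.1f}%")
    ```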

  1. Fuzzy Neuron: Method and Hardware Realization

    NASA Technical Reports Server (NTRS)

    Krasowski, Michael J.; Prokop, Norman F.

    2014-01-01

    This innovation represents a method by which single-to-multi-input, single-to-many-output system transfer functions can be estimated from input/output data sets. This innovation can be run in the background while a system is operating under other means (e.g., through human operator effort), or may be utilized offline using data sets created from observations of the estimated system. It utilizes a set of fuzzy membership functions spanning the input space for each input variable. Linear combiners associated with combinations of input membership functions are used to create the output(s) of the estimator. Coefficients are adjusted online through the use of learning algorithms.

  2. Group interaction and flight crew performance

    NASA Technical Reports Server (NTRS)

    Foushee, H. Clayton; Helmreich, Robert L.

    1988-01-01

    The application of human-factors analysis to the performance of aircraft-operation tasks by the crew as a group is discussed in an introductory review and illustrated with anecdotal material. Topics addressed include the function of a group in the operational environment, the classification of group performance factors (input, process, and output parameters), input variables and the flight crew process, and the effect of process variables on performance. Consideration is given to aviation safety issues, techniques for altering group norms, ways of increasing crew effort and coordination, and the optimization of group composition.

  3. Mode-selective mapping and control of vectorial nonlinear-optical processes in multimode photonic-crystal fibers.

    PubMed

    Hu, Ming-Lie; Wang, Ching-Yue; Song, You-Jian; Li, Yan-Feng; Chai, Lu; Serebryannikov, Evgenii; Zheltikov, Aleksei

    2006-02-06

    We demonstrate an experimental technique that allows a mapping of vectorial nonlinear-optical processes in multimode photonic-crystal fibers (PCFs). Spatial and polarization modes of PCFs are selectively excited in this technique by varying the tilt angle of the input beam and rotating the polarization of the input field. Intensity spectra of the PCF output plotted as a function of the input field power and polarization then yield mode-resolved maps of nonlinear-optical interactions in multimode PCFs, facilitating the analysis and control of nonlinear-optical transformations of ultrashort laser pulses in such fibers.

  4. Somatosensory Anticipatory Alpha Activity Increases to Suppress Distracting Input

    ERIC Educational Resources Information Center

    Haegens, Saskia; Luther, Lisa; Jensen, Ole

    2012-01-01

    Effective processing of sensory input in daily life requires attentional selection and amplification of relevant input and, just as importantly, attenuation of irrelevant information. It has been proposed that top-down modulation of oscillatory alpha band activity (8-14 Hz) serves to allocate resources to various regions, depending on task…

  5. Visual input enhances selective speech envelope tracking in auditory cortex at a "cocktail party".

    PubMed

    Zion Golumbic, Elana; Cogan, Gregory B; Schroeder, Charles E; Poeppel, David

    2013-01-23

    Our ability to selectively attend to one auditory signal amid competing input streams, epitomized by the "Cocktail Party" problem, continues to stimulate research from various approaches. How this demanding perceptual feat is achieved from a neural systems perspective remains unclear and controversial. It is well established that neural responses to attended stimuli are enhanced compared with responses to ignored ones, but responses to ignored stimuli are nonetheless highly significant, leading to interference in performance. We investigated whether congruent visual input of an attended speaker enhances cortical selectivity in auditory cortex, leading to diminished representation of ignored stimuli. We recorded magnetoencephalographic signals from human participants as they attended to segments of natural continuous speech. Using two complementary methods of quantifying the neural response to speech, we found that viewing a speaker's face enhances the capacity of auditory cortex to track the temporal speech envelope of that speaker. This mechanism was most effective in a Cocktail Party setting, promoting preferential tracking of the attended speaker, whereas without visual input no significant attentional modulation was observed. These neurophysiological results underscore the importance of visual input in resolving perceptual ambiguity in a noisy environment. Since visual cues in speech precede the associated auditory signals, they likely serve a predictive role in facilitating auditory processing of speech, perhaps by directing attentional resources to appropriate points in time when to-be-attended acoustic input is expected to arrive.

  6. Stochastic analysis of multiphase flow in porous media: II. Numerical simulations

    NASA Astrophysics Data System (ADS)

    Abin, A.; Kalurachchi, J. J.; Kemblowski, M. W.; Chang, C.-M.

    1996-08-01

    The first paper (Chang et al., 1995b) of this two-part series described the stochastic analysis using a spectral/perturbation approach to analyze steady state two-phase (water and oil) flow in a liquid-unsaturated, three-fluid-phase porous medium. In this paper, the results of numerical simulations are compared with the closed-form expressions obtained using the perturbation approach. We present the solution to the one-dimensional, steady-state oil and water flow equations. The stochastic input processes are the spatially correlated log k, where k is the intrinsic permeability, and the soil retention parameter, α. These solutions are subsequently used in the numerical simulations to estimate the statistical properties of the key output processes. The comparison between the results of the perturbation analysis and the numerical simulations showed good agreement between the two methods over a wide range of log k variability for three different combinations of the input stochastic processes of log k and the soil parameter α. The results clearly demonstrated the importance of considering the spatial variability of key subsurface properties under a variety of physical scenarios. The variability of both capillary pressure and saturation is affected by the type of input stochastic process used to represent the spatial variability. The results also demonstrated the applicability of perturbation theory in predicting the system variability and defining effective fluid properties through the ergodic assumption.

  7. Simulating maize yield and biomass with spatial variability of soil field capacity

    USDA-ARS?s Scientific Manuscript database

    Spatial variability in field soil water and other properties is a challenge for system modelers who use only representative values for model inputs, rather than their distributions. In this study, we compared simulation results from a calibrated model with spatial variability of soil field capacity ...

  8. Simulated lumped-parameter system reduced-order adaptive control studies

    NASA Technical Reports Server (NTRS)

    Johnson, C. R., Jr.; Lawrence, D. A.; Taylor, T.; Malakooti, M. V.

    1981-01-01

    Two methods of interpreting the misbehavior of reduced order adaptive controllers are discussed. The first method is based on system input-output description and the second is based on state variable description. The implementation of the single input, single output, autoregressive, moving average system is considered.

  9. Orientation selectivity of synaptic input to neurons in mouse and cat primary visual cortex.

    PubMed

    Tan, Andrew Y Y; Brown, Brandon D; Scholl, Benjamin; Mohanty, Deepankar; Priebe, Nicholas J

    2011-08-24

    Primary visual cortex (V1) is the site at which orientation selectivity emerges in mammals: visual thalamus afferents to V1 respond equally to all stimulus orientations, whereas their target V1 neurons respond selectively to stimulus orientation. The emergence of orientation selectivity in V1 has long served as a model for investigating cortical computation. Recent evidence for orientation selectivity in mouse V1 opens cortical computation to dissection by genetic and imaging tools, but also raises two essential questions: (1) How does orientation selectivity in mouse V1 neurons compare with that in previously described species? (2) What is the synaptic basis for orientation selectivity in mouse V1? A comparison of orientation selectivity in mouse and in cat, where such measures have traditionally been made, reveals that orientation selectivity in mouse V1 is weaker than in cat V1, but that spike threshold plays a similar role in narrowing selectivity between membrane potential and spike rate. To uncover the synaptic basis for orientation selectivity, we made whole-cell recordings in vivo from mouse V1 neurons, comparing neuronal input selectivity-based on membrane potential, synaptic excitation, and synaptic inhibition-to output selectivity based on spiking. We found that a neuron's excitatory and inhibitory inputs are selective for the same stimulus orientations as is its membrane potential response, and that inhibitory selectivity is not broader than excitatory selectivity. Inhibition has different dynamics than excitation, adapting more rapidly. In neurons with temporally modulated responses, the timing of excitation and inhibition was different in mice and cats.

  10. Orientation Selectivity of Synaptic Input to Neurons in Mouse and Cat Primary Visual Cortex

    PubMed Central

    Tan (陈勇毅), Andrew Y. Y.; Brown, Brandon D.; Scholl, Benjamin; Mohanty, Deepankar; Priebe, Nicholas J.

    2011-01-01

    Primary visual cortex (V1) is the site at which orientation selectivity emerges in mammals: visual thalamus afferents to V1 respond equally to all stimulus orientations whereas their target V1 neurons respond selectively to stimulus orientation. The emergence of orientation selectivity in V1 has long served as a model for investigating cortical computation. Recent evidence for orientation selectivity in mouse V1 opens cortical computation to dissection by genetic and imaging tools, but also raises two essential questions: 1) how does orientation selectivity in mouse V1 neurons compare with that in previously described species? 2) what is the synaptic basis for orientation selectivity in mouse V1? A comparison of orientation selectivity in mouse and in cat, where such measures have traditionally been made, reveals that orientation selectivity in mouse V1 is weaker than in cat V1, but that spike threshold plays a similar role in narrowing selectivity between membrane potential and spike rate. To uncover the synaptic basis for orientation selectivity, we made whole-cell recordings in vivo from mouse V1 neurons, comparing neuronal input selectivity - based on membrane potential, synaptic excitation, and synaptic inhibition - to output selectivity based on spiking. We found that a neuron's excitatory and inhibitory inputs are selective for the same stimulus orientations as is its membrane potential response, and that inhibitory selectivity is not broader than excitatory selectivity. Inhibition has different dynamics than excitation, adapting more rapidly. In neurons with temporally modulated responses, the timing of excitation and inhibition was different in mice and cats. PMID:21865476

  11. Multilayer perceptron neural network-based approach for modeling phycocyanin pigment concentrations: case study from lower Charles River buoy, USA.

    PubMed

    Heddam, Salim

    2016-09-01

    This paper proposes a multilayer perceptron neural network (MLPNN) to predict phycocyanin (PC) pigment concentration using water quality variables as predictors. In the proposed model, four water quality variables, namely water temperature, dissolved oxygen, pH, and specific conductance, were selected as the inputs for the MLPNN model, and PC as the output. To demonstrate the capability and usefulness of the MLPNN model, a total of 15,849 data points measured at 15-min intervals are used for the development of the model. The data are collected at the lower Charles River buoy, and are available from the US Environmental Protection Agency (USEPA). For comparison purposes, a multiple linear regression (MLR) model, frequently used for predicting water quality variables in previous studies, is also built. The performances of the models are evaluated using a set of widely used statistical indices. The performance of the MLPNN and MLR models is compared with the measured data. The obtained results show that (i) all the proposed MLPNN models are more accurate than the MLR models and (ii) the results obtained are very promising and encouraging for the development of phycocyanin-predictive models.

  12. New approach for reduction of diesel consumption by comparing different mining haulage configurations.

    PubMed

    Rodovalho, Edmo da Cunha; Lima, Hernani Mota; de Tomi, Giorgio

    2016-05-01

    The mining operations of loading and haulage have an energy source that is highly dependent on fossil fuels. In mining companies that select trucks for haulage, this input is the main component of mining costs. How can the impact of operational aspects on the diesel consumption of haulage operations in surface mines be assessed? There are many studies relating truck fuel consumption to several variables, but a methodology that prioritizes higher-impact variables under each specific condition is not available. Generic models may not apply to all operational settings presented in the mining industry. This study aims to create a method of analysis, identification, and prioritization of variables related to the fuel consumption of haul trucks in open pit mines. For this purpose, statistical analysis techniques and mathematical modelling tools using multiple linear regression are applied. The model is shown to be suitable because the results give a good description of fuel consumption behaviour. In the practical application of the method, the reduction in diesel consumption reached 10%. The implementation requires no large-scale investments or very long deadlines and can be applied to mining haulage operations in other settings. Copyright © 2016 Elsevier Ltd. All rights reserved.

  13. A System-Oriented Approach for the Optimal Control of Process Chains under Stochastic Influences

    NASA Astrophysics Data System (ADS)

    Senn, Melanie; Schäfer, Julian; Pollak, Jürgen; Link, Norbert

    2011-09-01

    Process chains in manufacturing consist of multiple connected processes in terms of dynamic systems. The properties of a product passing through such a process chain are influenced by the transformation of each single process. There exist various methods for the control of individual processes, such as classical state controllers from cybernetics or function mapping approaches realized by statistical learning. These controllers ensure that a desired state is obtained at process end despite of variations in the input and disturbances. The interactions between the single processes are thereby neglected, but play an important role in the optimization of the entire process chain. We divide the overall optimization into two phases: (1) the solution of the optimization problem by Dynamic Programming to find the optimal control variable values for each process for any encountered end state of its predecessor and (2) the application of the optimal control variables at runtime for the detected initial process state. The optimization problem is solved by selecting adequate control variables for each process in the chain backwards based on predefined quality requirements for the final product. For the demonstration of the proposed concept, we have chosen a process chain from sheet metal manufacturing with simplified transformation functions.
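
    A schematic of the two-phase scheme described above, with an invented one-dimensional state and quadratic costs: phase (1) tabulates the best control for every discretized intermediate state by backward dynamic programming, and phase (2) merely looks the control up at runtime for the detected initial state.

    ```python
    # Backward DP over a toy two-process chain; transition and cost
    # functions are stand-ins for the real process models.
    import numpy as np

    states = np.linspace(0.0, 1.0, 21)        # discretized product property
    controls = np.linspace(-0.2, 0.2, 9)      # admissible control increments
    target = 0.8                              # quality requirement at chain end

    def step(s, u):
        """Toy process transformation: drift toward s + u, clipped to [0, 1]."""
        return np.clip(s + u, 0.0, 1.0)

    # Terminal cost: squared deviation from the required final product state.
    cost_to_go = (states - target) ** 2
    policy = []                               # policy[p][i] = best u at state i

    for _ in range(2):                        # two processes, solved backwards
        best_u = np.empty_like(states)
        new_cost = np.empty_like(states)
        for i, s in enumerate(states):
            next_s = step(s, controls)
            # interpolate the successor cost-to-go at the reached states
            total = 0.1 * controls ** 2 + np.interp(next_s, states, cost_to_go)
            j = int(np.argmin(total))
            best_u[i], new_cost[i] = controls[j], total[j]
        policy.append(best_u)
        cost_to_go = new_cost
    policy.reverse()                          # index 0 = first process in chain

    s = 0.35                                  # runtime: observed initial state
    for p, table in enumerate(policy):
        u = np.interp(s, states, table)       # look up (interpolate) control
        print(f"process {p}: state {s:.2f} -> control {u:+.3f}")
        s = step(s, u)
    ```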

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, J.; Moon, T.J.; Howell, J.R.

    This paper presents an analysis of the heat transfer occurring during an in-situ curing process for which infrared energy is provided on the surface of polymer composite during winding. The material system is Hercules prepreg AS4/3501-6. Thermoset composites have an exothermic chemical reaction during the curing process. An Eulerian thermochemical model is developed for the heat transfer analysis of helical winding. The model incorporates heat generation due to the chemical reaction. Several assumptions are made leading to a two-dimensional, thermochemical model. For simplicity, 360° heating around the mandrel is considered. In order to generate the appropriate process windows, the developed heat transfer model is combined with a simple winding time model. The process windows allow for a proper selection of process variables such as infrared energy input and winding velocity to give a desired end-product state. Steady-state temperatures are found for each combination of the process variables. A regression analysis is carried out to relate the process variables to the resulting steady-state temperatures. Using regression equations, process windows for a wide range of cylinder diameters are found. A general procedure to find process windows for Hercules AS4/3501-6 prepreg tape is coded in a FORTRAN program.

  15. Ecological suitability modeling for anthrax in the Kruger National Park, South Africa.

    PubMed

    Steenkamp, Pieter Johan; van Heerden, Henriette; van Schalkwyk, Ockert Louis

    2018-01-01

    The spores of the soil-borne bacterium Bacillus anthracis, which causes anthrax, are highly resistant to adverse environmental conditions. Under ideal conditions, anthrax spores can survive for many years in the soil. Anthrax is known to be endemic in the northern part of Kruger National Park (KNP) in South Africa (SA), with occasional epidemics spreading southward. The aim of this study was to identify and map areas that are ecologically suitable for the harboring of B. anthracis spores within the KNP. Anthrax surveillance data and selected environmental variables were used as inputs to the maximum entropy (Maxent) species distribution modeling method. Anthrax positive carcasses from 1988-2011 in KNP (n = 597) and a total of 40 environmental variables were used to predict suitability for anthrax occurrence in KNP and to evaluate each variable's relative contribution. The environmental variables that contributed the most to the occurrence of anthrax were soil type, normalized difference vegetation index (NDVI) and precipitation. Apart from the endemic Pafuri region, several other areas within KNP were classified as ecologically suitable. The outputs of this study could guide future surveillance efforts to focus on predicted suitable areas for anthrax, since the KNP currently uses passive surveillance to detect anthrax outbreaks.

  16. Temporal dynamics of hot desert microbial communities reveal structural and functional responses to water input

    PubMed Central

    Armstrong, Alacia; Valverde, Angel; Ramond, Jean-Baptiste; Makhalanyane, Thulani P.; Jansson, Janet K.; Hopkins, David W.; Aspray, Thomas J.; Seely, Mary; Trindade, Marla I.; Cowan, Don A.

    2016-01-01

    The temporal dynamics of desert soil microbial communities are poorly understood. Given the implications for ecosystem functioning under a global change scenario, a better understanding of desert microbial community stability is crucial. Here, we sampled soils in the central Namib Desert on sixteen different occasions over a one-year period. Using Illumina-based amplicon sequencing of the 16S rRNA gene, we found that α-diversity (richness) was more variable at a given sampling date (spatial variability) than over the course of one year (temporal variability). Community composition remained essentially unchanged across the first 10 months, indicating that spatial sampling might be more important than temporal sampling when assessing β-diversity patterns in desert soils. However, a major shift in microbial community composition was found following a single precipitation event. This shift in composition was associated with a rapid increase in CO2 respiration and productivity, supporting the view that desert soil microbial communities respond rapidly to re-wetting and that this response may be the result of both taxon-specific selection and changes in the availability or accessibility of organic substrates. Recovery to quasi pre-disturbance community composition was achieved within one month after rainfall. PMID:27680878

  17. Temporal dynamics of hot desert microbial communities reveal structural and functional responses to water input.

    PubMed

    Armstrong, Alacia; Valverde, Angel; Ramond, Jean-Baptiste; Makhalanyane, Thulani P; Jansson, Janet K; Hopkins, David W; Aspray, Thomas J; Seely, Mary; Trindade, Marla I; Cowan, Don A

    2016-09-29

    The temporal dynamics of desert soil microbial communities are poorly understood. Given the implications for ecosystem functioning under a global change scenario, a better understanding of desert microbial community stability is crucial. Here, we sampled soils in the central Namib Desert on sixteen different occasions over a one-year period. Using Illumina-based amplicon sequencing of the 16S rRNA gene, we found that α-diversity (richness) was more variable at a given sampling date (spatial variability) than over the course of one year (temporal variability). Community composition remained essentially unchanged across the first 10 months, indicating that spatial sampling might be more important than temporal sampling when assessing β-diversity patterns in desert soils. However, a major shift in microbial community composition was found following a single precipitation event. This shift in composition was associated with a rapid increase in CO2 respiration and productivity, supporting the view that desert soil microbial communities respond rapidly to re-wetting and that this response may be the result of both taxon-specific selection and changes in the availability or accessibility of organic substrates. Recovery to quasi pre-disturbance community composition was achieved within one month after rainfall.

  18. A new polytopic approach for the unknown input functional observer design

    NASA Astrophysics Data System (ADS)

    Bezzaoucha, Souad; Voos, Holger; Darouach, Mohamed

    2018-03-01

    In this paper, a constructive procedure to design Functional Unknown Input Observers for nonlinear continuous time systems is proposed under the Polytopic Takagi-Sugeno framework. An equivalent representation for the nonlinear model is achieved using the sector nonlinearity transformation. Applying the Lyapunov theory and the ? attenuation, linear matrix inequalities conditions are deduced which are solved for feasibility to obtain the observer design matrices. To cope with the effect of unknown inputs, classical approach of decoupling the unknown input for the linear case is used. Both algebraic and solver-based solutions are proposed (relaxed conditions). Necessary and sufficient conditions for the existence of the functional polytopic observer are given. For both approaches, the general and particular cases (measurable premise variables, full state estimation with full and reduced order cases) are considered and it is shown that the proposed conditions correspond to the one presented for standard linear case. To illustrate the proposed theoretical results, detailed numerical simulations are presented for a Quadrotor Aerial Robots Landing and a Waste Water Treatment Plant. Both systems are highly nonlinear and represented in a T-S polytopic form with unmeasurable premise variables and unknown inputs.

  19. Sensitivity and uncertainty in crop water footprint accounting: a case study for the Yellow River basin

    NASA Astrophysics Data System (ADS)

    Zhuo, L.; Mekonnen, M. M.; Hoekstra, A. Y.

    2014-06-01

    Water Footprint Assessment is a fast-growing field of research, but as yet little attention has been paid to the uncertainties involved. This study investigates the sensitivity of and uncertainty in crop water footprint (in m3 t-1) estimates related to uncertainties in important input variables. The study focuses on the green (from rainfall) and blue (from irrigation) water footprint of producing maize, soybean, rice, and wheat at the scale of the Yellow River basin in the period 1996-2005. A grid-based daily water balance model at a 5 by 5 arcmin resolution was applied to compute green and blue water footprints of the four crops in the Yellow River basin in the period considered. The one-at-a-time method was carried out to analyse the sensitivity of the crop water footprint to fractional changes of seven individual input variables and parameters: precipitation (PR), reference evapotranspiration (ET0), crop coefficient (Kc), crop calendar (planting date with constant growing degree days), soil water content at field capacity (Smax), yield response factor (Ky) and maximum yield (Ym). Uncertainties in crop water footprint estimates related to uncertainties in four key input variables: PR, ET0, Kc, and crop calendar were quantified through Monte Carlo simulations. The results show that the sensitivities and uncertainties differ across crop types. In general, the water footprint of crops is most sensitive to ET0 and Kc, followed by the crop calendar. Blue water footprints were more sensitive to input variability than green water footprints. The smaller the annual blue water footprint is, the higher its sensitivity to changes in PR, ET0, and Kc. The uncertainties in the total water footprint of a crop due to combined uncertainties in climatic inputs (PR and ET0) were about ±20% (at 95% confidence interval). The effect of uncertainties in ET0 was dominant compared to that of PR. The uncertainties in the total water footprint of a crop as a result of combined key input uncertainties were on average ±30% (at 95% confidence level).
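
    A sketch of the one-at-a-time procedure: perturb each input of a toy crop water footprint function by ±10% and report the relative change in the output. The function below is a crude stand-in for the grid-based daily water balance model, kept only to illustrate the procedure; all values are invented.

    ```python
    # One-at-a-time (OAT) sensitivity on a toy water footprint function.
    base = {"PR": 450.0,      # precipitation, mm
            "ET0": 900.0,     # reference evapotranspiration, mm
            "Kc": 1.05,       # crop coefficient
            "Ym": 6.0}        # maximum yield, t ha^-1

    def water_footprint(PR, ET0, Kc, Ym):
        """Toy WF (m3 t^-1): crop water use over yield, with a mild PR effect."""
        cwu = 10.0 * Kc * ET0                  # m3 ha^-1 from mm of ET
        yield_t = Ym * min(1.0, 0.5 + PR / 1000.0)
        return cwu / yield_t

    wf0 = water_footprint(**base)
    for name in base:
        for frac in (-0.10, +0.10):
            p = dict(base)
            p[name] *= 1.0 + frac
            change = 100.0 * (water_footprint(**p) - wf0) / wf0
            print(f"{name} {frac:+.0%}: WF changes by {change:+.1f}%")
    ```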

  20. A novel framework to simulating non-stationary, non-linear, non-Normal hydrological time series using Markov Switching Autoregressive Models

    NASA Astrophysics Data System (ADS)

    Birkel, C.; Paroli, R.; Spezia, L.; Tetzlaff, D.; Soulsby, C.

    2012-12-01

    In this paper we present a novel model framework using the class of Markov Switching Autoregressive Models (MSARMs) to examine catchments as complex stochastic systems that exhibit non-stationary, non-linear and non-Normal rainfall-runoff and solute dynamics. MSARMs are pairs of stochastic processes, one observed and one unobserved, or hidden. We model the unobserved process as a finite state Markov chain and assume that the observed process, given the hidden Markov chain, is conditionally autoregressive, which means that the current observation depends on its recent past (system memory). The model is fully embedded in a Bayesian analysis based on Markov Chain Monte Carlo (MCMC) algorithms for model selection and uncertainty assessment. In this framework, the autoregressive order and the dimension of the hidden Markov chain state-space are essentially self-selected. The hidden states of the Markov chain represent unobserved levels of variability in the observed process that may result from complex interactions of hydroclimatic variability on the one hand and catchment characteristics affecting water and solute storage on the other. To deal with non-stationarity, additional meteorological and hydrological time series along with a periodic component can be included in the MSARMs as covariates. This extension allows identification of potential underlying drivers of temporal rainfall-runoff and solute dynamics. We applied the MSAR model framework to streamflow and conservative tracer (deuterium and oxygen-18) time series from an intensively monitored 2.3 km2 experimental catchment in eastern Scotland. Statistical time series analysis, in the form of MSARMs, suggested that the streamflow and isotope tracer time series are not controlled by simple linear rules. MSARMs showed that the dependence of current observations on past inputs, which transport models often capture as long-tailed travel time and residence time distributions, can be efficiently explained by non-stationarity of the system input (climatic variability) and/or the complexity of catchment storage characteristics. The statistical model is also capable of reproducing short-term (event) and longer-term (inter-event), wet and dry dynamical "hydrological states". These reflect the non-linear transport mechanisms of flow pathways induced by transient climatic and hydrological variables and modified by catchment characteristics. We conclude that MSARMs are a powerful tool to analyze the temporal dynamics of hydrological data, allowing for explicit integration of non-stationary, non-linear and non-Normal characteristics.
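
    As a toy illustration of the model class, the sketch below simulates a two-state MSAR process of order 1: a hidden chain switches between a "wet" and a "dry" regime, each with its own AR(1) dynamics. The parameters are invented, and the paper's Bayesian MCMC estimation is not attempted here.

    ```python
    # Simulate a two-state Markov switching AR(1) process.
    import numpy as np

    rng = np.random.default_rng(7)
    P = np.array([[0.95, 0.05],         # transition matrix of the hidden chain
                  [0.10, 0.90]])
    mu = [2.0, 0.3]                     # regime means (e.g., log streamflow)
    phi = [0.6, 0.8]                    # regime AR(1) coefficients
    sigma = [0.5, 0.1]                  # regime innovation std devs

    T = 500
    s = np.zeros(T, dtype=int)          # hidden states
    y = np.zeros(T)                     # observed series
    y[0] = mu[0]
    for t in range(1, T):
        s[t] = rng.choice(2, p=P[s[t - 1]])
        k = s[t]
        y[t] = mu[k] + phi[k] * (y[t - 1] - mu[k]) + rng.normal(0, sigma[k])

    print("time in wet regime:", np.mean(s == 0))
    print("sample mean by regime:", y[s == 0].mean(), y[s == 1].mean())
    ```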

  1. Sensitivity Analysis of Stability Problems of Steel Structures using Shell Finite Elements and Nonlinear Computation Methods

    NASA Astrophysics Data System (ADS)

    Kala, Zdeněk; Kala, Jiří

    2011-09-01

    The main focus of the paper is the analysis of the influence of residual stress on the ultimate limit state of a hot-rolled member in compression. The member was modelled using thin-walled elements of type SHELL 181 and meshed in the programme ANSYS. Geometrical and material non-linear analysis was used. The influence of residual stress was studied using variance-based sensitivity analysis. In order to obtain more general results, the non-dimensional slenderness was selected as a study parameter. Comparison of the influence of the residual stress with the influence of other dominant imperfections is illustrated in the conclusion of the paper. All input random variables were considered according to results of experimental research.
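
    A sketch of variance-based sensitivity analysis using the standard pick-and-freeze (Saltelli-style) estimator of first-order Sobol indices on a toy capacity function. The inputs stand in for imperfections such as residual stress, yield strength and geometry; they are not the paper's experimental distributions or its ANSYS model.

    ```python
    # First-order Sobol indices via the pick-and-freeze estimator.
    import numpy as np

    rng = np.random.default_rng(3)
    names = ["residual stress", "yield strength", "thickness"]

    def capacity(z):
        """Toy ultimate capacity; z columns are the standardized inputs."""
        return 1.0 - 0.5 * z[:, 0] + 1.2 * z[:, 1] + 0.3 * z[:, 2] ** 2

    N, d = 20000, 3
    A = rng.normal(size=(N, d))
    B = rng.normal(size=(N, d))
    fA, fB = capacity(A), capacity(B)
    var = np.var(np.r_[fA, fB])

    for i, name in enumerate(names):
        ABi = A.copy()
        ABi[:, i] = B[:, i]             # replace only the i-th column
        Si = np.mean(fB * (capacity(ABi) - fA)) / var   # Saltelli (2010) form
        print(f"S1[{name}] = {Si:.2f}")
    ```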

  2. Turbine Engine Variable Cycle Selection Program Summary.

    DTIC Science & Technology

    1977-04-01

    [OCR residue from the scanned report; only fragments of the documentation page and table of contents are legible: 1. Introduction; 2. Program Approach; 3. Fighter Engine/Airframe Evaluation Procedure (3.1 Input, 3.2 Computation); parametric mission dash values; parametric engine characteristics; thrust lapse comparisons.]

  3. Gravity dependence of the effect of optokinetic stimulation on the subjective visual vertical.

    PubMed

    Ward, Bryan K; Bockisch, Christopher J; Caramia, Nicoletta; Bertolini, Giovanni; Tarnutzer, Alexander Andrea

    2017-05-01

    Accurate and precise estimates of direction of gravity are essential for spatial orientation. According to Bayesian theory, multisensory vestibular, visual, and proprioceptive input is centrally integrated in a weighted fashion based on the reliability of the component sensory signals. For otolithic input, a decreasing signal-to-noise ratio was demonstrated with increasing roll angle. We hypothesized that the weights of vestibular (otolithic) and extravestibular (visual/proprioceptive) sensors are roll-angle dependent and predicted an increased weight of extravestibular cues with increasing roll angle, potentially following the Bayesian hypothesis. To probe this concept, the subjective visual vertical (SVV) was assessed in different roll positions (≤ ± 120°, steps = 30°, n = 10) with/without presenting an optokinetic stimulus (velocity = ± 60°/s). The optokinetic stimulus biased the SVV toward the direction of stimulus rotation for roll angles ≥ ± 30° (P < 0.005). Offsets grew from 3.9 ± 1.8° (upright) to 22.1 ± 11.8° (±120° roll tilt, P < 0.001). Trial-to-trial variability increased with roll angle, demonstrating a nonsignificant increase when providing optokinetic stimulation. Variability and optokinetic bias were correlated (R2 = 0.71, slope = 0.71, 95% confidence interval = 0.57-0.86). An optimal-observer model combining an optokinetic bias with vestibular input reproduced measured errors closely. These findings support the hypothesis of a weighted multisensory integration when estimating direction of gravity with optokinetic stimulation. Visual input was weighted more when vestibular input became less reliable, i.e., at larger roll-tilt angles. However, according to Bayesian theory, the variability of combined cues is always lower than the variability of each source cue. If the observed increase in variability, although nonsignificant, is true, either it must depend on an additional source of variability, added after SVV computation, or it would conflict with the Bayesian hypothesis. NEW & NOTEWORTHY Applying a rotating optokinetic stimulus while recording the subjective visual vertical in different whole body roll angles, we noted the optokinetic-induced bias to correlate with the roll angle. These findings allow the hypothesis that the established optimal weighting of single-sensory cues depending on their reliability to estimate direction of gravity could be extended to a bias caused by visual self-motion stimuli. Copyright © 2017 the American Physiological Society.

  4. Pilot study of a novel tool for input-free automated identification of transition zone prostate tumors using T2- and diffusion-weighted signal and textural features.

    PubMed

    Stember, Joseph N; Deng, Fang-Ming; Taneja, Samir S; Rosenkrantz, Andrew B

    2014-08-01

    To present results of a pilot study to develop software that identifies regions suspicious for prostate transition zone (TZ) tumor, free of user input. Eight patients with TZ tumors were used to develop the model by training a Naïve Bayes classifier to detect tumors based on selection of the most accurate predictors among various signal and textural features on T2-weighted imaging (T2WI) and apparent diffusion coefficient (ADC) maps. Features tested as inputs were: average signal, signal standard deviation, energy, contrast, correlation, homogeneity and entropy (all defined on T2WI); and average ADC. In the training cases, the software tiled the TZ with 4 × 4-voxel "supervoxels," 80% of which were used to train the classifier; a forward selection scheme was then run on the remaining 20% of training-set supervoxels to identify the important inputs. Each of 100 iterations selected T2WI energy and average ADC, which were therefore deemed the optimal model inputs. The two-feature model was applied blindly to a separate set of ten test patients, half with TZ tumors, again without operator input of suspicious foci. The software correctly predicted the presence or absence of TZ tumor in all test patients, and the locations of predicted tumors corresponded spatially with the locations of biopsies that had confirmed their presence. Preliminary findings suggest that this tool has potential to accurately predict TZ tumor presence and location without operator input. © 2013 Wiley Periodicals, Inc.
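
    A minimal sketch of the same model family using scikit-learn: a Gaussian Naïve Bayes classifier wrapped in forward sequential feature selection. The synthetic feature matrix and the choice of n_features_to_select=2 mirror the abstract's two-feature outcome but are otherwise hypothetical, not the authors' code.

        import numpy as np
        from sklearn.naive_bayes import GaussianNB
        from sklearn.feature_selection import SequentialFeatureSelector

        rng = np.random.default_rng(0)
        # Columns: T2 mean, T2 std, energy, contrast, correlation, homogeneity, entropy, mean ADC
        X = rng.normal(size=(400, 8))
        y = (0.8 * X[:, 2] - 1.1 * X[:, 7] + rng.normal(size=400)) > 0  # synthetic tumor labels

        sfs = SequentialFeatureSelector(GaussianNB(), n_features_to_select=2,
                                        direction="forward", cv=5)
        sfs.fit(X, y)
        print(sfs.get_support())                     # boolean mask of the two selected features
        clf = GaussianNB().fit(sfs.transform(X), y)  # final two-feature classifier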

  5. Control theory analysis of a three-axis VTOL flight director. M.S. Thesis - Pennsylvania State Univ.

    NASA Technical Reports Server (NTRS)

    Niessen, F. R.

    1971-01-01

    A control theory analysis of a VTOL flight director and the results of a fixed-base simulator evaluation of the flight-director commands are discussed. The VTOL configuration selected for this study is a helicopter-type VTOL which controls the direction of the thrust vector by means of vehicle-attitude changes and, furthermore, employs high-gain attitude stabilization. This configuration is the same as one which was simulated in actual instrument flight tests with a variable-stability helicopter. Stability analyses are made for each of the flight-director commands, assuming a single-input, single-output, multi-loop system model for each control axis. The analyses proceed from the inner loops to the outer loops, using an analytical pilot model selected on the basis of the innermost-loop dynamics. The time response of the analytical model of the system is used primarily to adjust system gains, while root locus plots are used to identify dominant modes and mode interactions.
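
    The inner-to-outer loop-closure procedure described here can be reproduced schematically with the python-control package. All transfer functions and gains below are toy values chosen for illustration, not those of the simulated VTOL.

        import control

        # Toy pitch-attitude dynamics with high-gain attitude stabilization
        attitude = control.tf([20.0], [1.0, 8.0, 20.0])
        pilot = control.tf([1.5], [1.0])                   # analytical pilot model as a pure gain

        inner = control.feedback(pilot * attitude, 1)      # close the attitude (inner) loop first
        position = inner * control.tf([1.0], [1.0, 0.0])   # toy attitude-to-position dynamics
        outer = control.feedback(0.5 * position, 1)        # then close the outer loop

        t, y = control.step_response(outer)                # time response used to adjust gains
        control.root_locus(0.5 * position)                 # dominant modes and mode interactions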

  6. Genetic Programming Transforms in Linear Regression Situations

    NASA Astrophysics Data System (ADS)

    Castillo, Flor; Kordon, Arthur; Villa, Carlos

    The chapter summarizes the use of Genetic Programming (GP) in Multiple Linear Regression (MLR) to address multicollinearity and Lack of Fit (LOF). The basis of the proposed method is applying appropriate input transforms (model respecification) that deal with these issues while preserving the information content of the original variables. The transforms are selected from symbolic regression models with an optimal trade-off between accuracy of prediction and expressional complexity, generated by multiobjective Pareto-front GP. The chapter includes a comparative study of the GP-generated transforms with Ridge Regression, a variant of ordinary Multiple Linear Regression that has been a useful and commonly employed approach for reducing multicollinearity. The advantages of GP-generated model respecification are clearly defined and demonstrated, and some recommendations for transform selection are given. The application benefits of the proposed approach are illustrated with a real industrial application in one of the broadest empirical modeling areas in manufacturing: robust inferential sensors. The chapter contributes to increasing awareness of the potential of GP in statistical model building by MLR.
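
    As a reference point for the multicollinearity problem, the following hedged sketch shows how Ridge Regression stabilizes coefficients on nearly collinear inputs; the data are synthetic, and the GP-generated transforms themselves are beyond the scope of a few lines.

        import numpy as np
        from sklearn.linear_model import LinearRegression, Ridge

        rng = np.random.default_rng(42)
        n = 200
        x1 = rng.normal(size=n)
        x2 = x1 + 0.01 * rng.normal(size=n)        # nearly collinear with x1
        X = np.column_stack([x1, x2])
        y = 3.0 * x1 + rng.normal(size=n)

        print(LinearRegression().fit(X, y).coef_)  # inflated, unstable OLS coefficients
        print(Ridge(alpha=1.0).fit(X, y).coef_)    # shrunken, stable ridge coefficients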

  7. Biologically plausible learning in recurrent neural networks reproduces neural dynamics observed during cognitive tasks

    PubMed Central

    Miconi, Thomas

    2017-01-01

    Neural activity during cognitive tasks exhibits complex dynamics that flexibly encode task-relevant variables. Chaotic recurrent networks, which spontaneously generate rich dynamics, have been proposed as a model of cortical computation during cognitive tasks. However, existing methods for training these networks are either biologically implausible, and/or require a continuous, real-time error signal to guide learning. Here we show that a biologically plausible learning rule can train such recurrent networks, guided solely by delayed, phasic rewards at the end of each trial. Networks endowed with this learning rule can successfully learn nontrivial tasks requiring flexible (context-dependent) associations, memory maintenance, nonlinear mixed selectivities, and coordination among multiple outputs. The resulting networks replicate complex dynamics previously observed in animal cortex, such as dynamic encoding of task features and selective integration of sensory inputs. We conclude that recurrent neural networks offer a plausible model of cortical dynamics during both learning and performance of flexible behavior. DOI: http://dx.doi.org/10.7554/eLife.20899.001 PMID:28230528

  8. Biologically plausible learning in recurrent neural networks reproduces neural dynamics observed during cognitive tasks.

    PubMed

    Miconi, Thomas

    2017-02-23

    Neural activity during cognitive tasks exhibits complex dynamics that flexibly encode task-relevant variables. Chaotic recurrent networks, which spontaneously generate rich dynamics, have been proposed as a model of cortical computation during cognitive tasks. However, existing methods for training these networks are either biologically implausible, and/or require a continuous, real-time error signal to guide learning. Here we show that a biologically plausible learning rule can train such recurrent networks, guided solely by delayed, phasic rewards at the end of each trial. Networks endowed with this learning rule can successfully learn nontrivial tasks requiring flexible (context-dependent) associations, memory maintenance, nonlinear mixed selectivities, and coordination among multiple outputs. The resulting networks replicate complex dynamics previously observed in animal cortex, such as dynamic encoding of task features and selective integration of sensory inputs. We conclude that recurrent neural networks offer a plausible model of cortical dynamics during both learning and performance of flexible behavior.
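
    The rule described in this record and the preceding one is perturbation-based with a delayed scalar reward. The following is a schematic node-perturbation sketch in that spirit, trained on a toy "hold a target output" task; it is not the paper's exact rule or parameters.

        import numpy as np

        rng = np.random.default_rng(1)
        N, T, trials, lr = 100, 200, 300, 1e-4
        W = rng.normal(0.0, 1.5 / np.sqrt(N), (N, N))   # gain > 1: chaotic regime
        w_out = rng.normal(0.0, 1.0 / np.sqrt(N), N)
        target, R_bar = 1.0, 0.0

        for trial in range(trials):
            x, elig = np.zeros(N), np.zeros((N, N))
            for t in range(T):
                r = np.tanh(x)
                xi = 0.1 * rng.normal(size=N)           # exploratory perturbation
                x = x + 0.1 * (-x + W @ r + xi)
                elig += np.outer(xi, r)                 # eligibility: perturbation x presynaptic rate
            R = -(w_out @ np.tanh(x) - target) ** 2     # delayed, phasic reward at trial end
            W += lr * (R - R_bar) * elig                # reward-modulated weight update
            R_bar += 0.05 * (R - R_bar)                 # running reward baseline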

  9. Revised Multi-Node Well (MNW2) Package for MODFLOW Ground-Water Flow Model

    USGS Publications Warehouse

    Konikow, Leonard F.; Hornberger, George Z.; Halford, Keith J.; Hanson, Randall T.; Harbaugh, Arlen W.

    2009-01-01

    Wells that are open to multiple aquifers can provide preferential pathways to flow and solute transport that short-circuit normal fluid flowlines. Representing these features in a regional flow model can produce a more realistic and reliable simulation model. This report describes modifications to the Multi-Node Well (MNW) Package of the U.S. Geological Survey (USGS) three-dimensional ground-water flow model (MODFLOW). The modifications build on a previous version and add several new features, processes, and input and output options. The input structure of the revised MNW (MNW2) is more well-centered than the original version of MNW (MNW1) and allows the user to easily define hydraulic characteristics of each multi-node well. MNW2 also allows calculations of additional head changes due to partial penetration effects, flow into a borehole through a seepage face, changes in well discharge related to changes in lift for a given pump, and intraborehole flows with a pump intake located at any specified depth within the well. MNW2 also offers an improved capability to simulate nonvertical wells. A new output option allows selected multi-node wells to be designated as 'observation wells' for which changes in selected variables with time will be written to separate output files to facilitate postprocessing. MNW2 is compatible with the MODFLOW-2000 and MODFLOW-2005 versions of MODFLOW and with the version of MODFLOW that includes the Ground-Water Transport process (MODFLOW-GWT).

  10. A designed screening study with prespecified combinations of factor settings

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anderson-cook, Christine M; Robinson, Timothy J

    2009-01-01

    In many applications, the experimenter has limited options about what factor combinations can be chosen for a designed study. Consider a screening study for a production process involving five input factors whose levels have been previously established. The goal of the study is to understand the effect of each factor on the response, a variable that is expensive to measure and results in destruction of the part. From an inventory of available parts with known factor values, we wish to identify a best collection of factor combinations with which to estimate the factor effects. Though the observational nature of the study cannot establish a causal relationship involving the response and the factors, the study can increase understanding of the underlying process. The study can also help determine where investment should be made to control input factors during production that will maximally influence the response. Because the factor combinations are observational, the chosen model matrix will be nonorthogonal and will not allow independent estimation of factor effects. In this manuscript we borrow principles from design of experiments to suggest an 'optimal' selection of factor combinations. Specifically, we consider precision of model parameter estimates, the issue of replication, and abilities to detect lack of fit and to estimate two-factor interactions. Through an example, we present strategies for selecting a subset of factor combinations that simultaneously balance multiple objectives, conduct a limited sensitivity analysis, and provide practical guidance for implementing our techniques across a variety of quality engineering disciplines.
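
    One way to operationalize an 'optimal' selection of factor combinations is a greedy D-optimal search over the available inventory: repeatedly pick the candidate row that most increases det(X'X) of the model matrix. A minimal sketch follows; the candidate inventory is synthetic, and greedy selection is only one of several reasonable heuristics.

        import numpy as np

        def greedy_d_optimal(candidates, k):
            """Greedily select k candidate rows maximizing det(X'X) of the design."""
            chosen, remaining = [], list(range(len(candidates)))
            for _ in range(k):
                best_i, best_det = None, -np.inf
                for i in remaining:
                    X = candidates[chosen + [i]]
                    d = np.linalg.det(X.T @ X + 1e-9 * np.eye(X.shape[1]))  # small ridge for stability
                    if d > best_det:
                        best_i, best_det = i, d
                chosen.append(best_i)
                remaining.remove(best_i)
            return chosen

        rng = np.random.default_rng(0)
        factors = rng.choice([-1.0, 0.0, 1.0], size=(60, 5))  # inventory of 60 parts, 5 factors
        X = np.column_stack([np.ones(60), factors])           # intercept plus main effects
        print(greedy_d_optimal(X, k=12))                      # indices of the selected parts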

  11. Evaluating variable rate fungicide applications for control of Sclerotinia

    USDA-ARS?s Scientific Manuscript database

    Oklahoma peanut growers continue to try to increase yields and reduce input costs. Perhaps the largest input in a peanut crop is fungicide applications. This is especially true for areas in the state that have high disease pressure from Sclerotinia. On average, a single fungicide application cost...

  12. Spatial Variability of Nitrogen Isotope Ratios of Particulate Material from Northwest Atlantic Continental Shelf Waters

    EPA Science Inventory

    Human encroachment on the coastal zone has led to a rise in the delivery of nitrogen (N) to estuarine and near-shore waters. Potential routes of anthropogenic N inputs include export from estuaries, atmospheric deposition, and dissolved N inputs from groundwater outflow. Stable...

  13. Learning a Novel Pattern through Balanced and Skewed Input

    ERIC Educational Resources Information Center

    McDonough, Kim; Trofimovich, Pavel

    2013-01-01

    This study compared the effectiveness of balanced and skewed input at facilitating the acquisition of the transitive construction in Esperanto, characterized by the accusative suffix "-n" and variable word order (SVO, OVS). Thai university students (N = 98) listened to 24 sentences under skewed (one noun with high token frequency) or…

  14. 49 CFR 178.337-4 - Joints.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... must be considered as essential variables: Number of passes; thickness of plate; heat input per pass... not be used. The number of passes, thickness of plate, and heat input per pass may not vary more than... machine heat processes, provided such surfaces are remelted in the subsequent welding process. Where there...

  15. 49 CFR 178.337-4 - Joints.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... must be considered as essential variables: Number of passes; thickness of plate; heat input per pass... not be used. The number of passes, thickness of plate, and heat input per pass may not vary more than... machine heat processes, provided such surfaces are remelted in the subsequent welding process. Where there...

  16. 49 CFR 178.337-4 - Joints.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... must be considered as essential variables: Number of passes; thickness of plate; heat input per pass... not be used. The number of passes, thickness of plate, and heat input per pass may not vary more than... machine heat processes, provided such surfaces are remelted in the subsequent welding process. Where there...

  17. 49 CFR 178.337-4 - Joints.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... must be considered as essential variables: Number of passes; thickness of plate; heat input per pass... not be used. The number of passes, thickness of plate, and heat input per pass may not vary more than... machine heat processes, provided such surfaces are remelted in the subsequent welding process. Where there...

  18. Linking annual N2O emission in organic soils to mineral nitrogen input as estimated by heterotrophic respiration and soil C/N ratio.

    PubMed

    Mu, Zhijian; Huang, Aiying; Ni, Jiupai; Xie, Deti

    2014-01-01

    Organic soils are an important source of N2O, but global estimates of these fluxes remain uncertain because measurements are sparse. We tested the hypothesis that N2O fluxes can be predicted from estimates of mineral nitrogen input, calculated from readily available measurements of CO2 flux and soil C/N ratio. From studies of organic soils throughout the world, we compiled a data set of annual CO2 and N2O fluxes which were measured concurrently. The input of soil mineral nitrogen in these studies was estimated from applied fertilizer nitrogen and organic nitrogen mineralization; the latter was calculated by dividing the rate of soil heterotrophic respiration by the soil C/N ratio. This index of mineral nitrogen input explained up to 69% of the overall variability of N2O fluxes, whereas CO2 flux or soil C/N ratio alone explained only 49% and 36% of the variability, respectively. Including water table level alongside mineral nitrogen input further improved the model, raising the explained proportion of variability in N2O flux to 75%. Unlike grassland or cropland soils, forest soils were evidently nitrogen-limited, so water table level had no significant effect on N2O flux. Our proposed approach, which uses the product of soil-derived CO2 flux and the inverse of the soil C/N ratio as a proxy for nitrogen mineralization, shows promise for estimating regional or global N2O fluxes from organic soils, although some further enhancements may be warranted.
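
    The proxy is simple enough to state in code: mineral nitrogen input is approximated as fertilizer N plus heterotrophic respiration divided by the soil C/N ratio, and N2O flux is then regressed on this proxy together with water table level. The sketch below uses synthetic numbers purely to show the arithmetic; all variable names and magnitudes are hypothetical.

        import numpy as np
        from sklearn.linear_model import LinearRegression

        rng = np.random.default_rng(7)
        n = 80
        co2_c = rng.uniform(100, 900, n)   # heterotrophic respiration (g C m-2 yr-1)
        cn = rng.uniform(12, 30, n)        # soil C/N ratio
        fert_n = rng.uniform(0, 15, n)     # fertilizer N input (g N m-2 yr-1)
        wt = rng.uniform(-60, 0, n)        # water table level (cm)

        n_min = fert_n + co2_c / cn        # mineral N input proxy
        n2o = 0.05 * n_min + 0.02 * wt + rng.normal(0, 0.5, n)  # synthetic response

        X = np.column_stack([n_min, wt])
        print(LinearRegression().fit(X, n2o).score(X, n2o))     # R^2 of the two-term model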

  19. Analysis of Sediment Transport for Rivers in South Korea based on Data Mining technique

    NASA Astrophysics Data System (ADS)

    Jang, Eun-kyung; Ji, Un; Yeo, Woonkwang

    2017-04-01

    The purpose of this study is to estimate sediment discharge for rivers in South Korea using data mining. The Model Tree technique was selected because, among data mining methods, it is the most suitable for explicitly analyzing the relationship between input and output variables in large and diverse databases. To derive a sediment discharge equation with the Model Tree, the dimensionless variables used in the Engelund and Hansen, Ackers and White, Brownlie, and van Rijn equations were adopted as analytical conditions. In total, 14 analytical conditions were set, covering dimensional variables as well as combinations of dimensionless and dimensional variables chosen according to the relationship between flow and sediment transport. For each case, the results were evaluated by the mean discrepancy ratio, root mean square error, mean absolute percent error and correlation coefficient. The best fit was obtained using five dimensional variables: velocity, depth, slope, width and median grain diameter. The closest approximation to this best fit was estimated from the depth, slope, width, median grain size of the bed material and dimensionless tractive force, excluding the slope among the single-variable conditions. In addition, when the three most appropriate Model Trees were compared with the Ackers and White equation, the best-performing of the existing equations, both the mean discrepancy ratio and the correlation coefficient of the Model Trees were improved relative to the Ackers and White equation.
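
    scikit-learn offers no M5-style model tree with linear models in the leaves, so a plain regression tree serves below as a stand-in to show the workflow: fit on the five dimensional variables the abstract reports as best, then score with the discrepancy ratio. Data, tree depth, and the synthetic rating function are hypothetical.

        import numpy as np
        from sklearn.tree import DecisionTreeRegressor

        rng = np.random.default_rng(3)
        n = 500
        # Columns: velocity, depth, slope, width, median grain diameter
        X = np.column_stack([rng.uniform(0.2, 3, n), rng.uniform(0.1, 5, n),
                             rng.uniform(1e-5, 1e-2, n), rng.uniform(5, 200, n),
                             rng.uniform(0.1, 10, n)])
        qs = 1e-3 * X[:, 0]**3 * X[:, 2]**0.5 * np.exp(rng.normal(0, 0.3, n))  # synthetic discharge

        tree = DecisionTreeRegressor(max_depth=5).fit(X, np.log(qs))  # fit in log space
        pred = np.exp(tree.predict(X))
        disc = pred / qs                                   # discrepancy ratio per observation
        print(disc.mean(), np.corrcoef(pred, qs)[0, 1])    # mean discrepancy ratio, correlation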

  20. Relating Solar Resource Variability to Cloud Type

    NASA Astrophysics Data System (ADS)

    Hinkelman, L. M.; Sengupta, M.

    2012-12-01

    Power production from renewable energy (RE) resources is rapidly increasing. Generation of renewable energy is quite variable since the solar and wind resources that form the inputs are, themselves, inherently variable. There is thus a need to understand the impact of renewable generation on the transmission grid. Such studies require estimates of high temporal and spatial resolution power output under various scenarios, which can be created from corresponding solar resource data. Satellite-based solar resource estimates are the best source of long-term solar irradiance data for the typically large areas covered by transmission studies. As satellite-based resource datasets are generally available at lower temporal and spatial resolution than required, there is, in turn, a need to downscale these resource data. Downscaling in both space and time requires information about solar irradiance variability, which is primarily a function of cloud types and properties. In this study, we analyze the relationship between solar resource variability and satellite-based cloud properties. One-minute resolution surface irradiance data were obtained from a number of stations operated by the National Oceanic and Atmospheric Administration (NOAA) under the Surface Radiation (SURFRAD) and Integrated Surface Irradiance Study (ISIS) networks as well as from NREL's Solar Radiation Research Laboratory (SRRL) in Golden, Colorado. Individual sites were selected so that a range of meteorological conditions would be represented. Cloud information at a nominal 4 km resolution and half-hour intervals was derived from NOAA's Geostationary Operational Environmental Satellite (GOES) series of satellites. Cloud class information from the GOES data set was then used to select and composite irradiance data from the measurement sites. The irradiance variability for each cloud classification was characterized using general statistics of the fluxes themselves and their variability in time, as represented by ramps computed for time scales from 10 s to 0.5 h. The statistical relationships derived using this method will be presented, comparing and contrasting the statistics computed for the different cloud types. The implications for downscaling irradiances from satellites or forecast models will also be discussed.
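
    The ramp statistics described here, irradiance differences at lags from tens of seconds to half an hour composited by cloud class, reduce to a short computation. A hedged sketch with synthetic one-minute data; the real SURFRAD/GOES ingest and cloud classification are omitted.

        import numpy as np

        def ramp_stats(ghi, dt_s=60, scales_s=(60, 300, 900, 1800)):
            """Mean |ramp| and ramp std of a 1-D irradiance series at several time scales."""
            out = {}
            for s in scales_s:
                lag = s // dt_s
                ramps = ghi[lag:] - ghi[:-lag]
                out[s] = (np.mean(np.abs(ramps)), np.std(ramps))
            return out

        rng = np.random.default_rng(5)
        ghi = 600 + np.cumsum(rng.normal(0, 20, 1440))             # synthetic 1-min GHI for one day
        composites = {"cumulus": ghi[:720], "stratus": ghi[720:]}  # stand-in cloud-class composites
        for cls, series in composites.items():
            print(cls, ramp_stats(series))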
