Analytic uncertainty and sensitivity analysis of models with input correlations
NASA Astrophysics Data System (ADS)
Zhu, Yueying; Wang, Qiuping A.; Li, Wei; Cai, Xu
2018-03-01
Probabilistic uncertainty analysis is a common means of evaluating mathematical models. In mathematical modeling, the uncertainty in input variables is specified through distribution laws. Its contribution to the uncertainty in the model response is usually analyzed by assuming that the input variables are independent of one another. In practical applications, however, the inputs are often correlated. In the present paper, an analytic method is developed for the uncertainty and sensitivity analysis of models in the presence of input correlations. With this method, it is straightforward to identify how the independence and the correlations of the input variables each contribute to the model response, which allows one to decide whether or not the input correlations should be considered in practice. Numerical examples demonstrate the effectiveness and validity of the analytic method for general models. A practical application to the uncertainty and sensitivity analysis of a deterministic HIV model is also presented.
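To make the role of input correlations concrete, here is a minimal numpy sketch (the coefficients and covariance matrix are invented for illustration, not taken from the paper): for a linear model y = aᵀx with input covariance Σ, the output variance aᵀΣa splits into a part from the input variances alone and a part contributed by the correlations.

```python
import numpy as np

# Illustrative linear model y = a @ x with correlated inputs x ~ (0, Sigma).
# Var(y) = a' Sigma a; the diagonal of Sigma gives the "independent" part,
# the off-diagonal terms give the contribution of the input correlations.
a = np.array([1.0, 2.0, -1.0])
sigma = np.array([[1.0, 0.5, 0.0],
                  [0.5, 2.0, 0.3],
                  [0.0, 0.3, 1.5]])

total_var = a @ sigma @ a
independent_part = a @ np.diag(np.diag(sigma)) @ a
correlation_part = total_var - independent_part
```

If `correlation_part` is small relative to `total_var`, treating the inputs as independent is a reasonable simplification; otherwise the correlations should be kept, which is the kind of decision the paper's analytic method supports.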
NASA Astrophysics Data System (ADS)
Hao, Wenrui; Lu, Zhenzhou; Li, Luyi
2013-05-01
In order to explore the contributions of correlated input variables to the variance of the output, a novel interpretation framework of importance measure indices is proposed for models with correlated inputs; it includes indices of the total correlated contribution and the total uncorrelated contribution. The proposed indices accurately describe the contributions of a correlated input to the variance of the output, and they can be viewed as a complement to, and correction of, the interpretation of correlated-input contributions presented in "Estimation of global sensitivity indices for models with dependent variables, Computer Physics Communications, 183 (2012) 937-946". Both indices contain the independent contribution of an individual input. Taking the general quadratic polynomial as an illustration, the total correlated contribution and the independent contribution of an individual input are derived analytically, from which the components of both contributions of a correlated input, and their origins, can be clarified without ambiguity. In the special case that no square term is included in the quadratic polynomial model, the total correlated contribution of an input can be further decomposed into the variance contribution related to the correlation of that input with the other inputs and the independent contribution of the input itself, while the total uncorrelated contribution can be further decomposed into an independent part from the interaction between the input and the others and an independent part from the input itself. Numerical examples demonstrate that the derived analytical expressions of the variance-based importance measure are correct, and that clarifying the contributions of correlated inputs by analytical derivation is essential for extending the theory and solutions for uncorrelated inputs to the correlated case.
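The split of an input's variance contribution into correlated and uncorrelated parts can be sketched by Monte Carlo for a toy linear model (a hedged illustration of the general idea, not the paper's quadratic-polynomial derivation): the uncorrelated contribution of x1 is carried by the part of x1 orthogonal to the other inputs.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
# Correlated Gaussian inputs (illustrative covariance, not from the paper).
cov = np.array([[1.0, 0.6], [0.6, 1.0]])
x = rng.multivariate_normal([0.0, 0.0], cov, size=n)
y = 2.0 * x[:, 0] + 1.0 * x[:, 1]          # toy linear model

# Total variance contribution of x1 (correlated + uncorrelated):
# variance explained by regressing y on x1 alone.
beta_total = np.cov(x[:, 0], y)[0, 1] / np.var(x[:, 0])
v_total_x1 = beta_total**2 * np.var(x[:, 0])

# Uncorrelated contribution: use the residual of x1 after regressing on x2.
beta_12 = np.cov(x[:, 0], x[:, 1])[0, 1] / np.var(x[:, 1])
r1 = x[:, 0] - beta_12 * x[:, 1]
v_uncorr_x1 = (np.cov(r1, y)[0, 1] / np.var(r1))**2 * np.var(r1)

v_corr_x1 = v_total_x1 - v_uncorr_x1
```

For this toy model the analytic values are v_total ≈ 6.76, uncorrelated part ≈ 2.56, so the correlation with x2 accounts for the remaining ≈ 4.2 of variance attributed to x1.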
Sensitivity analysis of a sound absorption model with correlated inputs
NASA Astrophysics Data System (ADS)
Chai, W.; Christen, J.-L.; Zine, A.-M.; Ichchou, M.
2017-04-01
Sound absorption in porous media is a complex phenomenon that is usually addressed with homogenized models depending on macroscopic parameters. Since these parameters emerge from the structure at the microscopic scale, they may be correlated. This paper deals with sensitivity analysis methods for a sound absorption model with correlated inputs. Specifically, the Johnson-Champoux-Allard (JCA) model is chosen as the objective model, with correlation effects generated by a secondary micro-macro semi-empirical model. To deal with this case, a relatively new sensitivity analysis method, the Fourier Amplitude Sensitivity Test with Correlation design (FASTC), based on Iman's transform, is applied. This method requires a priori information such as the variables' marginal distribution functions and their correlation matrix. The results are compared with the Correlation Ratio Method (CRM) for reference and validation. The distributions of the macroscopic variables arising from the microstructure, as well as their correlation matrix, are studied. The tests show that the correlations have a very important impact on the results of the sensitivity analysis. The influence of the correlation strength among input variables on the sensitivity analysis is also assessed.
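Iman's transform, on which FASTC relies, imposes a target rank correlation on samples with arbitrary marginals by reordering. A minimal numpy sketch of that reordering idea (the marginals and target correlation are invented for illustration, not the JCA parameters):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
target_corr = np.array([[1.0, 0.7], [0.7, 1.0]])

# Independent samples from arbitrary marginals (illustrative choices).
samples = np.column_stack([
    rng.lognormal(0.0, 0.5, n),     # e.g. a tortuosity-like parameter
    rng.uniform(0.0, 1.0, n),       # e.g. a porosity-like parameter
])

# Iman-style reordering: draw correlated normal scores and rearrange
# each marginal to match their rank order.
scores = rng.multivariate_normal([0.0, 0.0], target_corr, size=n)
ranks = scores.argsort(axis=0).argsort(axis=0)
reordered = np.column_stack([
    np.sort(samples[:, j])[ranks[:, j]] for j in range(2)
])

# Spearman correlation of the reordered sample = Pearson correlation of ranks.
rho = np.corrcoef(ranks[:, 0], ranks[:, 1])[0, 1]
```

The marginals are preserved exactly, while the induced rank correlation lands near (6/π)·arcsin(0.7/2) ≈ 0.68, the rank correlation of the underlying normal scores.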
Decina, Stephen M; Templer, Pamela H; Hutyra, Lucy R; Gately, Conor K; Rao, Preeti
2017-12-31
Atmospheric deposition of nitrogen (N) is a major input of N to the biosphere and is elevated beyond preindustrial levels throughout many ecosystems. Deposition monitoring networks in the United States generally avoid urban areas in order to capture regional patterns of N deposition, and studies measuring N deposition in cities usually include only one or two urban sites in an urban-rural comparison or as an anchor along an urban-to-rural gradient. Describing patterns and drivers of atmospheric N inputs is crucial for understanding the effects of N deposition; however, little is known about the variability and drivers of atmospheric N inputs or their effects on soil biogeochemistry within urban ecosystems. We measured rates of canopy throughfall N as a measure of atmospheric N inputs, as well as soil net N mineralization and nitrification, soil solution N, and soil respiration at 15 sites across the greater Boston, Massachusetts area. Rates of throughfall N are 8.70 ± 0.68 kg N ha⁻¹ yr⁻¹, vary 3.5-fold across sites, and are positively correlated with rates of local vehicle N emissions. Ammonium (NH₄⁺) composes 69.9 ± 2.2% of inorganic throughfall N inputs and is highest in late spring, suggesting a contribution from local fertilizer inputs. Soil solution NO₃⁻ is positively correlated with throughfall NO₃⁻ inputs. In contrast, soil solution NH₄⁺, net N mineralization, nitrification, and soil respiration are not correlated with rates of throughfall N inputs. Rather, these processes are correlated with soil properties such as soil organic matter. Our results demonstrate high variability in rates of urban throughfall N inputs, correlation of throughfall N inputs with local vehicle N emissions, and a decoupling of urban soil biogeochemistry and throughfall N inputs. Copyright © 2017 Elsevier B.V. All rights reserved.
Latin Hypercube Sampling (LHS) UNIX Library/Standalone
DOE Office of Scientific and Technical Information (OSTI.GOV)
2004-05-13
The LHS UNIX Library/Standalone software provides the capability to draw random samples from over 30 distribution types. It performs the sampling by a stratified sampling method called Latin Hypercube Sampling (LHS). Multiple distributions can be sampled simultaneously, with user-specified correlations amongst the input distributions, so LHS UNIX Library/Standalone provides a way to generate multi-variate samples. The LHS samples can be generated either through a callable library (e.g., from within the DAKOTA software framework) or as a standalone capability. LHS is a constrained Monte Carlo sampling scheme. In LHS, the range of each variable is divided into non-overlapping intervals on the basis of equal probability, and a sample is selected at random with respect to the probability density in each interval. If multiple variables are sampled simultaneously, the values obtained for each are paired in a random manner with the n values of the other variables. In some cases, the pairing is restricted to obtain specified correlations amongst the input variables. Many simulation codes have input parameters that are uncertain and can be specified by a distribution. To perform uncertainty and sensitivity analysis, random values are drawn from the input parameter distributions, and the simulation is run with these values to obtain output values. If this is done repeatedly, with many input samples drawn, one can build up a distribution of the output as well as examine correlations between input and output variables.
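The stratification-and-pairing scheme described above can be sketched in a few lines (a simplified illustration, not the LHS library's implementation; correlation control and the 30+ distribution types are omitted):

```python
import numpy as np

def latin_hypercube(n_samples, n_vars, rng):
    """Basic LHS on [0,1): one sample per equal-probability stratum per
    variable, with the strata paired across variables by random permutation."""
    u = (rng.random((n_samples, n_vars)) + np.arange(n_samples)[:, None]) / n_samples
    for j in range(n_vars):
        u[:, j] = rng.permutation(u[:, j])   # break the column-wise ordering
    return u

rng = np.random.default_rng(42)
u = latin_hypercube(10, 3, rng)
# Each column has exactly one point in each interval [k/10, (k+1)/10).
```

Mapping each column of `u` through the inverse CDF of the desired marginal yields LHS samples from arbitrary distributions, which is how such samples then feed an uncertainty or sensitivity study.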
He, Yan-Lin; Xu, Yuan; Geng, Zhi-Qiang; Zhu, Qun-Xiong
2016-03-01
In this paper, a hybrid robust model based on an improved functional link neural network integrated with partial least squares (IFLNN-PLS) is proposed. Firstly, an improved functional link neural network with a small norm of expanded weights and high input-output correlation (SNEWHIOC-FLNN) is proposed to enhance the generalization performance of the FLNN. Unlike the traditional FLNN, the expanded variables of the original inputs are not used directly as the inputs in the proposed SNEWHIOC-FLNN model. Instead, the original inputs are attached to expanded weights of small norm. As a result, the correlation coefficient between some of the expanded variables and the outputs is enhanced; the larger the correlation coefficient, the more relevant the expanded variable tends to be. The expanded variables with larger correlation coefficients are then selected as the inputs to improve the performance of the traditional FLNN. To test the proposed SNEWHIOC-FLNN model, three UCI (University of California, Irvine) regression datasets named Housing, Concrete Compressive Strength (CCS), and Yacht Hydro Dynamics (YHD) were selected. A hybrid model based on the improved FLNN integrated with partial least squares (IFLNN-PLS) was then built. In the IFLNN-PLS model, the connection weights are calculated using the partial least squares method rather than the error back-propagation algorithm. Lastly, IFLNN-PLS was developed as an intelligent measurement model for accurately predicting the key variables in the Purified Terephthalic Acid (PTA) process and the High Density Polyethylene (HDPE) process. Simulation results illustrate that IFLNN-PLS can significantly improve the prediction performance. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.
Data analytics using canonical correlation analysis and Monte Carlo simulation
NASA Astrophysics Data System (ADS)
Rickman, Jeffrey M.; Wang, Yan; Rollett, Anthony D.; Harmer, Martin P.; Compson, Charles
2017-07-01
A canonical correlation analysis is a generic parametric model used in the statistical analysis of data involving interrelated or interdependent input and output variables. It is especially useful in data analytics as a dimensional reduction strategy that simplifies a complex, multidimensional parameter space by identifying a relatively small number of variable combinations that are maximally correlated. One shortcoming of canonical correlation analysis, however, is that it provides only linear combinations of variables that maximize these correlations. With this in mind, we describe here a versatile, Monte Carlo-based methodology that is useful in identifying non-linear functions of the variables that lead to strong input/output correlations. We demonstrate that our approach leads to a substantial enhancement of correlations, as illustrated by two experimental applications of substantial interest to the materials science community, namely: (1) determining the interdependence of processing and microstructural variables associated with doped polycrystalline aluminas, and (2) relating microstructural descriptors to the electrical and optoelectronic properties of thin-film solar cells based on CuInSe2 absorbers. Finally, we describe how this approach facilitates experimental planning and process control.
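The linear CCA that the Monte Carlo extension starts from can be computed directly with a QR/SVD factorization. A generic numpy sketch on synthetic data sharing one latent factor (not the authors' code or data):

```python
import numpy as np

def cca_first_correlation(x, y):
    """First canonical correlation between two column-centered data blocks:
    the leading singular value of Qx' Qy, with Qx, Qy orthonormal bases."""
    x = x - x.mean(axis=0)
    y = y - y.mean(axis=0)
    qx, _ = np.linalg.qr(x)
    qy, _ = np.linalg.qr(y)
    return np.linalg.svd(qx.T @ qy, compute_uv=False)[0]

rng = np.random.default_rng(0)
n = 5000
z = rng.normal(size=n)                       # shared latent factor
x = np.column_stack([z + 0.1 * rng.normal(size=n), rng.normal(size=n)])
y = np.column_stack([z + 0.1 * rng.normal(size=n), rng.normal(size=n)])
r1 = cca_first_correlation(x, y)             # near 1: strong shared factor
```

Because the shared factor enters each block linearly, plain CCA already finds it; the paper's point is that a purely nonlinear dependence would be missed by this construction, motivating the Monte Carlo search over nonlinear variable transformations.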
Harmonize input selection for sediment transport prediction
NASA Astrophysics Data System (ADS)
Afan, Haitham Abdulmohsin; Keshtegar, Behrooz; Mohtar, Wan Hanna Melini Wan; El-Shafie, Ahmed
2017-09-01
In this paper, three modeling approaches, a Neural Network (NN), the Response Surface Method (RSM), and a response surface method based on Global Harmony Search (GHS), are applied to predict the daily time series of suspended sediment load. Generally, the input variables for forecasting suspended sediment load are selected manually, based on their maximum correlations, in modeling approaches based on the NN and RSM. Here the RSM is improved to select the input variables using the error terms of the training data based on the GHS, yielding the response surface method with global harmony search (RSM-GHS) modeling method. A second-order polynomial function with cross terms is applied to calibrate the time series of suspended sediment load with three, four and five input variables in the proposed RSM-GHS. The linear, square and cross terms of twenty input variables, comprising antecedent values of suspended sediment load and water discharge, are investigated to achieve the best predictions of the RSM based on the GHS method. The performances of the NN, RSM and proposed RSM-GHS, in terms of both accuracy and simplicity, are compared through several predictive and error statistics. The results illustrate that the proposed RSM-GHS is as uncomplicated as the RSM but performs better, with fewer errors and better correlation (R = 0.95, MAE = 18.09 ton/day, RMSE = 25.16 ton/day) compared to the ANN (R = 0.91, MAE = 20.17 ton/day, RMSE = 33.09 ton/day) and RSM (R = 0.91, MAE = 20.06 ton/day, RMSE = 31.92 ton/day) for all types of input variables.
Bottom-up and Top-down Input Augment the Variability of Cortical Neurons
Nassi, Jonathan J.; Kreiman, Gabriel; Born, Richard T.
2016-01-01
Neurons in the cerebral cortex respond inconsistently to a repeated sensory stimulus, yet they underlie our stable sensory experiences. Although the nature of this variability is unknown, its ubiquity has encouraged the general view that each cell produces random spike patterns that noisily represent its response rate. In contrast, here we show that reversibly inactivating distant sources of either bottom-up or top-down input to cortical visual areas in the alert primate reduces both the spike train irregularity and the trial-to-trial variability of single neurons. A simple model in which a fraction of the pre-synaptic input is silenced can reproduce this reduction in variability, provided that there exist temporal correlations primarily within, but not between, excitatory and inhibitory input pools. A large component of the variability of cortical neurons may therefore arise from synchronous input produced by signals arriving from multiple sources. PMID:27427459
Flight dynamics analysis and simulation of heavy lift airships, volume 4. User's guide: Appendices
NASA Technical Reports Server (NTRS)
Emmen, R. D.; Tischler, M. B.
1982-01-01
This table contains all of the input variables to the three programs. The variables are arranged according to the namelist groups in which they appear in the data files. The program name, subroutine name, definition and, where appropriate, a default input value and any restrictions are listed with each variable. The default input values are user supplied, not generated by the computer; these values remove a specific effect from the calculations, as explained in the table. The phrase "not used" indicates that a variable is not used in the calculations and is listed for identification purposes only. The engineering symbol, where it exists, is given to assist the user in correlating these inputs with the discussion in the Technical Manual.
Dynamic modal estimation using instrumental variables
NASA Technical Reports Server (NTRS)
Salzwedel, H.
1980-01-01
A method to determine the modes of dynamical systems is described. The inputs and outputs of a system are Fourier transformed and averaged to reduce the error level. An instrumental variable method that estimates modal parameters from multiple correlations between responses of single input, multiple output systems is applied to estimate aircraft, spacecraft, and off-shore platform modal parameters.
Negro, Francesco; Farina, Dario
2017-01-01
We investigated whether correlation measures derived from pairs of motor unit (MU) spike trains are reliable indicators of the degree of common synaptic input to motor neurons. Several 50-s isometric contractions of the biceps brachii muscle were performed at different target forces ranging from 10 to 30% of the maximal voluntary contraction relying on force feedback. Forty-eight pairs of MUs were examined at various force levels. Motor unit synchrony was assessed by cross-correlation analysis using three indexes: the output correlation as the peak of the cross-histogram (ρ) and the number of synchronous spikes per second (CIS) and per trigger (E). Individual analysis of MU pairs revealed that ρ, CIS, and E were most often positively associated with discharge rate (87, 85, and 76% of the MU pairs, respectively) and negatively with interspike interval variability (69, 65, and 62% of the MU pairs, respectively). Moreover, the behavior of synchronization indexes with discharge rate (and interspike interval variability) varied greatly among the MU pairs. These results were consistent with theoretical predictions, which showed that the output correlation between pairs of spike trains depends on the statistics of the input current and motor neuron intrinsic properties that differ for different motor neuron pairs. In conclusion, the synchronization between MU firing trains is necessarily caused by the (functional) common input to motor neurons, but it is not possible to infer the degree of shared common input to a pair of motor neurons on the basis of correlation measures of their output spike trains. NEW & NOTEWORTHY The strength of correlation between output spike trains is only poorly associated with the degree of common input to the population of motor neurons. 
PMID:28100652
ERIC Educational Resources Information Center
Lynn, Richard; Vanhanen, Tatu
2012-01-01
This paper summarizes the results of 244 correlates of national IQs that have been published from 2002 through 2012 and include educational attainment, cognitive output, educational input, per capita income, economic growth, other economic variables, crime, political institutions, health, fertility, sociological variables, and geographic and…
Probabilistic dose-response modeling: case study using dichloromethane PBPK model results.
Marino, Dale J; Starr, Thomas B
2007-12-01
A revised assessment of dichloromethane (DCM) has recently been reported that examines the influence of human genetic polymorphisms on cancer risks using deterministic PBPK and dose-response modeling in the mouse combined with probabilistic PBPK modeling in humans. This assessment utilized Bayesian techniques to optimize kinetic variables in mice and humans with mean values from posterior distributions used in the deterministic modeling in the mouse. To supplement this research, a case study was undertaken to examine the potential impact of probabilistic rather than deterministic PBPK and dose-response modeling in mice on subsequent unit risk factor (URF) determinations. Four separate PBPK cases were examined based on the exposure regimen of the NTP DCM bioassay. These were (a) Same Mouse (single draw of all PBPK inputs for both treatment groups); (b) Correlated BW-Same Inputs (single draw of all PBPK inputs for both treatment groups except for bodyweights (BWs), which were entered as correlated variables); (c) Correlated BW-Different Inputs (separate draws of all PBPK inputs for both treatment groups except that BWs were entered as correlated variables); and (d) Different Mouse (separate draws of all PBPK inputs for both treatment groups). Monte Carlo PBPK inputs reflect posterior distributions from Bayesian calibration in the mouse that had been previously reported. A minimum of 12,500 PBPK iterations were undertaken, in which dose metrics, i.e., mg DCM metabolized by the GST pathway/L tissue/day for lung and liver were determined. For dose-response modeling, these metrics were combined with NTP tumor incidence data that were randomly selected from binomial distributions. Resultant potency factors (0.1/ED(10)) were coupled with probabilistic PBPK modeling in humans that incorporated genetic polymorphisms to derive URFs. 
Results show that there was relatively little difference, i.e., <10%, in central tendency and upper-percentile URFs, regardless of the case evaluated. Independent draws of PBPK inputs resulted in slightly higher URFs. Results were also comparable to corresponding values from the previously reported deterministic mouse PBPK and dose-response modeling approach that used LED(10)s to derive potency factors. This finding indicates that the adjustment from ED(10) to LED(10) in the deterministic approach for DCM compensated for the variability resulting from probabilistic PBPK and dose-response modeling in the mouse. Finally, results show a similar degree of variability in DCM risk estimates from a number of different sources, including the current effort, even though these estimates were developed using very different techniques. Given the variety of approaches involved, 95th percentile-to-mean risk estimate ratios of 2.1-4.1 represent reasonable bounds on variability estimates regarding probabilistic assessments of DCM.
Optimal allocation of testing resources for statistical simulations
NASA Astrophysics Data System (ADS)
Quintana, Carolina; Millwater, Harry R.; Singh, Gulshan; Golden, Patrick
2015-07-01
Statistical estimates from simulation involve uncertainty caused by the variability in the input random variables due to limited data. Allocating resources to obtain more experimental data on the input variables, so as to better characterize their probability distributions, can reduce the variance of statistical estimates. The proposed methodology determines the optimal number of additional experiments required to minimize the variance of the output moments given single or multiple constraints. The method uses the multivariate t-distribution and the Wishart distribution to generate realizations of the population mean and covariance of the input variables, respectively, given an amount of available data. The method handles both independent and correlated random variables. A particle swarm method is used for the optimization. The optimal number of additional experiments per variable depends on the number and variance of the initial data, the influence of the variable on the output function, and the cost of each additional experiment. The methodology is demonstrated using a fretting fatigue example.
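The resampling step, generating plausible population means and covariances from limited data, can be sketched with numpy (a hedged illustration of the Wishart / multivariate-t idea with invented data; not the authors' implementation, which couples these draws to an optimizer):

```python
import numpy as np

def sample_wishart(df, scale, rng):
    """One Wishart(df, scale) draw via the Bartlett decomposition."""
    d = scale.shape[0]
    l = np.linalg.cholesky(scale)
    a = np.zeros((d, d))
    a[np.diag_indices(d)] = np.sqrt(rng.chisquare(df - np.arange(d)))
    a[np.tril_indices(d, -1)] = rng.normal(size=d * (d - 1) // 2)
    la = l @ a
    return la @ la.T

rng = np.random.default_rng(7)
data = rng.normal(loc=5.0, scale=2.0, size=(15, 2))   # limited initial data
n, d = data.shape
xbar, s = data.mean(axis=0), np.cov(data, rowvar=False)

# Plausible population covariances from a Wishart centered on the sample
# covariance, and population means from a multivariate t around the sample
# mean (a Gaussian draw scaled by an inverse-chi-square root).
mean_draws = []
for _ in range(1000):
    cov_draw = sample_wishart(n - 1, s / (n - 1), rng)
    g = rng.multivariate_normal(np.zeros(d), cov_draw / n)
    mean_draws.append(xbar + g / np.sqrt(rng.chisquare(n - 1) / (n - 1)))
mean_draws = np.asarray(mean_draws)
```

The spread of `mean_draws` quantifies how poorly the population mean is pinned down by 15 observations; adding hypothetical observations per variable and re-running such draws is how the variance reduction per extra experiment can be scored.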
Applications of information theory, genetic algorithms, and neural models to predict oil flow
NASA Astrophysics Data System (ADS)
Ludwig, Oswaldo; Nunes, Urbano; Araújo, Rui; Schnitman, Leizer; Lepikson, Herman Augusto
2009-07-01
This work introduces a new information-theoretic methodology for choosing variables and their time lags in a prediction setting, particularly when neural networks are used in non-linear modeling. The first contribution of this work is the Cross Entropy Function (XEF), proposed to select input variables and their lags in order to compose the input vector of black-box prediction models. The proposed XEF method is more appropriate than the usually applied Cross Correlation Function (XCF) when the relationship among the input and output signals comes from a non-linear dynamic system. The second contribution is a method that minimizes the Joint Conditional Entropy (JCE) between the input and output variables by means of a Genetic Algorithm (GA). The aim is to take into account the dependence among the input variables when selecting the most appropriate set of inputs for a prediction problem. In short, these methods can be used to assist in selecting input training data that carry the information necessary to predict the target data. The proposed methods are applied to a petroleum engineering problem: predicting oil production. Experimental results obtained with a real-world dataset demonstrate the feasibility and effectiveness of the method.
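The advantage of entropy-based selection over the cross-correlation function appears already in a toy case: a quadratic dependence has near-zero linear correlation but large mutual information. A histogram-based sketch (a generic estimator for illustration, not the paper's XEF/JCE machinery):

```python
import numpy as np

def mutual_information(x, y, bins=16):
    """Histogram estimate of I(X;Y) in nats (illustrative; biased for small n)."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = pxy / pxy.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0
    return float((pxy[nz] * np.log(pxy[nz] / np.outer(px, py)[nz])).sum())

rng = np.random.default_rng(3)
x = rng.uniform(-1, 1, 50_000)
y = x**2 + 0.05 * rng.normal(size=x.size)   # nonlinear, nearly uncorrelated

corr = np.corrcoef(x, y)[0, 1]              # near zero: XCF-style screening fails
mi = mutual_information(x, y)               # clearly positive: dependence detected
```

A correlation-ranked input list would discard x here, while an information-theoretic criterion keeps it, which is the failure mode of the XCF that motivates the XEF.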
SCI model structure determination program (OSR) user's guide. [optimal subset regression
NASA Technical Reports Server (NTRS)
1979-01-01
The computer program, OSR (Optimal Subset Regression) which estimates models for rotorcraft body and rotor force and moment coefficients is described. The technique used is based on the subset regression algorithm. Given time histories of aerodynamic coefficients, aerodynamic variables, and control inputs, the program computes correlation between various time histories. The model structure determination is based on these correlations. Inputs and outputs of the program are given.
Spike Triggered Covariance in Strongly Correlated Gaussian Stimuli
Aljadeff, Johnatan; Segev, Ronen; Berry, Michael J.; Sharpee, Tatyana O.
2013-01-01
Many biological systems perform computations on inputs that have very large dimensionality. Determining the relevant input combinations for a particular computation is often key to understanding its function. A common way to find the relevant input dimensions is to examine the difference in variance between the input distribution and the distribution of inputs associated with certain outputs. In systems neuroscience, the corresponding method is known as spike-triggered covariance (STC). This method has been highly successful in characterizing relevant input dimensions for neurons in a variety of sensory systems. So far, most studies used the STC method with weakly correlated Gaussian inputs. However, it is also important to use this method with inputs that have long range correlations typical of the natural sensory environment. In such cases, the stimulus covariance matrix has one (or more) outstanding eigenvalues that cannot be easily equalized because of sampling variability. Such outstanding modes interfere with analyses of statistical significance of candidate input dimensions that modulate neuronal outputs. In many cases, these modes obscure the significant dimensions. We show that the sensitivity of the STC method in the regime of strongly correlated inputs can be improved by an order of magnitude or more. This can be done by evaluating the significance of dimensions in the subspace orthogonal to the outstanding mode(s). Analyzing the responses of retinal ganglion cells probed with Gaussian noise, we find that taking into account outstanding modes is crucial for recovering relevant input dimensions for these neurons. PMID:24039563
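The basic STC computation, before the correction for outstanding modes that the paper proposes, can be sketched with white Gaussian inputs and a toy variance-sensitive neuron (all parameters invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 20, 200_000
stim = rng.normal(size=(n, d))              # white Gaussian stimulus
w = np.zeros(d); w[3] = 1.0                 # true relevant dimension

# Toy neuron: spikes when the filtered stimulus is large in magnitude, so
# the spike-triggered mean stays ~0 but the variance along w increases.
spikes = np.abs(stim @ w) > 1.5

sta = stim[spikes].mean(axis=0)
stc = np.cov(stim[spikes], rowvar=False) - np.cov(stim, rowvar=False)
evals, evecs = np.linalg.eigh(stc)
recovered = evecs[:, np.argmax(np.abs(evals))]   # dimension with changed variance
```

With white inputs the covariance difference cleanly isolates the relevant dimension; the paper's concern is that with strongly correlated stimuli the raw covariance has outstanding eigenmodes that swamp this comparison unless the analysis is restricted to the orthogonal subspace.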
Forecasting of cyanobacterial density in Torrão reservoir using artificial neural networks.
Torres, Rita; Pereira, Elisa; Vasconcelos, Vítor; Teles, Luís Oliva
2011-06-01
The ability of general regression neural networks (GRNN) to forecast the density of cyanobacteria in the Torrão reservoir (Tâmega river, Portugal) over a period of 15 days, based on three years of collected physical and chemical data, was assessed. Several models were developed, and 176 were selected based on their correlation values for the verification series. A time lag of 11 was used, equivalent to one sample (periods of 15 days in the summer and 30 days in the winter). Several combinations of the series were used. Input and output data collected from three depths of the reservoir were applied (surface, euphotic zone limit and bottom). The model with the highest average correlation value achieved correlations of 0.991, 0.843 and 0.978 for the training, verification and test series, respectively. This model had the three series independent in time: first the test series, then the verification series and, finally, the training series. Only six input variables were considered significant to the performance of this model: ammonia, phosphates, dissolved oxygen, water temperature, pH and water evaporation, with the physical and chemical parameters referring to the three depths of the reservoir. These variables are common to the next four best models produced and, although those included other input variables, their performance was not better than that of the selected best model.
DOE Office of Scientific and Technical Information (OSTI.GOV)
LaFarge, R.A.
1990-05-01
MCPRAM (Monte Carlo PReprocessor for AMEER), a computer program that uses Monte Carlo techniques to create an input file for the AMEER trajectory code, has been developed for the Sandia National Laboratories VAX and Cray computers. Users can select the number of trajectories to compute, which AMEER variables to investigate, and the type of probability distribution for each variable. Any legal AMEER input variable can be investigated anywhere in the input run stream with either a normal, uniform, or Rayleigh distribution. Users also have the option to use covariance matrices for the investigation of certain correlated variables such as booster pre-reentry errors and wind, axial force, and atmospheric models. In conjunction with MCPRAM, AMEER was modified to include the variables introduced by the covariance matrices and to include provisions for six types of fuze models. The new fuze models and the new AMEER variables are described in this report.
Cloern, James E.; Jassby, Alan D.; Carstensen, Jacob; Bennett, William A.; Kimmerer, Wim; Mac Nally, Ralph; Schoellhamer, David H.; Winder, Monika
2012-01-01
We comment on a nonstandard statistical treatment of time-series data first published by Breton et al. (2006) in Limnology and Oceanography and, more recently, used by Glibert (2010) in Reviews in Fisheries Science. In both papers, the authors make strong inferences about the underlying causes of population variability based on correlations between cumulative sum (CUSUM) transformations of organism abundances and environmental variables. Breton et al. (2006) reported correlations between CUSUM-transformed values of diatom biomass in Belgian coastal waters and the North Atlantic Oscillation, and between meteorological and hydrological variables. Each correlation of CUSUM-transformed variables was judged to be statistically significant. On the basis of these correlations, Breton et al. (2006) developed "the first evidence of synergy between climate and human-induced river-based nitrate inputs with respect to their effects on the magnitude of spring Phaeocystis colony blooms and their dominance over diatoms."
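The statistical concern is easy to reproduce: CUSUM-transformed white noise behaves like a random walk, so two completely independent series often appear strongly correlated after the transform. A short numpy demonstration (synthetic data, for illustration only):

```python
import numpy as np

def cusum(x):
    """Cumulative sum of deviations from the series mean (the transform at issue)."""
    return np.cumsum(x - x.mean())

rng = np.random.default_rng(11)
r_raw, r_cusum = [], []
for _ in range(500):
    a, b = rng.normal(size=300), rng.normal(size=300)   # independent white noise
    r_raw.append(np.corrcoef(a, b)[0, 1])
    r_cusum.append(np.corrcoef(cusum(a), cusum(b))[0, 1])

spread_raw, spread_cusum = np.std(r_raw), np.std(r_cusum)
```

The correlation of the raw series clusters tightly around zero, while the CUSUM-transformed pairs routinely show large positive or negative correlations, so significance judged against ordinary correlation thresholds is spurious.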
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cho, Daniel D; Wernicke, A Gabriella; Nori, Dattatreyudu
Purpose/Objective(s): The aim of this study is to build an estimator of toxicity for head and neck cancer patients using an artificial neural network (ANN). Materials/Methods: An ANN can combine variables into a predictive model during training and consider all possible correlations among the variables. We constructed an ANN based on data from 73 patients with advanced head and neck cancer treated with external beam radiotherapy and/or chemotherapy at our institution. For the toxicity estimator we defined input data including age, sex, site, stage, pathology, chemotherapy status, external beam radiation therapy (EBRT) technique, length of treatment, EBRT dose, post-operative status, length of follow-up, and the status of local recurrence and distant metastasis. These data were digitized based on their significance and fed to the ANN as input nodes. We used 20 hidden nodes (for the 13 input nodes) to capture the correlations among the input nodes. For training the ANN, we divided the data into three subsets: a training set, a validation set and a test set. Finally, we built the toxicity estimator from the ANN output. Results: We used 13 input variables, including the status of local recurrence and distant metastasis, and 20 hidden nodes for correlations. We used 59 patients for the training set, 7 patients for the validation set and 7 patients for the test set, and fed the inputs to the Matlab neural network fitting tool. We trained the network to within 15% error on the outcome. The resulting toxicity estimator achieved 74% accuracy. Conclusion: We showed in principle that an ANN can be a very useful tool for predicting RT outcomes for high-risk head and neck patients. We are currently improving the results using cross-validation.
Partial Granger causality--eliminating exogenous inputs and latent variables.
Guo, Shuixia; Seth, Anil K; Kendrick, Keith M; Zhou, Cong; Feng, Jianfeng
2008-07-15
Attempts to identify causal interactions in multivariable biological time series (e.g., gene data, protein data, physiological data) can be undermined by the confounding influence of environmental (exogenous) inputs. Compounding this problem, we are commonly only able to record a subset of all related variables in a system. These recorded variables are likely to be influenced by unrecorded (latent) variables. To address this problem, we introduce a novel variant of a widely used statistical measure of causality--Granger causality--that is inspired by the definition of partial correlation. Our 'partial Granger causality' measure is extensively tested with toy models, both linear and nonlinear, and is applied to experimental data: in vivo multielectrode array (MEA) local field potentials (LFPs) recorded from the inferotemporal cortex of sheep. Our results demonstrate that partial Granger causality can reveal the underlying interactions among elements in a network in the presence of exogenous inputs and latent variables in many cases where the existing conditional Granger causality fails.
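The standard Granger construction that "partial Granger causality" extends can be sketched as a residual-variance comparison: past values of x help predict y if the full autoregression fits better than the restricted one. The sketch below is a minimal bivariate version on synthetic data, not the authors' partial variant, which additionally conditions out exogenous and latent influences.

```python
import numpy as np

def granger_ratio(x, y, lag=2):
    """Restricted/full residual-variance ratio: values well above 1 suggest
    that past x improves prediction of y beyond y's own past (Granger sense)."""
    n = len(y)
    rows = range(lag, n)
    Y = np.array([y[t] for t in rows])
    own = np.array([[y[t - k] for k in range(1, lag + 1)] for t in rows])
    both = np.array([[y[t - k] for k in range(1, lag + 1)] +
                     [x[t - k] for k in range(1, lag + 1)] for t in rows])
    res_r = Y - own @ np.linalg.lstsq(own, Y, rcond=None)[0]
    res_f = Y - both @ np.linalg.lstsq(both, Y, rcond=None)[0]
    return res_r.var() / res_f.var()

rng = np.random.default_rng(1)
x = rng.normal(size=500)
y = np.zeros(500)
for t in range(1, 500):            # y is driven by past x
    y[t] = 0.5 * y[t - 1] + 0.8 * x[t - 1] + 0.1 * rng.normal()

ratio_xy = granger_ratio(x, y)     # large: x Granger-causes y
ratio_yx = granger_ratio(y, x)     # near 1: no causation in reverse
```

The partial variant replaces these plain residual variances with ones from which the common influence of conditioning signals has been projected out, by analogy with partial correlation.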
NASA Astrophysics Data System (ADS)
Wang, Huiqin; Wang, Xue; Lynette, Kibe; Cao, Minghua
2018-06-01
The performance of multiple-input multiple-output wireless optical communication systems that adopt Q-ary pulse position modulation over a spatially correlated log-normal fading channel is analyzed in terms of un-coded bit error rate and ergodic channel capacity. The analysis is based on Wilkinson's method, which approximates the distribution of a sum of correlated log-normal random variables by a single log-normal random variable. The analytical and simulation results corroborate that increasing the correlation coefficients among sub-channels degrades system performance. Moreover, receiver diversity offers better resistance to the channel fading caused by spatial correlation.
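Wilkinson's moment-matching approximation can be sketched directly from the lognormal moment formulas: compute the exact mean and variance of the correlated sum, then pick the lognormal with those moments. The channel parameters below are illustrative, not values from the paper.

```python
import numpy as np

def fenton_wilkinson(mu, sigma, rho):
    """Moment-match the sum of correlated lognormals exp(N(mu_i, sigma_i^2))
    to a single lognormal exp(N(mu_z, sigma_z^2)) (Wilkinson's method)."""
    mu, sigma = np.asarray(mu, float), np.asarray(sigma, float)
    means = np.exp(mu + sigma**2 / 2)
    m1 = means.sum()                               # exact mean of the sum
    # exact covariance of the lognormal pair (i, j)
    cov = np.outer(means, means) * (np.exp(rho * np.outer(sigma, sigma)) - 1)
    var = cov.sum()                                # exact variance of the sum
    sigma_z2 = np.log(1 + var / m1**2)
    mu_z = np.log(m1) - sigma_z2 / 2
    return mu_z, np.sqrt(sigma_z2)

# two sub-channels with correlation coefficient 0.5 (hypothetical numbers)
rho = np.array([[1.0, 0.5], [0.5, 1.0]])
mu_z, sigma_z = fenton_wilkinson([0.0, 0.0], [0.25, 0.25], rho)
```

By construction the matched lognormal reproduces the first two moments of the sum exactly; the approximation error lies in the higher moments.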
Bazzani, Armando; Castellani, Gastone C; Cooper, Leon N
2010-05-01
We analyze the effects of noise correlations in the input to, or among, Bienenstock-Cooper-Munro neurons using the Wigner semicircular law to construct random, positive-definite symmetric correlation matrices and compute their eigenvalue distributions. In the finite dimensional case, we compare our analytic results with numerical simulations and show the effects of correlations on the lifetimes of synaptic strengths in various visual environments. These correlations can be due either to correlations in the noise from the input lateral geniculate nucleus neurons, or correlations in the variability of lateral connections in a network of neurons. In particular, we find that for fixed dimensionality, a large noise variance can give rise to long lifetimes of synaptic strengths. This may be of physiological significance.
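One way to realize the construction described, random symmetric positive-definite matrices whose spectrum follows the semicircle law, is to perturb the identity with a scaled Wigner matrix. The scaling c below is an illustrative choice that keeps the matrix positive definite, not a value from the paper, and the result is only correlation-like (its diagonal is approximately, not exactly, one).

```python
import numpy as np

rng = np.random.default_rng(2)
n, c = 400, 0.4

# symmetric Gaussian (Wigner) matrix; after /sqrt(n) its eigenvalues follow
# the semicircle law on approximately [-2, 2]
g = rng.normal(size=(n, n))
w = (g + g.T) / np.sqrt(2 * n)

corr = np.eye(n) + c * w          # perturbed identity: random symmetric matrix
eig = np.linalg.eigvalsh(corr)    # spectrum: semicircle centered at 1, width 4c
```

Since the Wigner spectrum is confined to roughly [-2, 2], any c < 1/2 yields a positive-definite matrix, which is what the eigenvalue-distribution analysis requires.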
Amiryousefi, Mohammad Reza; Mohebbi, Mohebbat; Khodaiyan, Faramarz
2014-01-01
The objectives of this study were to use image analysis and an artificial neural network (ANN) to predict mass transfer kinetics as well as color changes and shrinkage of deep-fat fried ostrich meat cubes. Two generalized feedforward networks were separately developed using the operating conditions as inputs. High correlation coefficients between experimental and predicted values indicated proper fitting. Sensitivity analysis of the selected ANNs showed that, among the outputs, moisture content (MC) and fat content (FC) were most sensitive to frying temperature compared with the other input variables. Similarly, for the second ANN architecture, microwave power density was the most influential variable, having the maximum effect on both shrinkage percentage and color changes. Copyright © 2013 Elsevier Ltd. All rights reserved.
Fuzzy parametric uncertainty analysis of linear dynamical systems: A surrogate modeling approach
NASA Astrophysics Data System (ADS)
Chowdhury, R.; Adhikari, S.
2012-10-01
Uncertainty propagation in engineering systems poses significant computational challenges. This paper explores the possibility of using a correlated function expansion based metamodelling approach when uncertain system parameters are modeled using fuzzy variables. In particular, the application of High-Dimensional Model Representation (HDMR) is proposed for fuzzy finite element analysis of dynamical systems. The HDMR expansion is a set of quantitative model assessment and analysis tools for capturing high-dimensional input-output system behavior based on a hierarchy of functions of increasing dimensionality. The input variables may be either finite-dimensional (i.e., a vector of parameters chosen from the Euclidean space R^M) or infinite-dimensional, as in the function space C^M[0,1]. The computational effort to determine the expansion functions using the alpha-cut method scales polynomially with the number of variables rather than exponentially. This logic is based on the fundamental assumption underlying the HDMR representation that only low-order correlations among the input variables are likely to have significant impacts on the outputs of most high-dimensional complex systems. The proposed method is integrated with commercial finite element software. Modal analysis of a simplified aircraft wing with fuzzy parameters is used to illustrate the generality of the proposed approach. In the numerical examples, triangular membership functions are used and the results are validated against direct Monte Carlo simulations.
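The hierarchy of component functions can be illustrated with a first-order cut-HDMR sketch: each univariate component is the model evaluated along one axis through a fixed cut point, minus the constant term. For a purely additive model the first-order expansion is exact, which gives a simple correctness check. The function and cut point below are illustrative only.

```python
import numpy as np

def cut_hdmr_1(f, x0, grids):
    """First-order cut-HDMR: f(x) ~ f0 + sum_i f_i(x_i), where
    f_i(x_i) = f(x0 with i-th entry replaced by x_i) - f0."""
    f0 = f(np.asarray(x0, float))
    comps = []
    for i, grid in enumerate(grids):
        vals = []
        for xi in grid:
            x = np.array(x0, float)
            x[i] = xi
            vals.append(f(x) - f0)   # univariate component function f_i
        comps.append(np.array(vals))
    return f0, comps

f = lambda x: 1.0 + 2 * x[0] + 3 * x[1]    # additive test model: 1st order exact
x0 = [0.0, 0.0]                            # the "cut" point
grids = [np.linspace(-1, 1, 5)] * 2
f0, comps = cut_hdmr_1(f, x0, grids)
```

The number of model evaluations here grows linearly with the number of inputs, the polynomial (rather than exponential) scaling the abstract refers to; higher-order terms add pairwise cuts only where interactions matter.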
NASA Astrophysics Data System (ADS)
Levi, L.; Cvetkovic, V.; Destouni, G.
2015-12-01
This study compiles estimates of waterborne nutrient concentrations and loads in the Sava River Catchment (SRC). Based on this compilation, we investigate hotspots of nutrient inputs and retention along the river, as well as concentration and load correlations with river discharge and various human drivers of excess nutrient inputs to the SRC. For cross-regional assessment and possible generalization, we also compare corresponding results between the SRC and the Baltic Sea Drainage Basin (BSDB). In the SRC, one small incremental subcatchment, which is located just downstream of Zagreb and has the highest population density among the SRC subcatchments, is identified as a major hotspot for net loading (input minus retention) of both total nitrogen (TN) and total phosphorus (TP) to the river and through it to downstream areas of the SRC. The other SRC subcatchments exhibit relatively similar characteristics with smaller net nutrient loading. The annual loads of both TN and TP along the Sava River exhibit dominant temporal variability, with considerably higher correlation with annual river discharge (R² = 0.51 and 0.28, respectively) than that of annual average nutrient concentrations (R² = 0.0 versus discharge for both TN and TP). Nutrient concentrations instead exhibit dominant spatial variability, with relatively high correlation with population density among the SRC subcatchments (R² = 0.43-0.64). These SRC correlation characteristics compare well with corresponding ones for the BSDB, even though the two regions are quite different in their hydroclimatic, agricultural, and wastewater treatment conditions. Such cross-regional consistency in dominant variability type and explanatory catchment characteristics may be a useful basis for generalization, worthy of further investigation, for at least first-order estimation of nutrient concentration and load conditions in less data-rich regions.
Data-driven process decomposition and robust online distributed modelling for large-scale processes
NASA Astrophysics Data System (ADS)
Shu, Zhang; Lijuan, Li; Lijuan, Yao; Shipin, Yang; Tao, Zou
2018-02-01
With the increasing attention to networked control, system decomposition and distributed models are of significant importance in the implementation of model-based control strategies. In this paper, a data-driven system decomposition and online distributed subsystem modelling algorithm is proposed for large-scale chemical processes. The key controlled variables are first partitioned into several clusters by an affinity propagation clustering algorithm; each cluster can be regarded as a subsystem. The inputs of each subsystem are then selected by offline canonical correlation analysis between all process variables and the subsystem's controlled variables. Process decomposition is thus realised after the screening of input and output variables. Once the system decomposition is finished, online subsystem modelling can be carried out by recursively renewing the samples block-wise. The proposed algorithm was applied to the Tennessee Eastman process and its validity was verified.
Van Hertem, T; Maltz, E; Antler, A; Romanini, C E B; Viazzi, S; Bahr, C; Schlageter-Tello, A; Lokhorst, C; Berckmans, D; Halachmi, I
2013-07-01
The objective of this study was to develop and validate a mathematical model to detect clinical lameness based on existing sensor data relating to the behavior and performance of cows on a commercial dairy farm. Identification of lame (44) and not lame (74) cows in the database was based on the farm's daily herd health reports. All cows were equipped with a behavior sensor that measured neck activity and ruminating time. The cows' performance was measured with a milk yield meter in the milking parlor. In total, 38 model input variables were constructed from the sensor data, comprising absolute values, relative values, daily standard deviations, slope coefficients, daytime and nighttime periods, variables related to individual temperament, and milk session-related variables. Daily data were compared between the lame group (cows recognized and treated for lameness) and the not lame group. Correlations between the dichotomous output variable (lame or not lame) and the model input variables were computed. The highest correlation coefficient was obtained for the milk yield variable (r_MY = 0.45). In addition, a logistic regression model was developed based on the 7 most highly correlated model input variables (the daily milk yield 4 d before diagnosis; the slope coefficient of the daily milk yield 4 d before diagnosis; the nighttime to daytime neck activity ratio 6 d before diagnosis; the milk yield week difference ratio 4 d before diagnosis; the milk yield week difference 4 d before diagnosis; the neck activity level during the daytime 7 d before diagnosis; the ruminating time during nighttime 6 d before diagnosis). After a 10-fold cross-validation, the model obtained a sensitivity of 0.89 and a specificity of 0.85, with a correct classification rate of 0.86 when based on the averaged 10-fold model coefficients.
This study demonstrates that existing farm data initially used for other purposes, such as heat detection, can be exploited for the automated detection of clinically lame animals on a daily basis as well. Copyright © 2013 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
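The cross-validated logistic classifier and its sensitivity/specificity evaluation can be sketched as below. The class sizes mirror the study (44 lame, 74 not lame), but the features and their effect sizes are entirely synthetic stand-ins for the sensor-derived variables.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(4)
n = 118                                   # 44 lame + 74 not lame, as in the study
y = np.r_[np.ones(44), np.zeros(74)].astype(int)

# 7 hypothetical sensor-derived inputs; only the first two carry signal
X = rng.normal(size=(n, 7))
X[:, 0] -= 1.5 * y                        # e.g., lame cows: lower milk yield
X[:, 1] += 1.0 * y                        # e.g., shifted night/day activity ratio

# 10-fold cross-validated out-of-sample predictions
pred = cross_val_predict(LogisticRegression(max_iter=1000), X, y, cv=10)
sens = ((pred == 1) & (y == 1)).sum() / (y == 1).sum()
spec = ((pred == 0) & (y == 0)).sum() / (y == 0).sum()
```

Evaluating sensitivity and specificity on out-of-fold predictions, as here, avoids the optimistic bias of scoring the model on its own training data.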
Artificial neural network modeling of dissolved oxygen in the Heihe River, Northwestern China.
Wen, Xiaohu; Fang, Jing; Diao, Meina; Zhang, Chuanqi
2013-05-01
Identification and quantification of dissolved oxygen (DO) profiles of a river is one of the primary concerns for water resources managers. In this research, an artificial neural network (ANN) was developed to simulate the DO concentrations in the Heihe River, Northwestern China. A three-layer back-propagation ANN was used with the Bayesian regularization training algorithm. The input variables of the neural network were pH, electrical conductivity, chloride (Cl(-)), calcium (Ca(2+)), total alkalinity, total hardness, nitrate nitrogen (NO3-N), and ammonical nitrogen (NH4-N). The ANN structure with 14 hidden neurons gave the best performance. Comparison between the results of the ANN model and the measured data on the basis of correlation coefficient (r) and root mean square error (RMSE) showed a good fit of the modeled DO values, indicating the effectiveness of the neural network model. The correlation coefficient (r) values for the training, validation, and test sets were 0.9654, 0.9841, and 0.9680, respectively, and the corresponding RMSE values were 0.4272, 0.3667, and 0.4570. Sensitivity analysis was used to determine the influence of input variables on the dependent variable. The most effective inputs were determined to be pH, NO3-N, NH4-N, and Ca(2+); Cl(-) was found to be the least effective variable in the proposed model. The identified ANN model can be used to simulate water quality parameters.
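A skeleton of such a DO regression network is sketched below on synthetic data. Note that scikit-learn's MLPRegressor does not offer the Bayesian regularization trainer used in the study; an L2 penalty (alpha) is used here as the closest readily available stand-in, and the 8 inputs and their coefficients are hypothetical.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(5)
n = 400
# 8 standardized stand-ins for pH, EC, Cl-, Ca2+, alkalinity, hardness, NO3-N, NH4-N
X = rng.normal(size=(n, 8))
do = 8 + 0.8 * X[:, 0] - 0.6 * X[:, 6] - 0.5 * X[:, 7] + 0.1 * rng.normal(size=n)

X_tr, X_te, y_tr, y_te = train_test_split(X, do, test_size=0.25, random_state=0)
# 14 hidden neurons as in the study; L2 penalty instead of Bayesian regularization
net = MLPRegressor(hidden_layer_sizes=(14,), alpha=1e-3, max_iter=3000,
                   random_state=0).fit(X_tr, y_tr)
r = float(np.corrcoef(net.predict(X_te), y_te)[0, 1])
```

A simple perturbation-based sensitivity analysis (jitter one standardized input, watch the change in predicted DO) would then rank the inputs much as the study does.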
Multiple-input multiple-output causal strategies for gene selection.
Bontempi, Gianluca; Haibe-Kains, Benjamin; Desmedt, Christine; Sotiriou, Christos; Quackenbush, John
2011-11-25
Traditional strategies for selecting variables in high-dimensional classification problems aim to find sets of maximally relevant variables able to explain the target variations. Although these techniques may be effective in terms of generalization accuracy, they often do not reveal direct causes, essentially because high correlation (or relevance) does not imply causation. In this study, we show how to efficiently incorporate causal information into gene selection by moving from a single-input single-output to a multiple-input multiple-output setting. We show in a synthetic case study that a better prioritization of causal variables can be obtained by considering a relevance score that incorporates a causal term. In addition, we show in a meta-analysis of six publicly available breast cancer microarray datasets that the improvement also occurs in terms of accuracy. The biological interpretation of the results confirms the potential of a causal approach to gene selection. Integrating causal information into gene selection algorithms is effective both in terms of prediction accuracy and biological interpretation.
Origin of information-limiting noise correlations
Kanitscheider, Ingmar; Coen-Cagli, Ruben; Pouget, Alexandre
2015-01-01
The ability to discriminate between similar sensory stimuli relies on the amount of information encoded in sensory neuronal populations. Such information can be substantially reduced by correlated trial-to-trial variability. Noise correlations have been measured across a wide range of areas in the brain, but their origin is still far from clear. Here we show analytically and with simulations that optimal computation on inputs with limited information creates patterns of noise correlations that account for a broad range of experimental observations while at the same time causing information to saturate in large neural populations. With the example of a network of V1 neurons extracting orientation from a noisy image, we illustrate what is, to our knowledge, the first generative model of noise correlations that is consistent both with neurophysiology and with behavioral thresholds, without invoking suboptimal encoding or decoding or internal sources of variability such as stochastic network dynamics or cortical state fluctuations. We further show that when information is limited at the input, both suboptimal connectivity and internal fluctuations could similarly reduce the asymptotic information, but they have qualitatively different effects on correlations, leading to specific experimental predictions. Our study indicates that noise at the sensory periphery could have a major effect on cortical representations in widely studied discrimination tasks. It also provides an analytical framework to understand the functional relevance of different sources of experimentally measured correlations. PMID:26621747
Zachary A. Holden; Charles H. Luce; Michael A. Crimmins; Penelope Morgan
2011-01-01
Climate change effects on wildfire occurrence have been attributed primarily to increases in temperatures causing earlier snowpack ablation and longer fire seasons. Variability in precipitation is also an important control on snowpack accumulation and, therefore, on timing of meltwater inputs. We evaluate the correlation of total area burned and area burned severely to...
Attentional modulation of neuronal variability in circuit models of cortex
Kanashiro, Tatjana; Ocker, Gabriel Koch; Cohen, Marlene R; Doiron, Brent
2017-01-01
The circuit mechanisms behind shared neural variability (noise correlation) and its dependence on neural state are poorly understood. Visual attention is well-suited to constrain cortical models of response variability because attention both increases firing rates and their stimulus sensitivity, as well as decreases noise correlations. We provide a novel analysis of population recordings in rhesus primate visual area V4 showing that a single biophysical mechanism may underlie these diverse neural correlates of attention. We explore model cortical networks where top-down mediated increases in excitability, distributed across excitatory and inhibitory targets, capture the key neuronal correlates of attention. Our models predict that top-down signals primarily affect inhibitory neurons, whereas excitatory neurons are more sensitive to stimulus specific bottom-up inputs. Accounting for trial variability in models of state dependent modulation of neuronal activity is a critical step in building a mechanistic theory of neuronal cognition. DOI: http://dx.doi.org/10.7554/eLife.23978.001 PMID:28590902
Jackson, B Scott
2004-10-01
Many different types of integrate-and-fire models have been designed in order to explain how it is possible for a cortical neuron to integrate over many independent inputs while still producing highly variable spike trains. Within this context, the variability of spike trains has been almost exclusively measured using the coefficient of variation of interspike intervals. However, another important statistical property that has been found in cortical spike trains and is closely associated with their high firing variability is long-range dependence. We investigate the conditions, if any, under which such models produce output spike trains with both interspike-interval variability and long-range dependence similar to those that have previously been measured from actual cortical neurons. We first show analytically that a large class of high-variability integrate-and-fire models is incapable of producing such outputs, based on the fact that their output spike trains are always mathematically equivalent to renewal processes. This class of models subsumes a majority of previously published models, including those that use excitation-inhibition balance, correlated inputs, partial reset, or nonlinear leakage to produce outputs with high variability. Next, we study integrate-and-fire models that have (non-Poissonian) renewal point process inputs instead of the Poisson point process inputs used in the preceding class of models. The confluence of our analytical and simulation results implies that the renewal-input model is capable of producing high variability and long-range dependence comparable to that seen in spike trains recorded from cortical neurons, but only if the interspike intervals of the inputs have infinite variance, a physiologically unrealistic condition. Finally, we suggest a new integrate-and-fire model that does not suffer any of the previously mentioned shortcomings.
By analyzing simulation results for this model, we show that it is capable of producing output spike trains with interspike-interval variability and long-range dependence that match empirical data from cortical spike trains. This model is similar to the other models in this study, except that its inputs are fractional-Gaussian-noise-driven Poisson processes rather than renewal point processes. In addition to this model's success in producing realistic output spike trains, its inputs have long-range dependence similar to that found in most subcortical neurons in sensory pathways, including the inputs to cortex. Analysis of output spike trains from simulations of this model also shows that a tight balance between the amounts of excitation and inhibition at the inputs to cortical neurons is not necessary for high interspike-interval variability at their outputs. Furthermore, in our analysis of this model, we show that the superposition of many fractional-Gaussian-noise-driven Poisson processes does not approximate a Poisson process, which challenges the common assumption that the total effect of a large number of inputs on a neuron is well represented by a Poisson process.
Gravity dependence of the effect of optokinetic stimulation on the subjective visual vertical.
Ward, Bryan K; Bockisch, Christopher J; Caramia, Nicoletta; Bertolini, Giovanni; Tarnutzer, Alexander Andrea
2017-05-01
Accurate and precise estimates of direction of gravity are essential for spatial orientation. According to Bayesian theory, multisensory vestibular, visual, and proprioceptive input is centrally integrated in a weighted fashion based on the reliability of the component sensory signals. For otolithic input, a decreasing signal-to-noise ratio was demonstrated with increasing roll angle. We hypothesized that the weights of vestibular (otolithic) and extravestibular (visual/proprioceptive) sensors are roll-angle dependent and predicted an increased weight of extravestibular cues with increasing roll angle, potentially following the Bayesian hypothesis. To probe this concept, the subjective visual vertical (SVV) was assessed in different roll positions (≤ ± 120°, steps = 30°, n = 10) with/without presenting an optokinetic stimulus (velocity = ± 60°/s). The optokinetic stimulus biased the SVV toward the direction of stimulus rotation for roll angles ≥ ± 30° (P < 0.005). Offsets grew from 3.9 ± 1.8° (upright) to 22.1 ± 11.8° (±120° roll tilt, P < 0.001). Trial-to-trial variability increased with roll angle, demonstrating a nonsignificant increase when providing optokinetic stimulation. Variability and optokinetic bias were correlated (R² = 0.71, slope = 0.71, 95% confidence interval = 0.57-0.86). An optimal-observer model combining an optokinetic bias with vestibular input reproduced measured errors closely. These findings support the hypothesis of a weighted multisensory integration when estimating direction of gravity with optokinetic stimulation. Visual input was weighted more when vestibular input became less reliable, i.e., at larger roll-tilt angles. However, according to Bayesian theory, the variability of combined cues is always lower than the variability of each source cue.
If the observed increase in variability, although nonsignificant, is real, it must either depend on an additional source of variability added after SVV computation, or it conflicts with the Bayesian hypothesis. NEW & NOTEWORTHY Applying a rotating optokinetic stimulus while recording the subjective visual vertical at different whole-body roll angles, we noted that the optokinetic-induced bias correlated with the roll angle. These findings support the hypothesis that the established reliability-dependent optimal weighting of single-sensory cues for estimating the direction of gravity can be extended to a bias caused by visual self-motion stimuli. Copyright © 2017 the American Physiological Society.
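The Bayesian prediction tested here, that combined variability stays below either single-cue variability while the weights shift with reliability, reduces to two lines of algebra. The noise values below are illustrative, not the measured ones.

```python
import numpy as np

def combine(mu_v, var_v, mu_o, var_o):
    """Reliability-weighted (Bayesian) fusion of a vestibular estimate
    (mu_v, var_v) with a visual/optokinetic estimate (mu_o, var_o)."""
    w = var_o / (var_v + var_o)            # weight on the vestibular cue
    mu = w * mu_v + (1 - w) * mu_o
    var = var_v * var_o / (var_v + var_o)  # always below either source variance
    return mu, var

# upright vs. 120 deg roll: vestibular noise grows, so the visual cue gains weight
mu_up, var_up = combine(0.0, 2.0**2, 5.0, 8.0**2)      # reliable otoliths
mu_tilt, var_tilt = combine(0.0, 10.0**2, 5.0, 8.0**2)  # noisy otoliths at tilt
```

The combined estimate is pulled further toward the visual bias at large tilt, reproducing the growing optokinetic offset, while the combined variance formula is exactly why an observed variability increase under stimulation sits uneasily with the plain Bayesian account.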
Pirozzi, Enrica
2018-04-01
High variability in the neuronal response to stimulation and the adaptation phenomenon cannot be explained by the standard stochastic leaky integrate-and-fire model. The main reason is that the uncorrelated inputs assumed in the model are not realistic: some form of dependency exists between the inputs, which can be interpreted as a memory effect. In order to include these physiological features in the standard model, we reconsider it with time-dependent coefficients and correlated inputs. Because the resulting model is mathematically intractable, we perform simulations for a wide investigation of its output. A Gauss-Markov process is constructed to approximate its non-Markovian dynamics. The first-passage-time probability density of such a process can be numerically evaluated and used to fit the histograms of simulated firing times. Some estimates of the moments of the firing times are also provided. The effect of the correlation time of the inputs on firing densities and firing rates is shown. An exponential probability density of the first firing time is estimated for low values of input current and high values of correlation time. For comparison, a simulation-based investigation is also carried out for a fractional stochastic model that preserves the memory of the time evolution of the neuronal membrane potential. In this case, the memory parameter that affects the firing activity is the fractional derivative order. In both models an adaptation level of spike frequency is attained, albeit through different mechanisms. Comparisons and discussion of the obtained results are provided.
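A minimal version of "standard model with correlated inputs" is a leaky integrate-and-fire neuron driven by an Ornstein-Uhlenbeck current, whose correlation time tau_c plays the role of the memory effect. The parameters below are illustrative, and this sketch omits the time-dependent coefficients and the Gauss-Markov approximation developed in the paper.

```python
import numpy as np

def lif_ou(tau_m=0.02, tau_c=0.05, mu=1.2, sigma=0.5, v_th=1.0,
           dt=1e-4, t_max=5.0, seed=6):
    """Leaky integrate-and-fire neuron driven by an Ornstein-Uhlenbeck current;
    tau_c sets the input correlation time (tau_c -> 0 recovers white noise)."""
    rng = np.random.default_rng(seed)
    v, i_ou, spikes = 0.0, 0.0, []
    for step in range(int(t_max / dt)):
        # Euler step of the OU input (stationary std = sigma)
        i_ou += (-i_ou / tau_c) * dt + sigma * np.sqrt(2 * dt / tau_c) * rng.normal()
        # membrane equation with constant drive mu plus correlated noise
        v += (-v + mu + i_ou) / tau_m * dt
        if v >= v_th:                     # threshold crossing: spike and reset
            spikes.append(step * dt)
            v = 0.0
    return np.array(spikes)

spikes = lif_ou()
isi = np.diff(spikes)
cv = isi.std() / isi.mean()   # coefficient of variation of interspike intervals
```

Sweeping tau_c while holding the stationary input variance fixed is the natural numerical experiment for reproducing the correlation-time effects on firing densities and rates described above.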
Luo, Zhongkui; Feng, Wenting; Luo, Yiqi; Baldock, Jeff; Wang, Enli
2017-10-01
Soil organic carbon (SOC) dynamics are regulated by the complex interplay of climatic, edaphic and biotic conditions. However, the interrelation of SOC and these drivers and their potential connection networks are rarely assessed quantitatively. Using observations of SOC dynamics with detailed soil properties from 90 field trials at 28 sites under different agroecosystems across the Australian cropping regions, we investigated the direct and indirect effects of climate, soil properties, carbon (C) inputs and soil C pools (a total of 17 variables) on the SOC change rate (r_C, Mg C ha⁻¹ yr⁻¹). Among these variables, we found that the most influential on r_C were the average C input amount, annual precipitation, and the total SOC stock at the beginning of the trials. Overall, C inputs (including C input amount and pasture frequency in the crop rotation system) accounted for 27% of the relative influence on r_C, followed by climate (precipitation and temperature) at 25%, soil C pools (pool size and composition) at 24%, and soil properties (such as cation exchange capacity, clay content and bulk density) at 24%. Path analysis identified a network of intercorrelations of climate, soil properties, C inputs and soil C pools in determining r_C. The direct correlation of r_C with climate was significantly weakened when the effects of soil properties and C pools were removed, and vice versa. These results reveal the relative importance of climate, soil properties, C inputs and C pools and their complex interconnections in regulating SOC dynamics. Ignoring the impact of changes in soil properties, C pool composition and C input (quantity and quality) on SOC dynamics is likely one of the main sources of uncertainty in SOC predictions from process-based SOC models. © 2017 John Wiley & Sons Ltd.
A Non-Simulation Based Method for Inducing Pearson’s Correlation Between Input Random Variables
2008-04-23
[Table fragment: Ship Work Breakdown Structure (SWBS) cost groups with Upside/Probable/Downside estimates: 000 Administration, 100 Hull, 200 Propulsion, 300 Electric Plant, 400 Electronics Systems, 500 Auxiliary Systems, 600 Outfit & Furnishings, 700 Weapons, 800 Integration & Engineering, 900 Ship Assembly & Support, Total. Cell values not recoverable.]
Relaxation method of compensation in an optical correlator
NASA Technical Reports Server (NTRS)
Juday, Richard D.; Daiuto, Brian J.
1987-01-01
An iterative method is proposed for the sharpening of programmable filters in a 4-f optical correlator. Continuously variable spatial light modulators (SLMs) permit the fine adjustment of optical processing filters so as to compensate for the departures from ideal behavior of a real optical system. Although motivated by the development of continuously variable phase-only SLMs, the proposed sharpening method is also applicable to amplitude modulators and, with appropriate adjustments, to binary modulators as well. A computer simulation is presented that illustrates the potential effectiveness of the method: an image is placed on the input to the correlator, and its corresponding phase-only filter is adjusted (allowed to relax) so as to produce a progressively brighter and more centralized peak in the correlation plane. The technique is highly robust against the form of the system's departure from ideal behavior.
Liu, Yu; Xi, Du-Gang; Li, Zhao-Liang
2015-01-01
Predicting the levels of chlorophyll-a (Chl-a) is a vital component of water quality management, ensuring that urban drinking water is safe from harmful algal blooms. This study developed a model to predict biweekly Chl-a levels in the Yuqiao Reservoir (Tianjin, China) using water quality and meteorological data from 1999-2012. First, six artificial neural networks (ANNs) and two non-ANN methods (principal component analysis and the support vector regression model) were compared to determine the appropriate training principle. Subsequently, three predictors with different input variables were developed to examine the feasibility of incorporating meteorological factors into Chl-a prediction, which usually relies only on water quality data. Finally, a sensitivity analysis was performed to examine how the Chl-a predictor reacts to changes in input variables. The results were as follows. First, the ANN is a powerful predictive alternative to the traditional modeling techniques used for Chl-a prediction. The back-propagation (BP) model yields slightly better results than all other ANNs, with the normalized mean square error (NMSE), the correlation coefficient (Corr), and the Nash-Sutcliffe coefficient of efficiency (NSE) at 0.003 mg/l, 0.880 and 0.754, respectively, in the testing period. Second, the incorporation of meteorological data greatly improved Chl-a prediction compared to models using only water quality factors or only meteorological data; the correlation coefficient increased from 0.574-0.686 to 0.880 when meteorological data were included. Finally, the Chl-a predictor is more sensitive to air pressure and pH than to the other water quality and meteorological variables.
2015-10-28
…techniques such as regression analysis, correlation, and multicollinearity assessment to identify the change and error on the input to the model… When high correlations exist between many of the independent or predictor variables, the issue of multicollinearity may arise [18].
Variable Selection through Correlation Sifting
NASA Astrophysics Data System (ADS)
Huang, Jim C.; Jojic, Nebojsa
Many applications of computational biology require a variable selection procedure to sift through a large number of input variables and select some smaller number that influence a target variable of interest. For example, in virology, only some small number of viral protein fragments influence the nature of the immune response during viral infection. Due to the large number of variables to be considered, a brute-force search for the subset of variables is in general intractable. To approximate this, methods based on ℓ1-regularized linear regression have been proposed and have been found to be particularly successful. It is well understood however that such methods fail to choose the correct subset of variables if these are highly correlated with other "decoy" variables. We present a method for sifting through sets of highly correlated variables which leads to higher accuracy in selecting the correct variables. The main innovation is a filtering step that reduces correlations among variables to be selected, making the ℓ1-regularization effective for datasets on which many methods for variable selection fail. The filtering step changes both the values of the predictor variables and output values by projections onto components obtained through a computationally-inexpensive principal components analysis. In this paper we demonstrate the usefulness of our method on synthetic datasets and on novel applications in virology. These include HIV viral load analysis based on patients' HIV sequences and immune types, as well as the analysis of seasonal variation in influenza death rates based on the regions of the influenza genome that undergo diversifying selection in the previous season.
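The general idea, decorrelate the variables by projecting out dominant principal components from both predictors and response, then apply ℓ1-regularized selection, can be sketched as below. This is one possible reading of the filtering step on synthetic data with a single shared confounding factor, not the authors' exact algorithm, and the regularization strength alpha is an illustrative choice.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import Lasso

rng = np.random.default_rng(7)
n, p = 200, 20
shared = rng.normal(size=(n, 1))                   # common factor creating decoys
X = shared + 0.3 * rng.normal(size=(n, p))         # all predictors intercorrelated
y = X[:, 0] - X[:, 1] + 0.1 * rng.normal(size=n)   # only variables 0 and 1 matter

# filtering step: project the top principal component out of both the
# predictors and the response before the l1-regularized fit
pca = PCA(n_components=1).fit(X)
scores = pca.transform(X)
X_f = X - pca.inverse_transform(scores)
y_f = y - scores @ np.linalg.lstsq(scores, y, rcond=None)[0]

coef = Lasso(alpha=0.05).fit(X_f, y_f).coef_
selected = np.flatnonzero(np.abs(coef) > 1e-6)
```

Without the filtering step, the shared factor makes every predictor correlate with y, so a plain Lasso tends to pick decoys; after filtering, the residual signal concentrates on the truly relevant variables.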
Stochastic analysis of multiphase flow in porous media: II. Numerical simulations
NASA Astrophysics Data System (ADS)
Abin, A.; Kalurachchi, J. J.; Kemblowski, M. W.; Chang, C.-M.
1996-08-01
The first paper (Chang et al., 1995b) of this two-part series described a stochastic analysis using the spectral/perturbation approach to analyze steady-state two-phase (water and oil) flow in a liquid-unsaturated, three-fluid-phase porous medium. In this paper, the results of numerical simulations are compared with the closed-form expressions obtained using the perturbation approach. We present the solution to the one-dimensional, steady-state oil and water flow equations. The stochastic input processes are the spatially correlated log k, where k is the intrinsic permeability, and the soil retention parameter, α. These solutions are subsequently used in the numerical simulations to estimate the statistical properties of the key output processes. The comparison between the results of the perturbation analysis and the numerical simulations showed good agreement between the two methods over a wide range of log k variability for three different combinations of the input stochastic processes of log k and the soil parameter α. The results clearly demonstrated the importance of considering the spatial variability of key subsurface properties under a variety of physical scenarios. The variability of both capillary pressure and saturation is affected by the type of input stochastic process used to represent the spatial variability. The results also demonstrated the applicability of perturbation theory in predicting the system variability and defining effective fluid properties through the ergodic assumption.
Sequential Modular Position and Momentum Measurements of a Trapped Ion Mechanical Oscillator
NASA Astrophysics Data System (ADS)
Flühmann, C.; Negnevitsky, V.; Marinelli, M.; Home, J. P.
2018-04-01
The noncommutativity of position and momentum observables is a hallmark feature of quantum physics. However, this incompatibility does not extend to observables that are periodic in these base variables. Such modular-variable observables have been suggested as tools for fault-tolerant quantum computing and enhanced quantum sensing. Here, we implement sequential measurements of modular variables in the oscillatory motion of a single trapped ion, using state-dependent displacements and a heralded nondestructive readout. We investigate the commutative nature of modular variable observables by demonstrating no-signaling in time between successive measurements, using a variety of input states. Employing a different periodicity, we observe signaling in time. This also requires wave-packet overlap, resulting in quantum interference that we enhance using squeezed input states. The sequential measurements allow us to extract two-time correlators for modular variables, which we use to violate a Leggett-Garg inequality. Signaling in time and Leggett-Garg inequalities serve as efficient quantum witnesses, which we probe here with a mechanical oscillator, a system that has a natural crossover from the quantum to the classical regime.
High dimensional model representation method for fuzzy structural dynamics
NASA Astrophysics Data System (ADS)
Adhikari, S.; Chowdhury, R.; Friswell, M. I.
2011-03-01
Uncertainty propagation in multi-parameter complex structures poses significant computational challenges. This paper investigates the possibility of using the High Dimensional Model Representation (HDMR) approach when uncertain system parameters are modeled using fuzzy variables. In particular, the application of HDMR is proposed for fuzzy finite element analysis of linear dynamical systems. The HDMR expansion is an efficient formulation for high-dimensional mapping in complex systems if the higher-order variable correlations are weak, thereby permitting the input-output behavior to be captured by low-order terms. The computational effort to determine the expansion functions using the α-cut method scales polynomially with the number of variables rather than exponentially. This logic is based on the fundamental assumption underlying the HDMR representation that only low-order correlations among the input variables are likely to have significant impacts upon the outputs of most high-dimensional complex systems. The proposed method is first illustrated for multi-parameter nonlinear mathematical test functions with fuzzy variables. The method is then integrated with a commercial finite element software (ADINA). Modal analysis of a simplified aircraft wing with fuzzy parameters is used to illustrate the generality of the proposed approach. In the numerical examples, triangular membership functions are used and the results are validated against direct Monte Carlo simulations. It is shown that using the proposed HDMR approach, the number of finite element function calls can be reduced without significantly compromising accuracy.
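The core of the approach can be illustrated with a first-order cut-HDMR expansion. This is a generic sketch, not the paper's fuzzy/α-cut implementation: the anchor point and the test function are arbitrary assumptions, chosen additive so that the first-order expansion is exact.

```python
import numpy as np

def cut_hdmr_first_order(f, x0):
    """First-order cut-HDMR surrogate anchored at x0:
    f(x) ~ f(x0) + sum_i [ f(x0 with component i set to x_i) - f(x0) ].
    Each component function needs only 1-D sweeps through the anchor,
    so the cost grows polynomially, not exponentially, in dimension."""
    f0 = f(x0)

    def surrogate(x):
        val = f0
        for i in range(len(x0)):
            xi = x0.copy()
            xi[i] = x[i]
            val += f(xi) - f0
        return val

    return surrogate

f = lambda x: x[0] ** 2 + 2 * x[1] + 0.5 * x[2]   # additive test function (assumed)
x0 = np.array([0.5, 0.5, 0.5])                    # anchor ("cut") point (assumed)
s = cut_hdmr_first_order(f, x0)
```

For functions with weak variable interactions, the surrogate error is governed by the neglected second-order terms; for this additive example it is zero.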
Marder, Eve
2015-01-01
For decades, the episodic gastric rhythm of the crustacean stomatogastric nervous system (STNS) has served as an important model system for understanding the generation of rhythmic motor behaviors. Here we quantitatively describe many features of the gastric rhythm of the crab Cancer borealis under several conditions. First, we analyzed spontaneous gastric rhythms produced by freshly dissected preparations of the STNS, including the cycle frequency and phase relationships among gastric units. We find that phase is relatively conserved across frequency, similar to the pyloric rhythm. We also describe relationships between these two rhythms, including a significant gastric/pyloric frequency correlation. We then performed continuous, days-long extracellular recordings of gastric activity from preparations of the STNS in which neuromodulatory inputs to the stomatogastric ganglion were left intact and also from preparations in which these modulatory inputs were cut (decentralization). This allowed us to provide quantitative descriptions of variability and phase conservation within preparations across time. For intact preparations, gastric activity was more variable than pyloric activity but remained relatively stable across 4–6 days, and many significant correlations were found between phase and frequency within animals. Decentralized preparations displayed fewer episodes of gastric activity, with altered phase relationships, lower frequencies, and reduced coordination both among gastric units and between the gastric and pyloric rhythms. Together, these results provide insight into the role of neuromodulation in episodic pattern generation and the extent of animal-to-animal variability in features of spontaneously occurring gastric rhythms. PMID:26156388
Computational implications of activity-dependent neuronal processes
NASA Astrophysics Data System (ADS)
Goldman, Mark Steven
Synapses, the connections between neurons, often fail to transmit a large percentage of the action potentials that they receive. I describe several models of synaptic transmission at a single stochastic synapse with an activity-dependent probability of transmission and demonstrate how synaptic transmission failures may increase the efficiency with which a synapse transmits information. Spike trains in the visual cortex of freely viewing monkeys have positive autocorrelations that are indicative of a redundant representation of the information they contain. I show how a synapse with activity-dependent transmission failures, modeled after those occurring in visual cortical synapses, can remove this redundancy by transmitting a decorrelated subset of the spike trains it receives. I suggest that redundancy reduction at individual synapses saves synaptic resources while increasing the sensitivity of the postsynaptic neuron to information arriving along many inputs. For a neuron receiving input from many decorrelating synapses, my analysis leads to a prediction of the number of visual inputs to a neuron and the cross-correlations between these inputs, and suggests that the time scale of synaptic dynamics observed in sensory areas corresponds to a fundamental time scale for processing sensory information. Systems with activity-dependent changes in their parameters, or plasticity, often display a wide variability in their individual components that belies the stability of their function. Motivated by experiments demonstrating that identified neurons with stereotyped function can have a large variability in the densities of their ion channels, or ionic conductances, I build a conductance-based model of a single neuron. The neuron's firing activity is relatively insensitive to changes in certain combinations of conductances, but markedly sensitive to changes in other combinations.
Using a combined modeling and experimental approach, I show that neuromodulators and regulatory processes target sensitive combinations of conductances. I suggest that the variability observed in conductance measurements occurs along insensitive combinations of conductances and could result from homeostatic processes that allow the neuron's conductances to drift without triggering activity-dependent feedback mechanisms. These results together suggest that plastic systems may have a high degree of flexibility and variability in their components without a loss of robustness in their response properties.
NASA Astrophysics Data System (ADS)
Wang, Lijuan; Yan, Yong; Wang, Xue; Wang, Tao
2017-03-01
Input variable selection is an essential step in the development of data-driven models for environmental, biological and industrial applications. By eliminating irrelevant or redundant variables, input variable selection identifies a suitable subset of variables as the input of a model; it also simplifies the model structure and improves computational efficiency. This paper describes the procedures of input variable selection for data-driven models for the measurement of liquid mass flowrate and gas volume fraction under two-phase flow conditions using Coriolis flowmeters. Three advanced input variable selection methods, including partial mutual information (PMI), genetic algorithm-artificial neural network (GA-ANN) and tree-based iterative input selection (IIS), are applied in this study. Typical data-driven models incorporating support vector machine (SVM) are established individually based on the input candidates resulting from the selection methods. The validity of the selection outcomes is assessed through an output performance comparison of the SVM-based data-driven models and sensitivity analysis. The validation and analysis results suggest that the input variables selected by the PMI algorithm provide more effective information for the models to measure liquid mass flowrate, while the IIS algorithm provides fewer but more effective variables for the models to predict gas volume fraction.
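A simplified sketch of the filter-then-model workflow described above, using plain mutual information ranking rather than the iterative PMI algorithm of the paper; the synthetic six-feature dataset and the choice of keeping two variables are assumptions for illustration.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_regression
from sklearn.svm import SVR

rng = np.random.default_rng(1)
n = 300
X = rng.normal(size=(n, 6))                       # six candidate input features
# only features 0 and 1 carry information about the target
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2 + 0.05 * rng.normal(size=n)

# rank candidates by estimated mutual information with the output
mi = mutual_info_regression(X, y, random_state=0)
top2 = np.argsort(mi)[-2:]                        # keep the two most informative inputs

# data-driven model on the selected subset, as in the SVM-based models of the paper
model = SVR().fit(X[:, top2], y)
```

PMI extends this idea by conditioning each new candidate on the variables already selected, which avoids picking redundant inputs; the ranking step shown here is the common starting point.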
Class identity assignment for amphetamines using neural networks and GC-FTIR data
NASA Astrophysics Data System (ADS)
Gosav, S.; Praisler, M.; Van Bocxlaer, J.; De Leenheer, A. P.; Massart, D. L.
2006-08-01
An exploratory analysis was performed to evaluate the feasibility of building neural network (NN) systems that automate the identification of amphetamines, which is necessary in the investigation of drugs of abuse for epidemiological, clinical and forensic purposes. A first neural network system was built to distinguish between amphetamines and nonamphetamines. A second, more refined system aimed at recognizing amphetamines according to their toxicological activity (stimulant amphetamines, hallucinogenic amphetamines, nonamphetamines). Both systems proved that discrimination between amphetamines and nonamphetamines, as well as between stimulants, hallucinogens and nonamphetamines, is possible (83.44% and 85.71% correct classification rates, respectively). The spectroscopic interpretation of the 40 most important input variables (GC-FTIR absorption intensities) shows that the modeling power of an input variable seems to be correlated with the stability, and not with the intensity, of the spectral interaction. Thus, discarding variables only because they correspond to spectral windows with weak absorptions does not seem advisable.
Coding stimulus amplitude by correlated neural activity
NASA Astrophysics Data System (ADS)
Metzen, Michael G.; Ávila-Åkerberg, Oscar; Chacron, Maurice J.
2015-04-01
While correlated activity is observed ubiquitously in the brain, its role in neural coding has remained controversial. Recent experimental results have demonstrated that correlated but not single-neuron activity can encode the detailed time course of the instantaneous amplitude (i.e., envelope) of a stimulus. These results have furthermore demonstrated that such coding requires, and is optimal for, a nonzero level of neural variability. However, a theoretical understanding of these results is still lacking. Here we provide a comprehensive theoretical framework explaining these experimental findings. Specifically, we use linear response theory to derive an expression relating the correlation coefficient to the instantaneous stimulus amplitude, which takes into account key single-neuron properties such as firing rate and variability as quantified by the coefficient of variation. The theoretical prediction was in excellent agreement with numerical simulations of various integrate-and-fire type neuron models over a range of parameter values. Further, we demonstrate a form of stochastic resonance, in that optimal coding of stimulus variance by correlated activity occurs for a nonzero value of noise intensity. Thus, our results provide a theoretical explanation of the phenomenon by which correlated but not single-neuron activity can code for stimulus amplitude, and of how key single-neuron properties such as firing rate and variability influence such coding. Coding by correlated but not single-neuron activity is thus predicted to be a ubiquitous feature of sensory processing for neurons responding to weak input.
A Load-Based Temperature Prediction Model for Anomaly Detection
NASA Astrophysics Data System (ADS)
Sobhani, Masoud
Electric load forecasting, as a basic requirement for decision-making in power utilities, has been improved in various aspects over the past decades. Many factors may affect the accuracy of the load forecasts, such as data quality, goodness of the underlying model and load composition. Due to the strong correlation between the input variables (e.g., weather and calendar variables) and the load, the quality of input data plays a vital role in forecasting practice. Even if the forecasting model were able to capture most of the salient features of the load, low-quality input data may result in inaccurate forecasts. Most of the data cleansing efforts in the load forecasting literature have been devoted to the load data; few studies have focused on weather data cleansing for load forecasting. This research proposes an anomaly detection method for temperature data. The method consists of two components: a load-based temperature prediction model and a detection technique. The effectiveness of the proposed method is demonstrated through two case studies: one based on the data from the Global Energy Forecasting Competition 2014, and the other based on the data published by ISO New England. The results show that by removing the detected observations from the original input data, the final load forecast accuracy is enhanced.
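The two-component structure (a load-based temperature predictor plus a detection rule) can be sketched as follows. This is a toy illustration, not the dissertation's model: the linear load-temperature relation, the synthetic series, the injected fault, and the z-score threshold of 4 are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 500
temp = 20 + 10 * np.sin(np.linspace(0, 8 * np.pi, n))   # underlying temperature series
load = 50 + 1.5 * (temp - 18) + rng.normal(0, 1, n)     # load assumed linear in temperature
temp_obs = temp.copy()
temp_obs[100] += 25                                     # injected temperature-sensor anomaly

# component 1: load-based temperature prediction model (here, simple linear regression)
A = np.column_stack([np.ones(n), load])
coef, *_ = np.linalg.lstsq(A, temp_obs, rcond=None)

# component 2: detection technique — flag large standardized prediction residuals
resid = temp_obs - A @ coef
z = (resid - resid.mean()) / resid.std()
anomalies = np.flatnonzero(np.abs(z) > 4)
```

Observations flagged this way would then be removed (or imputed) before the temperature series is fed to the load forecasting model.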
When Can Information from Ordinal Scale Variables Be Integrated?
ERIC Educational Resources Information Center
Kemp, Simon; Grace, Randolph C.
2010-01-01
Many theoretical constructs of interest to psychologists are multidimensional and derive from the integration of several input variables. We show that input variables that are measured on ordinal scales cannot be combined to produce a stable weakly ordered output variable that allows trading off the input variables. Instead a partial order is…
Quantum Common Causes and Quantum Causal Models
NASA Astrophysics Data System (ADS)
Allen, John-Mark A.; Barrett, Jonathan; Horsman, Dominic C.; Lee, Ciarán M.; Spekkens, Robert W.
2017-07-01
Reichenbach's principle asserts that if two observed variables are found to be correlated, then there should be a causal explanation of these correlations. Furthermore, if the explanation is in terms of a common cause, then the conditional probability distribution over the variables given the complete common cause should factorize. The principle is generalized by the formalism of causal models, in which the causal relationships among variables constrain the form of their joint probability distribution. In the quantum case, however, the observed correlations in Bell experiments cannot be explained in the manner Reichenbach's principle would seem to demand. Motivated by this, we introduce a quantum counterpart to the principle. We demonstrate that under the assumption that quantum dynamics is fundamentally unitary, if a quantum channel with input A and outputs B and C is compatible with A being a complete common cause of B and C , then it must factorize in a particular way. Finally, we show how to generalize our quantum version of Reichenbach's principle to a formalism for quantum causal models and provide examples of how the formalism works.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hoban, Matty J.; Wallman, Joel J.
We consider general settings of Bell inequality experiments with many parties, where each party chooses from a finite number of measurement settings each with a finite number of outcomes. We investigate the constraints that Bell inequalities place upon the correlations possible in local hidden variable theories using a geometrical picture of correlations. We show that local hidden variable theories can be characterized in terms of limited computational expressiveness, which allows us to characterize families of Bell inequalities. The limited computational expressiveness for many settings (each with many outcomes) generalizes previous results about the many-party situation, each party with a choice of two possible measurements (each with two outcomes). Using this computational picture we present generalizations of the Popescu-Rohrlich nonlocal box for many parties and nonbinary inputs and outputs at each site. Finally, we comment on the effect of preprocessing of measurement data in our generalized setting and show that it becomes problematic outside of the binary setting, in that it allows local hidden variable theories to simulate maximally nonlocal correlations such as those of these generalized Popescu-Rohrlich nonlocal boxes.
NASA Astrophysics Data System (ADS)
Wang, Tingting; Sun, Fubao; Xia, Jun; Liu, Wenbin; Sang, Yanfang
2017-04-01
In predicting how droughts and hydrological cycles will change in a warming climate, the change in atmospheric evaporative demand, as measured by pan evaporation (Epan), is one crucial element to be understood. Over the last decade, the derived partial differential (PD) form of the PenPan equation has been the prevailing approach to attributing changes in Epan worldwide. However, the independence among climatic variables required by the PD approach cannot be met using long-term observations. Here we designed a series of numerical experiments to attribute changes in Epan over China by detrending each climatic variable, i.e., an experimental detrending approach, to address the inter-correlation among climate variables, and compared it with the traditional PD method. The results show that the detrending approach is superior not only for a complicated system with multiple variables and a mixed algorithm, such as the aerodynamic component (Ep,A) and Epan, but also for a simple case such as the radiative component (Ep,R). The major reason for this is the strong and significant inter-correlation of the input meteorological forcing. Very similar and accurate attribution results were achieved with the detrending approach and the PD method after eliminating the inter-correlation of the inputs through a randomization procedure. The contributions of Rh and Ta to net radiation, and thus to Ep,R, which are overlooked by the PD method but successfully detected by the detrending approach, provide some explanation for these comparisons. We adopted the control run from the detrending approach and applied it to adjust the PD method. Much improvement was achieved, proving this adjustment an effective way of attributing changes in Epan. Hence, the detrending approach and the adjusted PD method are recommended for attributing changes in hydrological models to better understand and predict the water and energy cycles.
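The detrending experiment can be sketched generically: detrend one forcing variable at a time, rerun the model, and read off that variable's contribution as the trend it removes. Everything here is an assumption for illustration — the toy linear evaporation model (not the PenPan equation), the synthetic trends, and the variable names.

```python
import numpy as np

def epan(ta, rh, u):
    """Toy linear pan-evaporation model (illustrative only, not the PenPan equation)."""
    return 0.3 * ta - 0.2 * rh + 0.1 * u

def detrend(x):
    t = np.arange(len(x))
    slope, _ = np.polyfit(t, x, 1)
    return x - slope * t                                # remove the fitted linear trend

rng = np.random.default_rng(3)
t = np.arange(120)
ta = 15 + 0.02 * t + rng.normal(0, 0.5, 120)            # warming air temperature
rh = 60 - 0.05 * t + rng.normal(0, 1.0, 120)            # drying relative humidity
u = 2 + rng.normal(0, 0.2, 120)                         # trendless wind speed

base_trend = np.polyfit(t, epan(ta, rh, u), 1)[0]       # trend of the full ("base") run
forcing = {"ta": ta, "rh": rh, "u": u}
contrib = {}
for name in forcing:
    runs = dict(forcing)
    runs[name] = detrend(forcing[name])                 # control run: one input detrended
    contrib[name] = base_trend - np.polyfit(t, epan(**runs), 1)[0]
```

Unlike a partial-derivative attribution, this experimental design makes no independence assumption among the inputs: each contribution is simply the model trend lost when that input's trend is removed, and for a linear model the contributions sum exactly to the base trend.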
Narrative skills in two languages of Mandarin-English bilingual children.
Hao, Ying; Bedore, Lisa M; Sheng, Li; Peña, Elizabeth D
2018-03-08
Narrative skills between Mandarin and English in Mandarin-English (ME) bilingual children were compared, exploring cross-linguistic interactions of these skills, and influences of age and current language experience (input and output) on narrative performance. Macrostructure and microstructure in elicited narratives from 21 ME bilingual children were analysed. Language experience was collected by parent report and entered as a covariate. Repeated measures analysis of covariance (ANCOVA) was conducted to compare the two languages. Children demonstrated better narrative performance in English than Mandarin, with a larger cross-linguistic difference in microstructure than macrostructure. Significant cross-linguistic correlations were only found in children with high Mandarin vocabulary. Age, associated with length of English exposure, only significantly correlated with narrative performance in English. Output had stronger correlations with narrative skills than input. Macrostructure may be less variable across languages than microstructure. Children may need to reach a threshold of vocabulary for cross-linguistic interactions of narrative skills to occur. The effect of age in English may be related to increased cumulative English experience. Children may experience a plateau in Mandarin due to insufficient Mandarin exposure. Stronger correlations between output and narrative skills may be attributed to the expressive nature of both.
Optimization of a GO2/GH2 Swirl Coaxial Injector Element
NASA Technical Reports Server (NTRS)
Tucker, P. Kevin; Shyy, Wei; Vaidyanathan, Rajkumar
1999-01-01
An injector optimization methodology, method i, is used to investigate optimal design points for a gaseous oxygen/gaseous hydrogen (GO2/GH2) swirl coaxial injector element. The element is optimized in terms of design variables such as fuel pressure drop, DELTA P(sub f), oxidizer pressure drop, DELTA P(sub 0), combustor length, L(sub comb), and full cone swirl angle, theta, for a given mixture ratio and chamber pressure. Dependent variables such as energy release efficiency, ERE, wall heat flux, Q(sub w), injector heat flux, Q(sub inj), relative combustor weight, W(sub rel), and relative injector cost, C(sub rel), are calculated and then correlated with the design variables. An empirical design methodology is used to generate these responses for 180 combinations of input variables. Method i is then used to generate response surfaces for each dependent variable. Desirability functions based on dependent variable constraints are created and used to facilitate development of composite response surfaces representing some, or all, of the five dependent variables in terms of the input variables. Two examples illustrating the utility and flexibility of method i are discussed in detail. First, joint response surfaces are constructed by sequentially adding dependent variables. Optimum designs are identified after the addition of each variable, and the effect each variable has on the design is shown. This stepwise demonstration also highlights the importance of including variables such as weight and cost early in the design process. Second, using the composite response surface that includes all five dependent variables, unequal weights are assigned to emphasize certain variables relative to others. Here, method i is used to enable objective trade studies on design issues such as component life and thrust-to-weight ratio.
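The response-surface-plus-desirability idea can be sketched in a few lines. This is a generic illustration, not method i itself: the two normalized design variables, the synthetic ERE-like response with an interior optimum, and the linear rescaling used as a desirability score are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)
m = 180                                          # 180 input combinations, as in the study
x1 = rng.uniform(-1, 1, m)                       # e.g., normalized fuel pressure drop
x2 = rng.uniform(-1, 1, m)                       # e.g., normalized swirl angle
# synthetic ERE-like response with an interior optimum near x1 = 0.3, x2 = 0
ere = 0.95 - 0.10 * (x1 - 0.3) ** 2 - 0.05 * x2 ** 2 + 0.005 * rng.normal(size=m)

# quadratic response surface fitted by least squares
A = np.column_stack([np.ones(m), x1, x2, x1**2, x2**2, x1 * x2])
b, *_ = np.linalg.lstsq(A, ere, rcond=None)

# a simple desirability score: rescale the predicted response onto [0, 1]
pred = A @ b
desirability = (pred - pred.min()) / (pred.max() - pred.min())
```

With several dependent variables, one such desirability function per response would be combined (typically as a geometric mean, with unequal weights to emphasize some responses) into the composite surface that the trade studies search.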
The application of statistically designed experiments to resistance spot welding
NASA Technical Reports Server (NTRS)
Hafley, Robert A.; Hales, Stephen J.
1991-01-01
State-of-the-art Resistance Spot Welding (RSW) equipment has the potential to permit real-time monitoring of operations through advances in computerized process control. In order to realize adaptive feedback capabilities, it is necessary to establish correlations among process variables, welder outputs, and weldment properties. The initial step toward achieving this goal must involve assessment of the effect of specific process inputs, and the interactions among these variables, on spot weld characteristics. This investigation evaluated these effects through the application of a statistically designed experiment to the RSW process. A half-factorial, Taguchi L(sub 16) design was used to understand and refine a RSW schedule developed for welding dissimilar aluminum-lithium alloys of different thickness. The baseline schedule had been established previously by traditional trial-and-error methods based on engineering judgment and one-factor-at-a-time studies. A hierarchy of inputs with respect to each other was established, and the significance of these inputs with respect to experimental noise was determined. Useful insight was gained into the effect of interactions among process variables, particularly with respect to weldment defects. The effects of equipment-related changes associated with disassembly and recalibration were also identified. In spite of an apparent decrease in equipment performance, a significant improvement over the baseline schedule in the maximum strength for defect-free welds was achieved.
NASA Astrophysics Data System (ADS)
Peterson, Fox S.; Lajtha, Kate J.
2013-07-01
Factors influencing soil organic matter (SOM) stabilization and dissolved organic carbon (DOC) content in complex terrain, where vegetation, climate, and topography vary over the scale of a few meters, are not well understood. We examined the spatial correlations of lidar- and geographic information system-derived landscape topography, empirically measured soil characteristics, and current and historical vegetation composition and structure versus SOM fractions and DOC pools and leaching on a small catchment (WS1) in the H.J. Andrews Experimental Forest, located in the western Cascade Range of Oregon, USA. We predicted that aboveground net primary productivity (ANPP), litter fall, and nitrogen mineralization would be positively correlated with SOM, DOC, and the carbon (C) content of the soil, based on the principle that increased C inputs increase C stores in, and losses from, the soil. We expected that, in tandem, certain microtopographical and microclimatic characteristics might be associated with elevated C inputs and, correspondingly, soil C stores and losses. We confirmed that on this site positive relationships exist between ANPP, C inputs (litter fall), and losses (exportable DOC), but we did not find that these relationships between ANPP, inputs, and exports were translated to SOM stores (mg C/g soil), C content of the soil (% C/g soil), or DOC pools (determined with salt and water extractions). We suggest that the biogeochemical processes controlling C storage and lability in soil may relate to longer-term variability in aboveground inputs that results from a heterogeneous and evolving forest stand.
Optimisation of Fabric Reinforced Polymer Composites Using a Variant of Genetic Algorithm
NASA Astrophysics Data System (ADS)
Axinte, Andrei; Taranu, Nicolae; Bejan, Liliana; Hudisteanu, Iuliana
2017-12-01
Fabric-reinforced polymeric composites are high-performance materials with a rather complex fabric geometry. Modelling this type of material is therefore a cumbersome task, especially when efficient use is targeted. One of the most important issues in the design process is the optimisation of the individual laminae and of the laminated structure as a whole. To that end, a parametric model of the material has been defined, emphasising the many geometric variables that need to be correlated in the complex process of optimisation. The input parameters involved in this work include the widths and heights of the tows and the laminate stacking sequence, which are discrete variables, and the gaps between adjacent tows and the height of the neat matrix, which are continuous variables. This work is one of the first attempts to use a Genetic Algorithm (GA) to optimise the geometrical parameters of satin-reinforced multi-layer composites. Given the mixed type of the input parameters involved, an original software package called SOMGA (Satin Optimisation with a Modified Genetic Algorithm) has been conceived and utilised in this work. The main goal is to find the best possible solution to the problem of designing a composite material that is able to withstand a given set of external, in-plane loads. The optimisation process has been performed using a fitness function that can analyse and compare the mechanical behaviour of different fabric-reinforced composites, the results being correlated with the ultimate strains, which demonstrates the efficiency of the composite structure.
Analysis of fMRI data using noise-diffusion network models: a new covariance-coding perspective.
Gilson, Matthieu
2018-04-01
Since the middle of the 1990s, studies of resting-state fMRI/BOLD data have explored the correlation patterns of activity across the whole brain, which is referred to as functional connectivity (FC). Among the many methods that have been developed to interpret FC, a recently proposed model-based approach describes the propagation of fluctuating BOLD activity within the recurrently connected brain network by inferring the effective connectivity (EC). In this model, EC quantifies the strengths of directional interactions between brain regions, viewed from the proxy of BOLD activity. In addition, the tuning procedure for the model provides estimates of the local variability (input variances) to explain how the observed FC is generated. Generalizing, the network dynamics can be studied in the context of an input-output mapping, determined by EC, for the second-order statistics of fluctuating nodal activities. The present paper focuses on the following detection paradigm: observing output covariances, how discriminative is the (estimated) network model with respect to various input covariance patterns? An application with the model fitted to experimental fMRI data (movie viewing versus resting state) illustrates that changes in local variability and changes in brain coordination go hand in hand.
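The input-output mapping for second-order statistics in this class of noise-diffusion (multivariate Ornstein-Uhlenbeck) models has a closed form: the stationary covariance solves a continuous Lyapunov equation. A minimal sketch, with an illustrative three-node network; the connectivity values, time constant, and input variances are all assumed numbers, not fitted parameters.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

n = 3
tau = 1.0                                        # nodal time constant (assumed)
C = np.array([[0.0, 0.3, 0.0],                   # effective connectivity EC (illustrative)
              [0.0, 0.0, 0.4],
              [0.2, 0.0, 0.0]])
Sigma = np.diag([1.0, 0.5, 0.8])                 # input (noise) variances per region

J = -np.eye(n) / tau + C                         # Jacobian of the linear network dynamics
# stationary covariance Q of the OU process solves J Q + Q J^T = -Sigma
Q = solve_continuous_lyapunov(J, -Sigma)
FC = Q / np.sqrt(np.outer(np.diag(Q), np.diag(Q)))  # model "FC" (correlation matrix)
```

Model fitting inverts this mapping: given an observed covariance, EC and the input variances are tuned until the predicted Q matches it, which is what makes the input covariance patterns discriminable through the output.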
Kim, HyungGoo R.; Pitkow, Xaq; Angelaki, Dora E.
2016-01-01
Sensory input reflects events that occur in the environment, but multiple events may be confounded in sensory signals. For example, under many natural viewing conditions, retinal image motion reflects some combination of self-motion and movement of objects in the world. To estimate one stimulus event and ignore others, the brain can perform marginalization operations, but the neural bases of these operations are poorly understood. Using computational modeling, we examine how multisensory signals may be processed to estimate the direction of self-motion (i.e., heading) and to marginalize out effects of object motion. Multisensory neurons represent heading based on both visual and vestibular inputs and come in two basic types: “congruent” and “opposite” cells. Congruent cells have matched heading tuning for visual and vestibular cues and have been linked to perceptual benefits of cue integration during heading discrimination. Opposite cells have mismatched visual and vestibular heading preferences and are ill-suited for cue integration. We show that decoding a mixed population of congruent and opposite cells substantially reduces errors in heading estimation caused by object motion. In addition, we present a general formulation of an optimal linear decoding scheme that approximates marginalization and can be implemented biologically by simple reinforcement learning mechanisms. We also show that neural response correlations induced by task-irrelevant variables may greatly exceed intrinsic noise correlations. Overall, our findings suggest a general computational strategy by which neurons with mismatched tuning for two different sensory cues may be decoded to perform marginalization operations that dissociate possible causes of sensory inputs. PMID:27334948
Chemical ecology of red mangroves, Rhizophora mangle, in the Hawaiian Islands
Fry, Brian; Cormier, Nicole
2011-01-01
The coastal red mangrove, Rhizophora mangle L., was introduced to the Hawaiian Islands from Florida 100 yr ago and has spread to cover many shallow intertidal shorelines that once were unvegetated mudflats. We used a field survey approach to test whether mangroves at the land-ocean interface could indicate watershed inputs, especially whether measurements of leaf chemistry could identify coasts with high nutrient inputs and high mangrove productivities. During 2001-2002, we sampled mangroves on dry leeward coasts of southern Moloka'i and O'ahu for 14 leaf variables including stable carbon and nitrogen isotopes (delta13C, delta15N), macronutrients (C, N, P), trace elements (B, Mn, Fe, Cu, Zn), and cations (Na, Mg, K, Ca). A new modeling approach using leaf Na, N, P, and delta13C indicated two times higher productivity for mangroves in urban versus rural settings, with rural mangroves more limited by low N and P nutrients and high-nutrient urban mangroves more limited by freshwater inputs and salt stress. Leaf chemistry also helped identify other aspects of mangrove dynamics: especially leaf delta15N values helped identify groundwater N inputs, and a combination of strongly correlated variables (C, N, P, B, Cu, Mg, K, Ca) tracked the mangrove growth response to nutrient loading. Overall, the chemical marker approach is an efficient way to survey watershed forcing of mangrove forest dynamics.
NASA Astrophysics Data System (ADS)
Afrand, Masoud; Hemmat Esfe, Mohammad; Abedini, Ehsan; Teimouri, Hamid
2017-03-01
The current paper first presents an empirical correlation, obtained by curve fitting to experimental results, for estimating the thermal conductivity enhancement of MgO-water nanofluid. Then, artificial neural networks (ANNs) with various numbers of neurons were assessed, with temperature and MgO volume fraction as the input variables and thermal conductivity enhancement as the output variable, to select the most appropriate and optimized network. Results indicated that the network with 7 neurons had the minimum error. Eventually, the output of the artificial neural network was compared with the results of the proposed empirical correlation and those of the experiments. Comparisons revealed that ANN modeling was more accurate than the curve-fitting method in predicting the thermal conductivity enhancement of the nanofluid.
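The workflow above can be sketched in a few lines. The following Python example fits a one-hidden-layer network with 7 neurons (the size the study selected) and a quadratic curve fit to synthetic temperature/volume-fraction data; the data-generating formula and all coefficients are hypothetical stand-ins, not the paper's measurements.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Synthetic stand-in for the measured data: temperature (deg C) and MgO
# volume fraction (%) as inputs, conductivity enhancement (%) as output.
T = rng.uniform(20, 60, 200)
phi = rng.uniform(0.1, 1.0, 200)
enhancement = 2.0 + 0.15 * T * phi + 5.0 * phi + rng.normal(0, 0.5, 200)

X = np.column_stack([T, phi])
scaler = StandardScaler().fit(X)

# ANN with a single hidden layer of 7 neurons, the size the study selected.
ann = MLPRegressor(hidden_layer_sizes=(7,), max_iter=5000, random_state=0)
ann.fit(scaler.transform(X), enhancement)
pred_ann = ann.predict(scaler.transform(X))

# Empirical-correlation baseline: quadratic curve fit by least squares.
basis = np.column_stack([np.ones_like(T), T, phi, T * phi, phi ** 2])
coeffs = np.linalg.lstsq(basis, enhancement, rcond=None)[0]
pred_fit = basis @ coeffs

for name, p in [("ANN", pred_ann), ("curve fit", pred_fit)]:
    ss_res = ((enhancement - p) ** 2).sum()
    ss_tot = ((enhancement - enhancement.mean()) ** 2).sum()
    print(f"{name:9s} R^2 = {1 - ss_res / ss_tot:.3f}")
```

On real data, the comparison would be made on held-out measurements rather than the training set shown here.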
Response sensitivity of barrel neuron subpopulations to simulated thalamic input.
Pesavento, Michael J; Rittenhouse, Cynthia D; Pinto, David J
2010-06-01
Our goal is to examine the relationship between neuron- and network-level processing in the context of a well-studied cortical function, the processing of thalamic input by whisker-barrel circuits in rodent neocortex. Here we focus on neuron-level processing and investigate the responses of excitatory and inhibitory barrel neurons to simulated thalamic inputs applied using the dynamic clamp method in brain slices. Simulated inputs are modeled after real thalamic inputs recorded in vivo in response to brief whisker deflections. Our results suggest that inhibitory neurons require more input to reach firing threshold, but then fire earlier, with less variability, and respond to a broader range of inputs than do excitatory neurons. Differences in the responses of barrel neuron subtypes depend on their intrinsic membrane properties. Neurons with a low input resistance require more input to reach threshold but then fire earlier than neurons with a higher input resistance, regardless of the neuron's classification. Our results also suggest that the response properties of excitatory versus inhibitory barrel neurons are consistent with the response sensitivities of the ensemble barrel network. The short response latency of inhibitory neurons may serve to suppress ensemble barrel responses to asynchronous thalamic input. Correspondingly, whereas neurons acting as part of the barrel circuit in vivo are highly selective for temporally correlated thalamic input, excitatory barrel neurons acting alone in vitro are less so. These data suggest that network-level processing of thalamic input in barrel cortex depends on neuron-level processing of the same input by excitatory and inhibitory barrel neurons.
NASA Astrophysics Data System (ADS)
Shoaib, Syed Abu; Marshall, Lucy; Sharma, Ashish
2018-06-01
Every model to characterise a real world process is affected by uncertainty. Selecting a suitable model is a vital aspect of engineering planning and design. Observation or input errors make the prediction of modelled responses more uncertain. By way of a recently developed attribution metric, this study is aimed at developing a method for analysing variability in model inputs together with model structure variability to quantify their relative contributions in typical hydrological modelling applications. The Quantile Flow Deviation (QFD) metric is used to assess these alternate sources of uncertainty. The Australian Water Availability Project (AWAP) precipitation data for four different Australian catchments is used to analyse the impact of spatial rainfall variability on simulated streamflow variability via the QFD. The QFD metric attributes the variability in flow ensembles to uncertainty associated with the selection of a model structure and input time series. For the case study catchments, the relative contribution of input uncertainty due to rainfall is higher than that due to potential evapotranspiration, and overall input uncertainty is significant compared to model structure and parameter uncertainty. Overall, this study investigates the propagation of input uncertainty in a daily streamflow modelling scenario and demonstrates how input errors manifest across different streamflow magnitudes.
Direct connections assist neurons to detect correlation in small amplitude noises
Bolhasani, E.; Azizi, Y.; Valizadeh, A.
2013-01-01
We address a question on the effect of common stochastic inputs on the correlation of the spike trains of two neurons when they are coupled through direct connections. We show that the change in the correlation of small amplitude stochastic inputs can be better detected when the neurons are connected by direct excitatory couplings. Depending on whether intrinsic firing rate of the neurons is identical or slightly different, symmetric or asymmetric connections can increase the sensitivity of the system to the input correlation by changing the mean slope of the correlation transfer function over a given range of input correlation. In either case, there is also an optimum value for synaptic strength which maximizes the sensitivity of the system to the changes in input correlation. PMID:23966940
Neural Classifiers for Learning Higher-Order Correlations
NASA Astrophysics Data System (ADS)
Güler, Marifi
1999-01-01
Studies by various authors suggest that higher-order networks can be more powerful, and are biologically more plausible, than the more traditional multilayer networks. These architectures make explicit use of nonlinear interactions between input variables in the form of higher-order units or product units. If it is known a priori that the problem to be implemented possesses a given set of invariances, as in translation-, rotation-, and scale-invariant pattern recognition problems, those invariances can be encoded, thus eliminating all higher-order terms that are incompatible with them. In general, however, it is a serious setback that the complexity of learning increases exponentially with the size of the inputs. This paper reviews higher-order networks and introduces an implicit representation in which learning complexity is mainly decided by the number of higher-order terms to be learned and increases only linearly with the input size.
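A minimal sketch of a second-order (sigma-pi) unit makes the idea concrete: augmenting the inputs with their pairwise products lets a single linear threshold unit solve XOR, which is linearly inseparable in the raw inputs. The explicit enumeration of products also shows the source of the exponential growth noted above: order-k terms number C(n, k) for n inputs.

```python
import numpy as np
from itertools import combinations

def sigma_pi_features(x):
    """First- and second-order terms for a higher-order unit:
    the raw inputs x_i plus all pairwise products x_i * x_j, i < j."""
    pairs = [x[i] * x[j] for i, j in combinations(range(len(x)), 2)]
    return np.concatenate([x, pairs])

# XOR: not linearly separable in the raw inputs, but separable once the
# product term x0*x1 is made explicit.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
y = np.array([0, 1, 1, 0], float)

Phi = np.array([sigma_pi_features(x) for x in X])  # columns: x0, x1, x0*x1
A = np.column_stack([Phi, np.ones(len(Phi))])      # plus a bias column
w = np.linalg.lstsq(A, y, rcond=None)[0]           # least-squares weights
pred = (A @ w > 0.5).astype(int)
print(pred)  # [0 1 1 0], matching the XOR targets
```

Here the exact solution is y = x0 + x1 - 2*x0*x1, so the augmented representation fits XOR with a single linear readout.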
NASA Astrophysics Data System (ADS)
Hale, R. L.; Grimm, N. B.; Vorosmarty, C. J.
2014-12-01
An ongoing challenge for society is to harness the benefits of phosphorus (P) while minimizing negative effects on downstream ecosystems. To meet this challenge we must understand the controls on the delivery of anthropogenic P from landscapes to downstream ecosystems. We used a model that incorporates P inputs to watersheds, hydrology, and infrastructure (sewers, waste-water treatment plants, and reservoirs) to reconstruct historic P yields for the northeastern U.S. from 1930 to 2002. At the regional scale, increases in P inputs were paralleled by increased fractional retention, thus P loading to the coast did not increase significantly. We found that temporal variation in regional P yield was correlated with P inputs. Spatial patterns of watershed P yields were best predicted by inputs, but the correlation between inputs and yields in space weakened over time, due to infrastructure development. Although the magnitude of infrastructure effect was small, its role changed over time and was important in creating spatial and temporal heterogeneity in input-yield relationships. We then conducted a hierarchical cluster analysis to identify a typology of anthropogenic P cycling, using data on P inputs (fertilizer, livestock feed, and human food), infrastructure (dams, wastewater treatment plants, sewers), and hydrology (runoff coefficient). We identified 6 key types of watersheds that varied significantly in climate, infrastructure, and the types and amounts of P inputs. Annual watershed P yields and retention varied significantly across watershed types. Although land cover varied significantly across typologies, clusters based on land cover alone did not explain P budget patterns, suggesting that this variable is insufficient to understand patterns of P cycling across large spatial scales. Furthermore, clusters varied over time as patterns of climate, P use, and infrastructure changed. 
Our results demonstrate that the drivers of P cycles are spatially and temporally heterogeneous, yet they also suggest that a relatively simple typology of watersheds can be useful for understanding regional P cycles and may help inform P management approaches.
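The hierarchical clustering step described above can be sketched as follows, assuming a feature table of P inputs, infrastructure, and hydrology per watershed; the synthetic data and feature names are hypothetical, with only the cut into 6 types taken from the study.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(3)

# Hypothetical watershed feature table: P inputs (fertilizer, livestock
# feed, human food), infrastructure (dam storage, % sewered), and the
# runoff coefficient -- one row per watershed.
features = rng.normal(size=(60, 6))

# Standardize features, build a Ward linkage tree, and cut it into
# 6 clusters (the number of watershed types identified in the study).
z = (features - features.mean(axis=0)) / features.std(axis=0)
tree = linkage(z, method="ward")
types = fcluster(tree, t=6, criterion="maxclust")
print("watersheds per type:", np.bincount(types)[1:])
```

With real data, the cluster labels would then be tested against P yields and retention to check that the typology carries budget information, as the study does.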
Spatial uncertainty analysis: Propagation of interpolation errors in spatially distributed models
Phillips, D.L.; Marks, D.G.
1996-01-01
In simulation modelling, it is desirable to quantify model uncertainties and provide not only point estimates for output variables but confidence intervals as well. Spatially distributed physical and ecological process models are becoming widely used, with runs being made over a grid of points that represent the landscape. This requires input values at each grid point, which often have to be interpolated from irregularly scattered measurement sites, e.g., weather stations. Interpolation introduces spatially varying errors which propagate through the model. We extended established uncertainty analysis methods to a spatial domain for quantifying spatial patterns of input variable interpolation errors and how they propagate through a model to affect the uncertainty of the model output. We applied this to a model of potential evapotranspiration (PET) as a demonstration. We modelled PET for three time periods in 1990 as a function of temperature, humidity, and wind on a 10-km grid across the U.S. portion of the Columbia River Basin. Temperature, humidity, and wind speed were interpolated using kriging from 700-1000 supporting data points. Kriging standard deviations (SD) were used to quantify the spatially varying interpolation uncertainties. For each of 5693 grid points, 100 Monte Carlo simulations were done, using the kriged values of temperature, humidity, and wind, plus random error terms determined by the kriging SDs and the correlations of interpolation errors among the three variables. For the spring season example, kriging SDs averaged 2.6 °C for temperature, 8.7% for relative humidity, and 0.38 m s-1 for wind. The resultant PET estimates had coefficients of variation (CVs) ranging from 14% to 27% for the 10-km grid cells. Maps of PET means and CVs showed the spatial patterns of PET with a measure of its uncertainty due to interpolation of the input variables. 
This methodology should be applicable to a variety of spatially distributed models using interpolated inputs.
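The per-grid-point Monte Carlo step can be sketched as below for a single cell. The kriging SDs are the spring-season averages quoted in the abstract; the cell means, the inter-variable error correlations, and the PET formula are hypothetical stand-ins for the study's actual values and model.

```python
import numpy as np

rng = np.random.default_rng(42)

# Kriged means at one grid cell (hypothetical) and the kriging SDs
# (spring-season averages from the study: 2.6 degC, 8.7 %RH, 0.38 m/s).
mean = np.array([12.0, 55.0, 3.0])      # temperature, rel. humidity, wind
sd = np.array([2.6, 8.7, 0.38])
corr = np.array([[1.0, -0.3, 0.1],      # assumed interpolation-error
                 [-0.3, 1.0, 0.2],      # correlations among the three
                 [0.1, 0.2, 1.0]])      # input variables
cov = corr * np.outer(sd, sd)

def pet(temp, rh, wind):
    """Toy stand-in for the PET model (not the study's formulation)."""
    return np.maximum(0.0, 0.3 * temp + 0.05 * wind * (100.0 - rh))

# 100 Monte Carlo realizations per grid point, as in the study:
# kriged values plus correlated random error terms.
samples = rng.multivariate_normal(mean, cov, size=100)
pet_draws = pet(samples[:, 0], samples[:, 1], samples[:, 2])
cv = pet_draws.std() / pet_draws.mean()
print(f"PET CV at this cell: {cv:.1%}")
```

Repeating this over every grid cell, with spatially varying kriging SDs, yields the maps of PET means and CVs described above.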
Tainio, Marko; Tuomisto, Jouni T; Hänninen, Otto; Ruuskanen, Juhani; Jantunen, Matti J; Pekkanen, Juha
2007-01-01
Background The estimation of health impacts often involves uncertain input variables and assumptions which have to be incorporated into the model structure. These uncertainties may have significant effects on the results obtained with the model and, thus, on decision making. Fine particles (PM2.5) are believed to cause major health impacts and, consequently, uncertainties in their health impact assessment have clear relevance to policy-making. We studied the effects of various uncertain input variables by building a life-table model for fine particles. Methods Life-expectancy of the Helsinki metropolitan area population and the change in life-expectancy due to fine particle exposures were predicted using a life-table model. A number of parameter and model uncertainties were estimated. Sensitivity analysis for input variables was performed by calculating rank-order correlations between input and output variables. The studied model uncertainties were (i) plausibility of mortality outcomes and (ii) lag, and the parameter uncertainties were (iii) exposure-response coefficients for different mortality outcomes and (iv) exposure estimates for different age groups. The monetary value of the years-of-life-lost was also predicted, to compare the relative importance of the uncertainties related to monetary valuation with that of the health effect uncertainties. Results The magnitude of the health effect costs depended mostly on the discount rate, the exposure-response coefficient, and the plausibility of cardiopulmonary mortality. Other mortality outcomes (lung cancer, other non-accidental and infant mortality) and lag had only a minor impact on the output. The results highlight the importance of the uncertainties associated with cardiopulmonary mortality in fine particle impact assessment when compared with other uncertainties. 
Conclusion When estimating life-expectancy, the estimates used for cardiopulmonary exposure-response coefficient, discount rate, and plausibility require careful assessment, while complicated lag estimates can be omitted without this having any major effect on the results. PMID:17714598
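The rank-order-correlation sensitivity analysis used above has a simple generic form: sample the uncertain inputs, run the model, and compute Spearman correlations between each input and the output. The sketch below uses a toy discounted years-of-life-lost formula and hypothetical input distributions, not the study's life-table model.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
n = 5000

# Hypothetical uncertain inputs of a simplified impact calculation.
exposure_response = rng.lognormal(mean=np.log(0.01), sigma=0.3, size=n)
discount_rate = rng.uniform(0.0, 0.05, size=n)
lag_years = rng.integers(0, 15, size=n)

# Toy output: discounted years of life lost (not the study's model).
yll = exposure_response * 1e4 * np.exp(-discount_rate * lag_years)

for name, x in [("exposure-response", exposure_response),
                ("discount rate", discount_rate),
                ("lag", lag_years)]:
    rho, _ = spearmanr(x, yll)
    print(f"{name:18s} rank correlation with output: {rho:+.2f}")
```

Inputs with rank correlations near zero, like lag in the study's findings, are candidates for simplification without materially affecting the results.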
Balanced Synaptic Input Shapes the Correlation between Neural Spike Trains
Litwin-Kumar, Ashok; Oswald, Anne-Marie M.; Urban, Nathaniel N.; Doiron, Brent
2011-01-01
Stimulus properties, attention, and behavioral context influence correlations between the spike times produced by a pair of neurons. However, the biophysical mechanisms that modulate these correlations are poorly understood. With a combined theoretical and experimental approach, we show that the rate of balanced excitatory and inhibitory synaptic input modulates the magnitude and timescale of pairwise spike train correlation. High-rate synaptic inputs promote spike time synchrony rather than long-timescale spike rate correlations, while low-rate synaptic inputs produce the opposite. This correlation shaping is due to a combination of enhanced high-frequency input transfer and reduced firing rate gain in the high input rate state compared to the low state. Our study extends neural modulation from single neuron responses to population activity, a necessary step in understanding how the dynamics and processing of neural activity change across distinct brain states. PMID:22215995
Spatial interpolation schemes of daily precipitation for hydrologic modeling
Hwang, Y.; Clark, M.R.; Rajagopalan, B.; Leavesley, G.
2012-01-01
Distributed hydrologic models typically require spatial estimates of precipitation interpolated from sparsely located observational points to the specific grid points. We compare and contrast the performance of regression-based statistical methods for the spatial estimation of precipitation in two hydrologically different basins and confirm that widely used regression-based estimation schemes fail to describe the realistic spatial variability of the daily precipitation field. The methods assessed are: (1) inverse distance weighted average; (2) multiple linear regression (MLR); (3) climatological MLR; and (4) locally weighted polynomial regression (LWP). In order to improve the performance of the interpolations, the authors propose a two-step regression technique for effective daily precipitation estimation. In this simple two-step estimation process, precipitation occurrence is first generated via a logistic regression model before the amount of precipitation is estimated separately on wet days. This process generated the precipitation occurrence, amount, and spatial correlation effectively. A distributed hydrologic model (PRMS) was used for the impact analysis in daily time step simulation. Multiple simulations suggested noticeable differences between the input alternatives generated by three different interpolation schemes. Differences are shown in overall simulation error against the observations, degree of explained variability, and seasonal volumes. Simulated streamflows also showed different characteristics in mean, maximum, minimum, and peak flows. Given the same parameter optimization technique, LWP input showed the least streamflow error in the Alapaha basin and CMLR input showed the least error (still very close to LWP) in the Animas basin. All of the two-step interpolation inputs resulted in lower streamflow error compared to the directly interpolated inputs. © 2011 Springer-Verlag.
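The two-step scheme can be sketched as follows: a logistic regression predicts precipitation occurrence, a regression fitted on wet days only predicts the (log) amount, and the two are combined. The predictors and the data-generating process below are synthetic placeholders, not the paper's basin data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression, LinearRegression

rng = np.random.default_rng(7)
n = 1000

# Hypothetical predictors at a target grid point, e.g. precipitation at
# nearby gauges and an elevation difference (synthetic data).
X = rng.normal(size=(n, 3))
occur_prob = 1.0 / (1.0 + np.exp(-(X[:, 0] + 0.5 * X[:, 1] - 0.2)))
wet = rng.random(n) < occur_prob
amount = np.where(
    wet, np.exp(0.8 * X[:, 0] + 0.3 * X[:, 2] + rng.normal(0, 0.3, n)), 0.0)

# Step 1: model precipitation occurrence with logistic regression.
occ_model = LogisticRegression().fit(X, wet)

# Step 2: model the log-amount on wet days only.
amt_model = LinearRegression().fit(X[wet], np.log(amount[wet]))

# Combined estimate: P(wet) times the back-transformed wet-day amount.
p_wet = occ_model.predict_proba(X)[:, 1]
est = p_wet * np.exp(amt_model.predict(X))
print("mean estimated precipitation:", est.mean())
```

Separating occurrence from amount is what lets the interpolated field reproduce realistic intermittency, i.e. dry days, rather than smearing small nonzero amounts everywhere.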
Seshagiri, Chandran V.; Delgutte, Bertrand
2007-01-01
The complex anatomical structure of the central nucleus of the inferior colliculus (ICC), the principal auditory nucleus in the midbrain, may provide the basis for functional organization of auditory information. To investigate this organization, we used tetrodes to record from neighboring neurons in the ICC of anesthetized cats and studied the similarity and difference among the responses of these neurons to pure-tone stimuli using widely used physiological characterizations. Consistent with the tonotopic arrangement of neurons in the ICC and reports of a threshold map, we found a high degree of correlation in the best frequencies (BFs) of neighboring neurons, which were mostly <3 kHz in our sample, and the pure-tone thresholds among neighboring neurons. However, width of frequency tuning, shapes of the frequency response areas, and temporal discharge patterns showed little or no correlation among neighboring neurons. Because the BF and threshold are measured at levels near the threshold and the characteristic frequency (CF), neighboring neurons may receive similar primary inputs tuned to their CF; however, at higher levels, additional inputs from other frequency channels may be recruited, introducing greater variability in the responses. There was also no correlation among neighboring neurons' sensitivity to interaural time differences (ITD) measured with binaural beats. However, the characteristic phases (CPs) of neighboring neurons revealed a significant correlation. Because the CP is related to the neural mechanisms generating the ITD sensitivity, this result is consistent with segregation of inputs to the ICC from the lateral and medial superior olives. PMID:17671101
Laminar Organization of Attentional Modulation in Macaque Visual Area V4.
Nandy, Anirvan S; Nassi, Jonathan J; Reynolds, John H
2017-01-04
Attention is critical to perception, serving to select behaviorally relevant information for privileged processing. To understand the neural mechanisms of attention, we must discern how attentional modulation varies by cell type and across cortical layers. Here, we test whether attention acts non-selectively across cortical layers or whether it engages the laminar circuit in specific and selective ways. We find layer- and cell-class-specific differences in several different forms of attentional modulation in area V4. Broad-spiking neurons in the superficial layers exhibit attention-mediated increases in firing rate and decreases in variability. Spike count correlations are highest in the input layer and attention serves to reduce these correlations. Superficial and input layer neurons exhibit attention-dependent decreases in low-frequency (<10 Hz) coherence, but deep layer neurons exhibit increases in coherence in the beta and gamma frequency ranges. Our study provides a template for attention-mediated laminar information processing that might be applicable across sensory modalities. Copyright © 2017 Elsevier Inc. All rights reserved.
A neural circuit mechanism for regulating vocal variability during song learning in zebra finches.
Garst-Orozco, Jonathan; Babadi, Baktash; Ölveczky, Bence P
2014-12-15
Motor skill learning is characterized by improved performance and reduced motor variability. The neural mechanisms that couple skill level and variability, however, are not known. The zebra finch, a songbird, presents a unique opportunity to address this question because production of learned song and induction of vocal variability are instantiated in distinct circuits that converge on a motor cortex analogue controlling vocal output. To probe the interplay between learning and variability, we made intracellular recordings from neurons in this area, characterizing how their inputs from the functionally distinct pathways change throughout song development. We found that inputs that drive stereotyped song-patterns are strengthened and pruned, while inputs that induce variability remain unchanged. A simple network model showed that strengthening and pruning of action-specific connections reduces the sensitivity of motor control circuits to variable input and neural 'noise'. This identifies a simple and general mechanism for learning-related regulation of motor variability.
Yang, Jian-Feng; Zhao, Zhen-Hua; Zhang, Yu; Zhao, Li; Yang, Li-Ming; Zhang, Min-Ming; Wang, Bo-Yin; Wang, Ting; Lu, Bao-Chun
2016-04-07
To investigate the feasibility of a dual-input two-compartment tracer kinetic model for evaluating tumorous microvascular properties in advanced hepatocellular carcinoma (HCC). From January 2014 to April 2015, we prospectively measured and analyzed pharmacokinetic parameters [transfer constant (Ktrans), plasma flow (Fp), permeability surface area product (PS), efflux rate constant (kep), extravascular extracellular space volume ratio (ve), blood plasma volume ratio (vp), and hepatic perfusion index (HPI)] using dual-input two-compartment tracer kinetic models [a dual-input extended Tofts model and a dual-input 2-compartment exchange model (2CXM)] in 28 consecutive HCC patients. A well-known consensus that HCC is a hypervascular tumor supplied by the hepatic artery and the portal vein was used as a reference standard. A paired Student's t-test and a nonparametric paired Wilcoxon rank sum test were used to compare the equivalent pharmacokinetic parameters derived from the two models, and Pearson correlation analysis was also applied to observe the correlations among all equivalent parameters. The tumor size and pharmacokinetic parameters were tested by Pearson correlation analysis, while correlations among stage, tumor size and all pharmacokinetic parameters were assessed by Spearman correlation analysis. The Fp value was greater than the PS value (FP = 1.07 mL/mL per minute, PS = 0.19 mL/mL per minute) in the dual-input 2CXM; HPI was 0.66 and 0.63 in the dual-input extended Tofts model and the dual-input 2CXM, respectively. There were no significant differences in the kep, vp, or HPI between the dual-input extended Tofts model and the dual-input 2CXM (P = 0.524, 0.569, and 0.622, respectively). 
All equivalent pharmacokinetic parameters, except for ve, were correlated in the two dual-input two-compartment pharmacokinetic models; both Fp and PS in the dual-input 2CXM were correlated with Ktrans derived from the dual-input extended Tofts model (P = 0.002, r = 0.566; P = 0.002, r = 0.570); kep, vp, and HPI between the two kinetic models were positively correlated (P = 0.001, r = 0.594; P = 0.0001, r = 0.686; P = 0.04, r = 0.391, respectively). In the dual-input extended Tofts model, ve was significantly less than that in the dual-input 2CXM (P = 0.004), and no significant correlation was seen between the two tracer kinetic models (P = 0.156, r = 0.276). Neither tumor size nor tumor stage was significantly correlated with any of the pharmacokinetic parameters obtained from the two models (P > 0.05). A dual-input two-compartment pharmacokinetic model (a dual-input extended Tofts model and a dual-input 2CXM) can be used in assessing the microvascular physiopathological properties before the treatment of advanced HCC. The dual-input extended Tofts model may be more stable in measuring the ve; however, the dual-input 2CXM may be more detailed and accurate in measuring microvascular permeability.
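As a numerical sketch of the dual-input idea, the standard extended Tofts form C_t(t) = vp*C_in(t) + Ktrans * (C_in convolved with exp(-kep*t)) can be evaluated with the vascular input taken as an HPI-weighted mix of arterial and portal venous curves. The input functions and parameter values below are illustrative only (the HPI of about 0.66 echoes the abstract), not the study's fitted quantities.

```python
import numpy as np

t = np.arange(0, 300, 1.0)            # time axis (s)

def gamma_aif(t, t0, alpha=2.5, beta=8.0):
    """Toy gamma-variate input function (not patient-derived)."""
    s = np.clip(t - t0, 0, None)
    return (s ** alpha) * np.exp(-s / beta)

c_art = gamma_aif(t, t0=10)           # hepatic arterial input
c_pv = 0.6 * gamma_aif(t, t0=20)      # delayed, damped portal venous input

# Dual-input weighting via the hepatic perfusion index, then tissue uptake.
hpi, ktrans, kep, vp = 0.66, 0.05, 0.02, 0.03   # illustrative values
c_in = hpi * c_art + (1 - hpi) * c_pv

# C_t(t) = vp*C_in(t) + Ktrans * (C_in (*) exp(-kep*t))(t), discretized.
dt = t[1] - t[0]
conv = np.convolve(c_in, np.exp(-kep * t))[: len(t)] * dt
c_tissue = vp * c_in + ktrans * conv
print("peak tissue concentration (arb. units):", c_tissue.max())
```

Fitting such a forward model to measured tissue curves is what yields the Ktrans, kep, ve, vp, and HPI estimates compared in the study.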
Atlas-based automatic measurements of the morphology of the tibiofemoral joint
NASA Astrophysics Data System (ADS)
Brehler, M.; Thawait, G.; Shyr, W.; Ramsay, J.; Siewerdsen, J. H.; Zbijewski, W.
2017-03-01
Purpose: Anatomical metrics of the tibiofemoral joint support assessment of joint stability and surgical planning. We propose an automated, atlas-based algorithm to streamline the measurements in 3D images of the joint and reduce user-dependence of the metrics arising from manual identification of the anatomical landmarks. Methods: The method is initialized with coarse registrations of a set of atlas images to the fixed input image. The initial registrations are then refined separately for the tibia and femur and the best matching atlas is selected. Finally, the anatomical landmarks of the best matching atlas are transformed onto the input image by deforming a surface model of the atlas to fit the shape of the tibial plateau in the input image (a mesh-to-volume registration). We apply the method to weight-bearing volumetric images of the knee obtained from 23 subjects using an extremity cone-beam CT system. Results of the automated algorithm were compared to an expert radiologist for measurements of Static Alignment (SA), Medial Tibial Slope (MTS) and Lateral Tibial Slope (LTS). Results: Intra-reader variability as high as 10% for LTS and 7% for MTS (ratio of standard deviation to the mean in repeated measurements) was found for the expert radiologist, illustrating the potential benefits of an automated approach in improving the precision of the metrics. The proposed method achieved excellent registration of the atlas mesh to the input volumes. The resulting automated measurements yielded high correlations with the expert radiologist, as indicated by correlation coefficients of 0.72 for MTS, 0.8 for LTS, and 0.89 for SA. Conclusions: The automated method for measurement of anatomical metrics of the tibiofemoral joint achieves high correlation with the expert radiologist without the need for time-consuming and error-prone manual selection of landmarks.
Sensitivity analysis of radionuclides atmospheric dispersion following the Fukushima accident
NASA Astrophysics Data System (ADS)
Girard, Sylvain; Korsakissok, Irène; Mallet, Vivien
2014-05-01
Atmospheric dispersion models are used in response to accidental releases with two purposes: minimising population exposure during the accident, and complementing field measurements for the assessment of short- and long-term environmental and sanitary impacts. The predictions of these models are subject to considerable uncertainties of various origins. Notably, input data, such as meteorological fields or estimates of emitted quantities as a function of time, are highly uncertain. The case studied here is the atmospheric release of radionuclides following the Fukushima Daiichi disaster. The model used in this study is Polyphemus/Polair3D, from which IRSN's operational long-distance atmospheric dispersion model ldX is derived. A sensitivity analysis was conducted in order to estimate the relative importance of a set of identified uncertainty sources. The complexity of this task was increased by four characteristics shared by most environmental models: high-dimensional inputs; correlated inputs or inputs with complex structures; high-dimensional output; and a multiplicity of purposes that require sophisticated and non-systematic post-processing of the output. The sensitivities of a set of outputs were estimated with the Morris screening method. The input ranking was highly dependent on the considered output. Yet a few variables, such as the horizontal diffusion coefficient or cloud thickness, were found to have a weak influence on most of them and could be discarded from further studies. The sensitivity analysis procedure was also applied to indicators of model performance computed on a set of gamma dose rate observations. This original approach is of particular interest since observations could later be used to calibrate the probability distributions of the input variables. Indeed, only the variables that are influential on performance scores are likely to allow for calibration.
An indicator based on matching the times of emission peaks was developed to complement classical statistical scores, which were dominated by deposit dose rates and almost insensitive to lower-atmosphere dose rates. The substantial sensitivity of these performance indicators is promising for future calibration attempts and indicates that the simple perturbations used here may be sufficient to represent an essential part of the overall uncertainty.
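The Morris screening method used above ranks inputs by their mean absolute elementary effect (often written mu*). A minimal numpy sketch of the idea, not the study's implementation (the unit-hypercube input domain and the step size `delta` are illustrative assumptions):

```python
import numpy as np

def morris_elementary_effects(f, n_vars, n_traj=30, delta=0.5, seed=0):
    """Estimate Morris mu* (mean absolute elementary effect) for f on [0,1]^n_vars."""
    rng = np.random.default_rng(seed)
    effects = [[] for _ in range(n_vars)]
    for _ in range(n_traj):
        x = rng.uniform(0.0, 0.5, n_vars)   # leave room for the +delta steps
        fx = f(x)
        for i in rng.permutation(n_vars):   # one-at-a-time perturbations
            x2 = x.copy()
            x2[i] += delta
            fx2 = f(x2)
            effects[i].append((fx2 - fx) / delta)
            x, fx = x2, fx2
    return np.array([np.mean(np.abs(e)) for e in effects])
```

Inputs with mu* near zero (like the weakly influential variables mentioned in the abstract) can be frozen in subsequent studies at little cost.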
Atanassova, Vassia; Sotirova, Evdokia; Doukovska, Lyubka; Bureva, Veselina; Mavrov, Deyan; Tomov, Jivko
2017-01-01
The approach of InterCriteria Analysis (ICA) was applied with the aim of reducing the set of variables at the input of a neural network, taking into account the fact that a large number of input variables increases the number of neurons in the network, making it unsuitable for hardware implementation. Here, for the first time, with the help of the ICA method, correlations between triples of the input parameters for training the neural networks were obtained. In this case, we use the approach of ICA for data preprocessing, which may reduce the total time for training the neural networks and, hence, the time for the network's processing of data and images. PMID:28874908
Troutman, Brent M.
1982-01-01
Errors in runoff prediction caused by input data errors are analyzed by treating precipitation-runoff models as regression (conditional expectation) models. Independent variables of the regression consist of precipitation and other input measurements; the dependent variable is runoff. In models using erroneous input data, prediction errors are inflated and estimates of expected storm runoff for given observed input variables are biased. This bias in expected runoff estimation results in biased parameter estimates if these parameter estimates are obtained by a least squares fit of predicted to observed runoff values. The problems of error inflation and bias are examined in detail for a simple linear regression of runoff on rainfall and for a nonlinear U.S. Geological Survey precipitation-runoff model. Some implications for flood frequency analysis are considered. A case study using a set of data from Turtle Creek near Dallas, Texas, illustrates the problems of model input errors.
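For the linear case, the bias described here is the classic errors-in-variables attenuation: measurement error in the input shrinks the fitted slope by the factor Var(true)/(Var(true)+Var(error)). A minimal synthetic demonstration (all numbers are illustrative, not from the Turtle Creek data):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
rain_true = rng.normal(50.0, 10.0, n)            # true precipitation input
runoff = 2.0 * rain_true + rng.normal(0.0, 5.0, n)
rain_obs = rain_true + rng.normal(0.0, 10.0, n)  # erroneous measured input

def ols_slope(x, y):
    """Least-squares slope of y on x."""
    xc = x - x.mean()
    return np.dot(xc, y) / np.dot(xc, xc)

slope_true = ols_slope(rain_true, runoff)
# attenuation factor: 10^2 / (10^2 + 10^2) = 0.5, so the fitted slope halves
slope_obs = ols_slope(rain_obs, runoff)
```

The attenuated slope in turn biases any quantity, such as expected storm runoff, computed from the fitted model.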
Neuronal Spike Timing Adaptation Described with a Fractional Leaky Integrate-and-Fire Model
Teka, Wondimu; Marinov, Toma M.; Santamaria, Fidel
2014-01-01
The voltage trace of neuronal activities can follow multiple timescale dynamics that arise from correlated membrane conductances. Such processes can result in power-law behavior in which the membrane voltage cannot be characterized with a single time constant. The emergent effect of these membrane correlations is a non-Markovian process that can be modeled with a fractional derivative. A fractional derivative is a non-local process in which the value of the variable is determined by integrating a temporal weighted voltage trace, also called the memory trace. Here we developed and analyzed a fractional leaky integrate-and-fire model in which the exponent of the fractional derivative can vary from 0 to 1, with 1 representing the normal derivative. As the exponent of the fractional derivative decreases, the weights of the voltage trace increase. Thus, the value of the voltage is increasingly correlated with the trajectory of the voltage in the past. By varying only the fractional exponent, our model can reproduce upward and downward spike adaptations found experimentally in neocortical pyramidal cells and tectal neurons in vitro. The model also produces spikes with longer first-spike latency and high inter-spike variability with power-law distribution. We further analyze spike adaptation and the responses to noisy and oscillatory input. The fractional model generates reliable spike patterns in response to noisy input. Overall, the spiking activity of the fractional leaky integrate-and-fire model deviates from the spiking activity of the Markovian model and reflects the temporal accumulated intrinsic membrane dynamics that affect the response of the neuron to external stimulation. PMID:24675903
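The fractional leaky integrate-and-fire idea can be sketched with a Grünwald-Letnikov discretization, in which binomial weights implement the power-law memory trace; at exponent alpha = 1 the weights collapse to the ordinary Euler update. A minimal sketch with illustrative parameter values, not the paper's model or units:

```python
import numpy as np

def fractional_lif(alpha, I=1.5, dt=0.1, v_th=1.0, tau=10.0, n_steps=500):
    """Fractional LIF via Gruenwald-Letnikov weights; alpha=1 gives the classic LIF."""
    # GL weights: c_0 = 1, c_k = c_{k-1} * (1 - (1 + alpha)/k); for alpha=1
    # they reduce to (1, -1, 0, 0, ...), i.e. the ordinary first difference.
    c = np.empty(n_steps)
    c[0] = 1.0
    for k in range(1, n_steps):
        c[k] = c[k - 1] * (1.0 - (1.0 + alpha) / k)
    v = np.zeros(n_steps)
    spike_times = []
    for n in range(1, n_steps):
        # weighted sum over the past voltage trace (the memory trace)
        memory = np.dot(c[1:n + 1], v[n - 1::-1])
        v[n] = dt**alpha * (-v[n - 1] / tau + I) - memory
        if v[n] >= v_th:
            spike_times.append(n * dt)
            v[n] = 0.0  # reset; the memory trace still sees pre-spike history
    return np.array(spike_times)
```

Sweeping alpha below 1 increases the influence of the past voltage trajectory, which is the mechanism behind the spike adaptation effects the abstract describes.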
NASA Astrophysics Data System (ADS)
Rovinelli, Andrea; Guilhem, Yoann; Proudhon, Henry; Lebensohn, Ricardo A.; Ludwig, Wolfgang; Sangid, Michael D.
2017-06-01
Microstructurally small cracks exhibit large variability in their fatigue crack growth rate. It is accepted that the inherent variability in microstructural features is related to the uncertainty in the growth rate. However, due to (i) the lack of cycle-by-cycle experimental data, (ii) the complexity of the short crack growth phenomenon, and (iii) the incomplete physics of constitutive relationships, only empirical damage metrics have been postulated to describe the short crack driving force metric (SCDFM) at the mesoscale level. The identification of the SCDFM of polycrystalline engineering alloys is a critical need, in order to achieve more reliable fatigue life prediction and improve material design. In this work, the first steps in the development of a general probabilistic framework are presented, which uses experimental results as an input, retrieves missing experimental data through crystal plasticity (CP) simulations, and extracts correlations utilizing machine learning and Bayesian networks (BNs). More precisely, experimental results representing cycle-by-cycle data of a short crack growing through a beta-metastable titanium alloy, VST-55531, have been acquired via phase and diffraction contrast tomography. These results serve as an input for FFT-based CP simulations, which provide the micromechanical fields influenced by the presence of the crack, complementing the information available from the experiment. In order to assess the correlation between postulated SCDFM and experimental observations, the data is mined and analyzed utilizing BNs. Results show the ability of the framework to autonomously capture relevant correlations and the equivalence in the prediction capability of different postulated SCDFMs for the high cycle fatigue regime.
STDP allows fast rate-modulated coding with Poisson-like spike trains.
Gilson, Matthieu; Masquelier, Timothée; Hugues, Etienne
2011-10-01
Spike timing-dependent plasticity (STDP) has been shown to enable single neurons to detect repeatedly presented spatiotemporal spike patterns. This holds even when such patterns are embedded in equally dense random spiking activity, that is, in the absence of external reference times such as a stimulus onset. Here we demonstrate, both analytically and numerically, that STDP can also learn repeating rate-modulated patterns, which have received more experimental evidence, for example, through post-stimulus time histograms (PSTHs). Each input spike train is generated from a rate function using a stochastic sampling mechanism, chosen to be an inhomogeneous Poisson process here. Learning is feasible provided significant covarying rate modulations occur within the typical timescale of STDP (~10-20 ms) for sufficiently many inputs (~100 among 1000 in our simulations), a condition that is met by many experimental PSTHs. Repeated pattern presentations induce spike-time correlations that are captured by STDP. Despite imprecise input spike times and even variable spike counts, a single trained neuron robustly detects the pattern just a few milliseconds after its presentation. Therefore, temporal imprecision and Poisson-like firing variability are not an obstacle to fast temporal coding. STDP provides an appealing mechanism to learn such rate patterns, which, beyond sensory processing, may also be involved in many cognitive tasks.
STDP Allows Fast Rate-Modulated Coding with Poisson-Like Spike Trains
Hugues, Etienne
2011-01-01
Spike timing-dependent plasticity (STDP) has been shown to enable single neurons to detect repeatedly presented spatiotemporal spike patterns. This holds even when such patterns are embedded in equally dense random spiking activity, that is, in the absence of external reference times such as a stimulus onset. Here we demonstrate, both analytically and numerically, that STDP can also learn repeating rate-modulated patterns, which have received more experimental evidence, for example, through post-stimulus time histograms (PSTHs). Each input spike train is generated from a rate function using a stochastic sampling mechanism, chosen to be an inhomogeneous Poisson process here. Learning is feasible provided significant covarying rate modulations occur within the typical timescale of STDP (∼10–20 ms) for sufficiently many inputs (∼100 among 1000 in our simulations), a condition that is met by many experimental PSTHs. Repeated pattern presentations induce spike-time correlations that are captured by STDP. Despite imprecise input spike times and even variable spike counts, a single trained neuron robustly detects the pattern just a few milliseconds after its presentation. Therefore, temporal imprecision and Poisson-like firing variability are not an obstacle to fast temporal coding. STDP provides an appealing mechanism to learn such rate patterns, which, beyond sensory processing, may also be involved in many cognitive tasks. PMID:22046113
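The rate-modulated input spike trains described above can be drawn from an inhomogeneous Poisson process; a standard way to do this is Lewis-Shedler thinning. A minimal sketch (the PSTH-like rate function in the test is an illustrative assumption, not from the paper):

```python
import numpy as np

def inhomogeneous_poisson(rate_fn, t_max, rate_max, rng=None):
    """Sample spike times on [0, t_max) by thinning a homogeneous process.

    rate_fn(t) must never exceed rate_max.
    """
    if rng is None:
        rng = np.random.default_rng(0)
    # candidate spikes from a homogeneous Poisson process at rate_max
    n_cand = rng.poisson(rate_max * t_max)
    candidates = np.sort(rng.uniform(0.0, t_max, n_cand))
    # keep each candidate with probability rate_fn(t) / rate_max
    keep = rng.uniform(0.0, rate_max, n_cand) < rate_fn(candidates)
    return candidates[keep]
```

Repeating this across many inputs with covarying rate functions reproduces the "Poisson-like spike trains with rate modulations" that STDP is shown to exploit.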
Optimization of a GO2/GH2 Impinging Injector Element
NASA Technical Reports Server (NTRS)
Tucker, P. Kevin; Shyy, Wei; Vaidyanathan, Rajkumar
2001-01-01
An injector optimization methodology, method i, is used to investigate optimal design points for a gaseous oxygen/gaseous hydrogen (GO2/GH2) impinging injector element. The unlike impinging element, a fuel-oxidizer-fuel (F-O-F) triplet, is optimized in terms of design variables such as fuel pressure drop ΔP_f, oxidizer pressure drop ΔP_o, combustor length L_comb, and impingement half-angle α, for a given mixture ratio and chamber pressure. Dependent variables such as energy release efficiency ERE, wall heat flux Q_w, injector heat flux Q_inj, relative combustor weight W_rel, and relative injector cost C_rel are calculated and then correlated with the design variables. An empirical design methodology is used to generate these responses for 163 combinations of input variables. Method i is then used to generate response surfaces for each dependent variable. Desirability functions based on dependent variable constraints are created and used to facilitate development of composite response surfaces representing some, or all, of the five dependent variables in terms of the input variables. Three examples illustrating the utility and flexibility of method i are discussed in detail. First, joint response surfaces are constructed by sequentially adding dependent variables. Optimum designs are identified after addition of each variable and the effect each variable has on the design is shown. This stepwise demonstration also highlights the importance of including variables such as weight and cost early in the design process. Secondly, using the composite response surface which includes all five dependent variables, unequal weights are assigned to emphasize certain variables relative to others. Here, method i is used to enable objective trade studies on design issues such as component life and thrust to weight ratio.
Finally, specific variable weights are further increased to illustrate the high marginal cost of realizing the last increment of injector performance and thruster weight.
NASA Astrophysics Data System (ADS)
Afkhamipour, Morteza; Mofarahi, Masoud; Borhani, Tohid Nejad Ghaffar; Zanganeh, Masoud
2018-03-01
In this study, artificial neural network (ANN) and thermodynamic models were developed for prediction of the heat capacity (C_P) of amine-based solvents. For the ANN model, independent variables such as amine concentration, temperature, molecular weight and CO2 loading were selected as inputs. The significance of the input variables of the ANN model for the C_P values was investigated statistically by analyzing the correlation matrix. A thermodynamic model based on the Redlich-Kister equation was used to correlate the excess molar heat capacity (C_P^E) data as a function of temperature. In addition, the effects of temperature and CO2 loading at different concentrations of conventional amines on the C_P values were investigated. Both models were validated against experimental data, and very good agreement was obtained between the two models and the C_P data collected from the literature. The AARD between the ANN model results and experimental C_P data for the 47 amine-based solvent systems studied was 4.3%. For conventional amines, the AARDs for the ANN model and the thermodynamic model in comparison with experimental data were 0.59% and 0.57%, respectively. The results showed that both the ANN and Redlich-Kister models can be used as practical tools for the simulation and design of CO2 removal processes using amine solutions.
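The Redlich-Kister correlation expresses the excess property of a binary mixture as C_P^E = x1·x2·Σ_k A_k·(x1 − x2)^k, with the coefficients A_k fitted at each temperature. A minimal least-squares fitting sketch (the coefficients and data below are synthetic, not the study's):

```python
import numpy as np

def redlich_kister(x1, A):
    """C_P^E = x1*x2 * sum_k A_k * (x1 - x2)^k with x2 = 1 - x1."""
    x2 = 1.0 - x1
    return x1 * x2 * sum(a * (x1 - x2) ** k for k, a in enumerate(A))

def fit_redlich_kister(x1, cpe, order=2):
    """Least-squares fit of the A_k coefficients from composition/C_P^E data."""
    x2 = 1.0 - x1
    # design matrix: column k is x1*x2*(x1 - x2)^k
    M = np.column_stack([x1 * x2 * (x1 - x2) ** k for k in range(order + 1)])
    A, *_ = np.linalg.lstsq(M, cpe, rcond=None)
    return A
```

Temperature dependence is usually handled by making each A_k a polynomial in T and refitting, which this sketch omits.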
Nonmonotonic spatial structure of interneuronal correlations in prefrontal microcircuits
Safavi, Shervin; Dwarakanath, Abhilash; Kapoor, Vishal; Werner, Joachim; Hatsopoulos, Nicholas G.; Logothetis, Nikos K.; Panagiotaropoulos, Theofanis I.
2018-01-01
Correlated fluctuations of single neuron discharges, on a mesoscopic scale, decrease as a function of lateral distance in early sensory cortices, reflecting a rapid spatial decay of lateral connection probability and excitation. However, spatial periodicities in horizontal connectivity and associational input as well as an enhanced probability of lateral excitatory connections in the association cortex could theoretically result in nonmonotonic correlation structures. Here, we show such a spatially nonmonotonic correlation structure, characterized by significantly positive long-range correlations, in the inferior convexity of the macaque prefrontal cortex. This functional connectivity kernel was more pronounced during wakefulness than anesthesia and could be largely attributed to the spatial pattern of correlated variability between functionally similar neurons during structured visual stimulation. These results suggest that the spatial decay of lateral functional connectivity is not a common organizational principle of neocortical microcircuits. A nonmonotonic correlation structure could reflect a critical topological feature of prefrontal microcircuits, facilitating their role in integrative processes. PMID:29588415
Inferring Single Neuron Properties in Conductance Based Balanced Networks
Pool, Román Rossi; Mato, Germán
2011-01-01
Balanced states in large networks are a usual hypothesis for explaining the variability of neural activity in cortical systems. In this regime the statistics of the inputs is characterized by static and dynamic fluctuations. The dynamic fluctuations have a Gaussian distribution. Such statistics allow the use of reverse correlation methods, by recording synaptic inputs and the spike trains of ongoing spontaneous activity without any additional input. By using this method, properties of the single neuron dynamics that are masked by the balanced state can be quantified. To show the feasibility of this approach we apply it to large networks of conductance based neurons. The networks are classified as Type I or Type II according to the bifurcations which neurons of the different populations undergo near the firing onset. We also analyze mixed networks, in which each population has a mixture of different neuronal types. We determine under which conditions the intrinsic noise generated by the network can be used to apply reverse correlation methods. We find that under realistic conditions we can ascertain with low error the types of neurons present in the network. We also find that data from neurons with similar firing rates can be combined to perform covariance analysis. We compare the results of these methods (which do not require any external input) to the standard procedure (which requires the injection of Gaussian noise into a single neuron). We find a good agreement between the two procedures. PMID:22016730
NASA Astrophysics Data System (ADS)
Frossard, E.; Buchmann, N.; Bünemann, E. K.; Kiba, D. I.; Lompo, F.; Oberson, A.; Tamburini, F.; Traoré, O. Y. A.
2015-09-01
Stoichiometric approaches have been applied to understand the relationship between soil organic matter dynamics and biological nutrient transformations. However, very few studies have explicitly considered the effects of agricultural management practices on the soil C : N : P ratio. The aim of this study was to assess how different input types and rates affect the C : N : P molar ratios of bulk soil, organic matter and microbial biomass in cropped soils in the long term. Thus, we analysed the C, N and P inputs and budgets as well as soil properties in three long-term experiments established on different soil types: the Saria soil fertility trial (Burkina Faso), the Wagga Wagga rotation/stubble management/soil preparation trial (Australia), and the DOK cropping system trial (Switzerland). In each of these trials, there was a large range of C, N and P inputs, which had a strong impact on element concentrations in soils. However, although the C : N : P ratios of the inputs were highly variable, they had only weak effects on soil C : N : P ratios. At Saria, a positive correlation was found between the N : P ratio of inputs and that of microbial biomass, while no relation was observed between the nutrient ratios of inputs and those of soil organic matter. At Wagga Wagga, the C : P ratio of inputs was significantly correlated to total soil C : P, N : P and C : N ratios, but had no impact on the elemental composition of microbial biomass. In the DOK trial, a positive correlation was found between the C budget and the C to organic P ratio in soils, while the nutrient ratios of inputs were not related to those in the microbial biomass. We argue that these responses are due to differences in soil properties among sites. At Saria, the soil is dominated by quartz and some kaolinite, has a coarse texture, a fragile structure and a low nutrient content. Thus, microorganisms feed on inputs (plant residues, manure).
In contrast, the soil at Wagga Wagga contains illite and haematite, is richer in clay and nutrients and has a stable structure. Thus, organic matter is protected from mineralization and can therefore accumulate, allowing microorganisms to feed on soil nutrients and to keep a constant C : N : P ratio. The DOK soil represents an intermediate situation, with high nutrient concentrations, but a rather fragile soil structure, where organic matter does not accumulate. We conclude that the study of C, N, and P ratios is important to understand the functioning of cropped soils in the long term, but that it must be coupled with a precise assessment of element inputs and budgets in the system and a good understanding of the ability of soils to stabilize C, N and P compounds.
Marken, Richard S; Horth, Brittany
2011-06-01
Experimental research in psychology is based on an open-loop causal model which assumes that sensory input causes behavioral output. This model was tested in a tracking experiment where participants were asked to control a cursor, keeping it aligned with a target by moving a mouse to compensate for disturbances of differing difficulty. Since cursor movements (inputs) are the only observable cause of mouse movements (outputs), the open-loop model predicts that there will be a correlation between input and output that increases as tracking performance improves. In fact, the correlation between sensory input and motor output is very low regardless of the quality of tracking performance; causality, in terms of the effect of input on output, does not seem to imply correlation in this situation. This surprising result can be explained by a closed-loop model which assumes that input is causing output while output is causing input.
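The closed-loop argument can be illustrated with a toy simulation: the cursor (input) is mouse position plus a disturbance, and a simulated participant moves the mouse to cancel the cursor error. Despite the direct causal link from input to output, their correlation is near zero, while the output tracks the disturbance almost perfectly. This is a hedged sketch with assumed first-order dynamics, not the authors' experimental model:

```python
import numpy as np

def tracking_sim(gain=0.8, n=20_000, seed=1):
    """Compensatory tracking: cursor = mouse + disturbance; mouse cancels the error."""
    rng = np.random.default_rng(seed)
    d = np.cumsum(rng.normal(0.0, 0.1, n))  # slowly drifting disturbance
    m = np.zeros(n)                          # mouse position (output)
    c = np.zeros(n)                          # cursor position (input to participant)
    for t in range(n - 1):
        c[t] = m[t] + d[t]
        m[t + 1] = m[t] - gain * c[t]        # move so as to cancel the cursor error
    c[-1] = m[-1] + d[-1]
    return c, m, d

c, m, d = tracking_sim()
corr_io = np.corrcoef(c, m)[0, 1]  # input-output correlation: small, despite causation
corr_od = np.corrcoef(m, d)[0, 1]  # output mirrors the (unobservable) disturbance
```

Good control keeps the cursor near zero, so the observable input carries almost no trace of the output it causes, which is the abstract's point.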
Non-local boxes and their implementation in Minecraft
NASA Astrophysics Data System (ADS)
Simnacher, Timo Yannick
PR-boxes are binary devices connecting two remote parties, satisfying x AND y = a + b mod 2, where x and y denote the binary inputs and a and b are the respective outcomes, without allowing signaling. These devices are named after their inventors, Sandu Popescu and Daniel Rohrlich, and saturate the Clauser-Horne-Shimony-Holt (CHSH) inequality. This Bell-like inequality bounds the correlation that can exist between two remote, non-signaling, classical systems described by local hidden variable theories. Experiments have now convincingly shown that quantum entanglement cannot be explained by local hidden variable theories. Furthermore, the CHSH inequality provides a method to distinguish quantum systems from super-quantum correlations. The correlation between the outputs of the PR-box goes beyond any quantum entanglement. Though PR-boxes would have impressive consequences, as far as we know they are not physically realizable. However, by introducing PR-boxes to Minecraft as part of the redstone system, which simulates electrical components for binary computing, we can experience the consequences of super-quantum correlations. For instance, Wim van Dam proved that two parties can use a sufficient number of PR-boxes to compute any Boolean function f(x,y) with only one bit of communication.
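Outside Minecraft, a PR-box is easy to simulate in software, since the constraint a XOR b = x AND y fixes one outcome once the other is drawn at random; the resulting behaviour yields CHSH value 4, versus the classical bound 2 and the quantum (Tsirelson) bound 2√2. A minimal sketch (sign conventions for the CHSH sum are the usual ones, not taken from this abstract):

```python
import random

def pr_box(x, y, rng=random):
    """Return outcomes (a, b) with a XOR b == x AND y; each marginal is unbiased."""
    a = rng.randrange(2)  # locally random outcome for party A
    b = a ^ (x & y)       # the PR-box constraint fixes party B's outcome
    return a, b

def chsh_value(box, n=1000):
    """Estimate S = E(0,0) + E(0,1) + E(1,0) - E(1,1); a PR-box gives 4."""
    total = 0.0
    for x in (0, 1):
        for y in (0, 1):
            corr = sum(1 if a == b else -1
                       for a, b in (box(x, y) for _ in range(n))) / n
            total += -corr if (x, y) == (1, 1) else corr
    return total
```

Note the simulation is only possible because both parties' outcomes are computed in one place; a physical device with this behaviour separated in space is exactly what is believed not to exist.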
Image scale measurement with correlation filters in a volume holographic optical correlator
NASA Astrophysics Data System (ADS)
Zheng, Tianxiang; Cao, Liangcai; He, Qingsheng; Jin, Guofan
2013-08-01
A search engine containing various target images or different parts of a large scene is of great use for many applications, including object detection, biometric recognition, and image registration. The input image captured in real time is compared with all the template images in the search engine. A volume holographic correlator is one type of such search engine. It performs thousands of comparisons among the images at very high speed, with the correlation task accomplished mainly in optics. However, the input target image generally exhibits scale variation relative to the template images. In that case, the correlation values cannot properly reflect the similarity of the images, so it is essential to estimate and eliminate the scale variation of the input target image. Scale measurement can be performed in three domains: spatial, spectral, and time. Most methods dealing with the scale factor are based on the spatial or spectral domains. In this paper, a time-domain method, called the time-sequential scaled method, is proposed to measure the scale factor of the input image. The method utilizes the relationship between the scale variation and the correlation value of two images. It sends a few artificially scaled input images to be compared with the template images. The correlation value increases and decreases with increasing scale factor over the intervals 0.8~1 and 1~1.2, respectively. The original scale of the input image can be measured by locating the largest correlation value obtained when correlating the artificially scaled input images with the template images. The measurement range for the scale is 0.8~4.8: scale factors beyond 1.2 are measured by scaling the input image by factors of 1/2, 1/3 and 1/4, correlating the artificially scaled input images with the template images, and estimating the new corresponding scale factor inside 0.8~1.2.
Variance-based interaction index measuring heteroscedasticity
NASA Astrophysics Data System (ADS)
Ito, Keiichi; Couckuyt, Ivo; Poles, Silvia; Dhaene, Tom
2016-06-01
This work is motivated by the need to deal with models with high-dimensional input spaces of real variables. One way to tackle high-dimensional problems is to identify interaction or non-interaction among input parameters. We propose a new variance-based sensitivity interaction index that can detect and quantify interactions among the input variables of mathematical functions and computer simulations. The computation is very similar to first-order sensitivity indices by Sobol'. The proposed interaction index can quantify the relative importance of input variables in interaction. Furthermore, detection of non-interaction for screening can be done with as low as 4n + 2 function evaluations, where n is the number of input variables. Using the interaction indices based on heteroscedasticity, the original function may be decomposed into a set of lower dimensional functions which may then be analyzed separately.
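Since the abstract says the computation resembles first-order Sobol' indices, it is worth recalling that baseline. A minimal sketch of the standard Saltelli pick-and-freeze estimator (this is the classical first-order index, not the authors' heteroscedasticity-based interaction index):

```python
import numpy as np

def first_order_sobol(f, n_vars, n=200_000, seed=0):
    """Saltelli pick-and-freeze estimate of first-order Sobol' indices on [0,1]^n_vars.

    f must accept an (n, n_vars) array and return an (n,) array.
    """
    rng = np.random.default_rng(seed)
    A = rng.uniform(size=(n, n_vars))
    B = rng.uniform(size=(n, n_vars))
    fA, fB = f(A), f(B)
    var = np.concatenate([fA, fB]).var()
    S = np.empty(n_vars)
    for i in range(n_vars):
        ABi = A.copy()
        ABi[:, i] = B[:, i]                        # resample only variable i
        S[i] = np.mean(fB * (f(ABi) - fA)) / var   # Saltelli (2010) estimator
    return S
```

For a purely additive function the first-order indices sum to one; any shortfall signals interaction, which is the effect the proposed index targets directly.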
NASA Astrophysics Data System (ADS)
Manu, D. S.; Thalla, Arun Kumar
2017-11-01
The current work demonstrates support vector machine (SVM) and adaptive neuro-fuzzy inference system (ANFIS) modeling to assess the Kjeldahl Nitrogen removal efficiency of a full-scale aerobic biological wastewater treatment plant. Influent variables such as pH, chemical oxygen demand, total solids (TS), free ammonia, ammonia nitrogen and Kjeldahl Nitrogen are used as input variables during modeling. Model development focused on postulating an adaptive, functional, real-time and alternative approach for modeling the removal efficiency of Kjeldahl Nitrogen. The input variables used for modeling were daily time-series data recorded at the wastewater treatment plant (WWTP) located in Mangalore during the period June 2014-September 2014. The performance of the ANFIS models developed using Gbell and trapezoidal membership functions (MFs) and of the SVM model is assessed using different statistical indices such as root mean square error, correlation coefficient (CC) and Nash-Sutcliffe error (NSE). The errors in the prediction of effluent Kjeldahl Nitrogen concentration by the SVM model appeared reasonable compared with those of the ANFIS models with Gbell and trapezoidal MFs. From the performance evaluation of the developed SVM model, it is observed that the approach is capable of defining the inter-relationships between various wastewater quality variables, and thus SVM can potentially be applied to evaluating the efficiency of aerobic biological processes in WWTPs.
Multi-level emulation of complex climate model responses to boundary forcing data
NASA Astrophysics Data System (ADS)
Tran, Giang T.; Oliver, Kevin I. C.; Holden, Philip B.; Edwards, Neil R.; Sóbester, András; Challenor, Peter
2018-04-01
Climate model components involve both high-dimensional input and output fields. It is desirable to efficiently generate spatio-temporal outputs of these models for applications in integrated assessment modelling, or to assess the statistical relationship between such sets of inputs and outputs, for example in uncertainty analysis. However, the need for efficiency often compromises the fidelity of output through the use of low complexity models. Here, we develop a technique which combines statistical emulation with a dimensionality reduction technique to emulate a wide range of outputs from an atmospheric general circulation model, PLASIM, as functions of the boundary forcing prescribed by the ocean component of a lower complexity climate model, GENIE-1. Although accurate and detailed spatial information on atmospheric variables such as precipitation and wind speed is well beyond the capability of GENIE-1's energy-moisture balance model of the atmosphere, this study demonstrates that the output of this model is useful in predicting PLASIM's spatio-temporal fields through multi-level emulation. Meaningful information from the fast model, GENIE-1, was extracted by utilising the correlation between variables of the same type in the two models and between variables of different types in PLASIM. We present here the construction and validation of several PLASIM variable emulators and discuss their potential use in developing a hybrid model with statistical components.
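Emulating a high-dimensional output field typically means projecting it onto principal components and emulating each component score separately. A minimal numpy sketch of that pattern, with a plain linear map standing in for the statistical emulator (an illustrative simplification, not the study's setup):

```python
import numpy as np

def fit_pc_emulator(X, Y, n_pc=5):
    """Project output fields Y onto n_pc principal components, then fit a
    linear map from inputs X to the component scores (stand-in for a GP)."""
    Y_mean = Y.mean(axis=0)
    _, _, Vt = np.linalg.svd(Y - Y_mean, full_matrices=False)
    basis = Vt[:n_pc]                       # spatial basis patterns (EOFs)
    scores = (Y - Y_mean) @ basis.T
    Xb = np.column_stack([X, np.ones(len(X))])  # add a bias column
    W, *_ = np.linalg.lstsq(Xb, scores, rcond=None)
    return Y_mean, basis, W

def predict(x, model):
    """Emulate the full output field for one input vector x."""
    Y_mean, basis, W = model
    xb = np.append(x, 1.0)
    return Y_mean + (xb @ W) @ basis
```

In a multi-level setting, the inputs X would include the fast model's own output fields (suitably reduced), which is how the GENIE-1 information enters the PLASIM emulator.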
Variability of visual responses of superior colliculus neurons depends on stimulus velocity.
Mochol, Gabriela; Wójcik, Daniel K; Wypych, Marek; Wróbel, Andrzej; Waleszczyk, Wioletta J
2010-03-03
Visually responding neurons in the superficial, retinorecipient layers of the cat superior colliculus receive input from two primarily parallel information processing channels, Y and W, which is reflected in their velocity response profiles. We quantified the time-dependent variability of responses of these neurons to stimuli moving with different velocities by the Fano factor (FF), calculated in discrete time windows. The FF for cells responding to low-velocity stimuli, thus receiving W inputs, increased with the increase in the firing rate. In contrast, the dynamics of activity of the cells responding to fast moving stimuli, processed by the Y pathway, correlated negatively with FF whether the response was excitatory or suppressive. These observations were tested against several types of surrogate data. Whereas a Poisson description failed to reproduce the variability of all collicular responses, the inclusion of secondary structure in the generating point process recovered most of the observed features of responses to fast moving stimuli. Neither model could reproduce the variability of low-velocity responses, which suggests that, in this case, more complex time dependencies need to be taken into account. Our results indicate that Y and W channels may differ in reliability of responses to visual stimulation. Apart from previously reported morphological and physiological differences of the cells belonging to Y and W channels, this is a new feature distinguishing these two pathways.
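The windowed Fano factor used above is the across-trial variance of spike counts divided by their mean in each time window; a Poisson process gives FF = 1, so deviations flag extra structure. A minimal sketch:

```python
import numpy as np

def windowed_fano(spike_trains, t_max, window):
    """Fano factor (variance/mean of spike counts across trials) per time window.

    spike_trains: list of 1-D arrays of spike times, one per trial.
    """
    edges = np.arange(0.0, t_max + window, window)
    # counts[trial, window] = number of spikes falling in that window
    counts = np.array([np.histogram(tr, bins=edges)[0] for tr in spike_trains])
    mean = counts.mean(axis=0)
    with np.errstate(invalid="ignore", divide="ignore"):
        ff = counts.var(axis=0, ddof=1) / mean  # NaN where a window has no spikes
    return ff
```

Applied to repeated presentations of the same moving stimulus, the time course of FF can then be compared against the firing-rate profile, as in the study.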
Input-variable sensitivity assessment for sediment transport relations
NASA Astrophysics Data System (ADS)
Fernández, Roberto; Garcia, Marcelo H.
2017-09-01
A methodology to assess input-variable sensitivity for sediment transport relations is presented. The Mean Value First Order Second Moment Method (MVFOSM) is applied to two bed load transport equations showing that it may be used to rank all input variables in terms of how their specific variance affects the overall variance of the sediment transport estimation. In sites where data are scarce or nonexistent, the results obtained may be used to (i) determine what variables would have the largest impact when estimating sediment loads in the absence of field observations and (ii) design field campaigns to specifically measure those variables for which a given transport equation is most sensitive; in sites where data are readily available, the results would allow quantifying the effect that the variance associated with each input variable has on the variance of the sediment transport estimates. An application of the method to two transport relations using data from a tropical mountain river in Costa Rica is implemented to exemplify the potential of the method in places where input data are limited. Results are compared against Monte Carlo simulations to assess the reliability of the method and validate its results. For both of the sediment transport relations used in the sensitivity analysis, accurate knowledge of sediment size was found to have more impact on sediment transport predictions than precise knowledge of other input variables such as channel slope and flow discharge.
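MVFOSM linearizes the transport relation around the mean input values, so Var(q) ≈ Σ_i (∂q/∂x_i)² σ_i², and each term's share of the total gives the input ranking. A minimal finite-difference sketch (the power-law test function is illustrative, not either of the paper's bed load relations):

```python
import numpy as np

def mvfosm_shares(f, mu, sigma, h=1e-6):
    """First-order variance shares (df/dx_i * sigma_i)^2 / total, evaluated at the mean."""
    mu = np.asarray(mu, dtype=float)
    grad = np.empty_like(mu)
    for i in range(mu.size):
        xp, xm = mu.copy(), mu.copy()
        xp[i] += h
        xm[i] -= h
        grad[i] = (f(xp) - f(xm)) / (2.0 * h)  # central difference at the mean
    contrib = (grad * np.asarray(sigma, dtype=float)) ** 2
    return contrib / contrib.sum()
```

For power-law transport relations, the gradient scales with the exponent on each variable, which is why inputs raised to high powers (such as grain size in many bed load formulas) dominate the output variance even with modest input uncertainty.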
A Multifactor Approach to Research in Instructional Technology.
ERIC Educational Resources Information Center
Ragan, Tillman J.
In a field such as instructional design, explanations of educational outcomes must necessarily consider multiple input variables. To adequately understand the contribution made by the independent variables, it is helpful to have a visual conception of how the input variables interrelate. Two variable models are adequately represented by a two…
NASA Astrophysics Data System (ADS)
Rykov, S. P.; Rykova, O. A.; Koval, V. S.; Makhno, D. E.; Fedotov, K. V.
2018-03-01
The paper analyzes vibrations of the dynamic-system equivalent of a suspension system, taking into account the tyre's ability to smooth road irregularities. The research is based on the statistical dynamics of linear automatic control systems and on correlation, spectral, and numerical analysis methods. Introducing new data on the smoothing effect of the pneumatic tyre, which reflects changes in the contact area between wheel and road during suspension vibrations, makes the system non-linear and requires numerical analysis methods. By taking the variable smoothing ability of the tyre into account when calculating suspension vibrations, one can bring calculated and experimental results closer together and improve on the usual assumption of a constant smoothing ability.
NASA Astrophysics Data System (ADS)
Jian, S.; Li, J.; Guo, C.; Hui, D.; Deng, Q.; Yu, C. L.; Dzantor, K. E.; Lane, C.
2017-12-01
Nitrogen (N) fertilizers are widely used to increase bioenergy crop yield, but the effects of intensive fertilization on the spatial distribution of soil microbial processes in bioenergy croplands remain unknown. To quantify the effect of N fertilization on the spatial heterogeneity of soil microbial biomass carbon (MBC) and nitrogen (MBN), we sampled top mineral horizon soils (0-15 cm) using a spatially explicit design within two 15-m2 plots under each of three fertilization treatments in two bioenergy croplands in a three-year fertilization experiment in Middle Tennessee, USA. The three fertilization treatments were no N input (NN), low N input (LN: 84 kg N ha-1 as urea), and high N input (HN: 168 kg N ha-1 as urea). The two crops were switchgrass (SG: Panicum virgatum L.) and gamagrass (GG: Tripsacum dactyloides L.). N fertilization altered the central tendencies of the microbial variables little, but relative to LN, HN significantly increased MBC and MBC:MBN (GG only). HN plots showed the greatest within-plot variances, except for MBN (GG only). Spatial patterns were generally evident under HN and LN plots and much less so under NN plots. Substantially contrasting spatial variations were also identified between croplands (GG > SG) and among variables (MBN, MBC:MBN > MBC). No significant correlations were identified between soil pH and microbial variables. This study demonstrates that spatial heterogeneity of microbial biomass is elevated in fertilized soils, likely owing to uneven fertilizer application, the nature of soil microbial communities, and the bioenergy crops themselves. Future research should match sample sizes to the heterogeneity of the soil microbial property of interest (e.g., MBN) in bioenergy croplands.
Training feed-forward neural networks with gain constraints
Hartman
2000-04-01
Inaccurate input-output gains (partial derivatives of outputs with respect to inputs) are common in neural network models when input variables are correlated or when data are incomplete or inaccurate. Accurate gains are essential for optimization, control, and other purposes. We develop and explore a method for training feedforward neural networks subject to inequality or equality-bound constraints on the gains of the learned mapping. Gain constraints are implemented as penalty terms added to the objective function, and training is done using gradient descent. Adaptive and robust procedures are devised for balancing the relative strengths of the various terms in the objective function, which is essential when the constraints are inconsistent with the data. The approach has the virtue that the model domain of validity can be extended via extrapolation training, which can dramatically improve generalization. The algorithm is demonstrated here on artificial and real-world problems with very good results and has been advantageously applied to dozens of models currently in commercial use.
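The penalty-term mechanism can be illustrated in miniature with a scalar linear model y = w·x trained by gradient descent: the data prefer a gain of 2.0, but an upper bound of 1.5 is imposed through a quadratic penalty added to the objective. The data, penalty weight, and learning rate are toy choices of ours, not the paper's network or balancing procedure.

```python
# data whose true input-output gain (dy/dx) is 2.0
xs = [0.1 * i for i in range(-10, 11)]
ys = [2.0 * x for x in xs]

gain_max = 1.5   # inequality gain constraint: dy/dx = w <= 1.5
lam = 50.0       # penalty weight trading off data fit vs. constraint
w, lr = 0.0, 0.01

for _ in range(2000):
    # gradient of the MSE data term
    g = sum(2.0 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
    # gradient of the penalty term lam * max(0, w - gain_max)^2
    if w > gain_max:
        g += 2.0 * lam * (w - gain_max)
    w -= lr * g
# w settles just above the bound: the penalty holds the learned gain
# near 1.5 even though the data alone would drive it to 2.0
```

Because the penalty is soft, the constraint is satisfied only approximately; the paper's adaptive weighting addresses exactly this tension when the constraints conflict with the data.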
Joint statistics of strongly correlated neurons via dimensionality reduction
NASA Astrophysics Data System (ADS)
Deniz, Taşkın; Rotter, Stefan
2017-06-01
The relative timing of action potentials in neurons recorded from local cortical networks often shows a non-trivial dependence, which is then quantified by cross-correlation functions. Theoretical models emphasize that such spike train correlations are an inevitable consequence of two neurons being part of the same network and sharing some synaptic input. For non-linear neuron models, however, explicit correlation functions are difficult to compute analytically, and perturbative methods work only for weak shared input. In order to treat strong correlations, we suggest here an alternative non-perturbative method. Specifically, we study the case of two leaky integrate-and-fire neurons with strong shared input. Correlation functions derived from simulated spike trains fit our theoretical predictions very accurately. Using our method, we computed the non-linear correlation transfer as well as correlation functions that are asymmetric due to inhomogeneous intrinsic parameters or unequal input.
NASA Technical Reports Server (NTRS)
Cotariu, Steven S.
1991-01-01
Pattern recognition may supplement or replace certain navigational aids on spacecraft in docking or landing activities. The need to correctly identify terrain features remains critical in preparation of autonomous planetary landing. One technique that may solve this problem is optical correlation. Correlation has been successfully demonstrated under ideal conditions; however, noise significantly affects the ability of the correlator to accurately identify input signals. Optical correlation in the presence of noise must be successfully demonstrated before this technology can be incorporated into system design. An optical correlator is designed and constructed using a modified 2f configuration. Liquid crystal televisions (LCTV) are used as the spatial light modulators (SLM) for both the input and filter devices. The filter LCTV is characterized and an operating curve is developed. Determination of this operating curve is critical for reduction of input noise. Correlation of live input with a programmable filter is demonstrated.
Noise Suppression and Surplus Synchrony by Coincidence Detection
Schultze-Kraft, Matthias; Diesmann, Markus; Grün, Sonja; Helias, Moritz
2013-01-01
The functional significance of correlations between action potentials of neurons is still a matter of vivid debate. In particular, it is presently unclear how much synchrony is caused by afferent synchronized events and how much is intrinsic, due to the connectivity structure of cortex. The available analytical approaches based on the diffusion approximation do not allow spike synchrony to be modeled, preventing a thorough analysis. Here we theoretically investigate to what extent common synaptic afferents and synchronized inputs each contribute to correlated spiking on a fine temporal scale between pairs of neurons. We employ direct simulation and extend earlier analytical methods based on the diffusion approximation to pulse-coupling, allowing us to introduce precisely timed correlations in the spiking activity of the synaptic afferents. We investigate the transmission of correlated synaptic input currents by pairs of integrate-and-fire model neurons, so that the same input covariance can be realized by common inputs or by spiking synchrony. We identify two distinct regimes: in the limit of low correlation, linear perturbation theory accurately determines the correlation transmission coefficient, which is typically smaller than unity but increases sensitively even for weakly synchronous inputs. In the limit of high input correlation, in the presence of synchrony, a qualitatively new picture arises: as the non-linear neuronal response becomes dominant, the output correlation becomes higher than the total correlation in the input. This transmission coefficient larger than unity is a direct consequence of non-linear neural processing in the presence of noise, elucidating how synchrony-coded signals benefit from these generic properties present in cortical networks. PMID:23592953
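The common-input setup can be sketched by direct simulation: two leaky integrate-and-fire neurons driven by Gaussian input that shares a fraction c of its variance, with the output correlation measured on spike counts in disjoint windows. All parameters are illustrative, not those of the paper.

```python
import math
import random

def simulate_pair(c, steps=60000, dt=0.1, seed=7):
    """Two LIF neurons whose white-noise inputs share a fraction c of variance."""
    random.seed(seed)
    tau, vth, vreset = 10.0, 1.0, 0.0
    mu, sigma = 0.08, 0.5
    v = [0.0, 0.0]
    spikes = [[], []]
    for t in range(steps):
        common = random.gauss(0.0, 1.0)
        for i in range(2):
            private = random.gauss(0.0, 1.0)
            xi = math.sqrt(c) * common + math.sqrt(1.0 - c) * private
            v[i] += dt * (-v[i] / tau + mu) + sigma * math.sqrt(dt) * xi
            if v[i] >= vth:          # threshold crossing: spike and reset
                spikes[i].append(t)
                v[i] = vreset
    return spikes

def count_correlation(spikes, steps, win=500):
    """Pearson correlation of spike counts in disjoint time windows."""
    nbins = steps // win
    c0, c1 = [0] * nbins, [0] * nbins
    for t in spikes[0]:
        c0[t // win] += 1
    for t in spikes[1]:
        c1[t // win] += 1
    m0, m1 = sum(c0) / nbins, sum(c1) / nbins
    cov = sum((a - m0) * (b - m1) for a, b in zip(c0, c1))
    s0 = math.sqrt(sum((a - m0) ** 2 for a in c0))
    s1 = math.sqrt(sum((b - m1) ** 2 for b in c1))
    return cov / (s0 * s1)

steps = 60000
rho_shared = count_correlation(simulate_pair(0.9, steps), steps)
rho_indep = count_correlation(simulate_pair(0.0, steps), steps)
# strongly shared input yields a much higher output count correlation
```

The paper's point is that this input-output correlation transfer is non-linear and depends on whether the shared covariance arrives as common drive or as precisely synchronized pulses; the sketch only covers the common-drive case.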
NASA Technical Reports Server (NTRS)
Avni, R.; Carmi, U.; Grill, A.; Manory, R.; Grossman, E.
1984-01-01
The dissociation of chlorosilanes to silicon and its deposition on a solid substrate in a RF plasma of mixtures of argon and hydrogen were investigated as a function of the macrovariables of the plasma. The dissociation mechanism of chlorosilanes and HCl as well as the formation of Si in the plasma state were studied by sampling the plasma with a quadrupole mass spectrometer. Macrovariables such as pressure, net RF power input and locations in the plasma reactor strongly influence the kinetics of dissociation. The deposition process of microcrystalline silicon films and its chlorine contamination were correlated to the dissociation mechanism of chlorosilanes and HCl.
Production Function Geometry with "Knightian" Total Product
ERIC Educational Resources Information Center
Truett, Dale B.; Truett, Lila J.
2007-01-01
Authors of principles and price theory textbooks generally illustrate short-run production using a total product curve that displays first increasing and then diminishing marginal returns to employment of the variable input(s). Although it seems reasonable that a temporary range of increasing returns to variable inputs will likely occur as…
Brandt, Laura A.; Benscoter, Allison; Harvey, Rebecca G.; Speroterra, Carolina; Bucklin, David N.; Romañach, Stephanie; Watling, James I.; Mazzotti, Frank J.
2017-01-01
Climate envelope models are widely used to describe potential future distribution of species under different climate change scenarios. It is broadly recognized that there are both strengths and limitations to using climate envelope models and that outcomes are sensitive to initial assumptions, inputs, and modeling methods. Selection of predictor variables, a central step in modeling, is one of the areas where different techniques can yield varying results. Selection of climate variables to use as predictors is often done using statistical approaches that develop correlations between occurrences and climate data. These approaches have received criticism in that they rely on the statistical properties of the data rather than directly incorporating biological information about species responses to temperature and precipitation. We evaluated and compared models and prediction maps for 15 threatened or endangered species in Florida based on two variable selection techniques: expert opinion and a statistical method. We compared model performance between these two approaches for contemporary predictions, and the spatial correlation, spatial overlap and area predicted for contemporary and future climate predictions. In general, experts identified more variables as being important than the statistical method and there was low overlap in the variable sets (<40%) between the two methods. Despite these differences in variable sets (expert versus statistical), models had high performance metrics (>0.9 for area under the curve (AUC) and >0.7 for true skill statistic (TSS)). Spatial overlap, which compares the spatial configuration between maps constructed using the different variable selection techniques, was only moderate overall (about 60%), with a great deal of variability across species. Difference in spatial overlap was even greater under future climate projections, indicating additional divergence of model outputs from different variable selection techniques.
Our work is in agreement with other studies which have found that for broad-scale species distribution modeling, using statistical methods of variable selection is a useful first step, especially when there is a need to model a large number of species or expert knowledge of the species is limited. Expert input can then be used to refine models that seem unrealistic or for species that experts believe are particularly sensitive to change. It also emphasizes the importance of using multiple models to reduce uncertainty and improve map outputs for conservation planning. Where outputs overlap or show the same direction of change there is greater certainty in the predictions. Areas of disagreement can be used for learning by asking why the models do not agree, and may highlight areas where additional on-the-ground data collection could improve the models.
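Spatial overlap between two binary prediction maps can be quantified, for example, as the fraction of cells flagged suitable by either model on which both agree (an intersection-over-union style measure). The maps below are made up; the paper does not specify this exact formula.

```python
def spatial_overlap(map_a, map_b):
    """Fraction of cells predicted suitable by either model where both agree."""
    both = sum(1 for a, b in zip(map_a, map_b) if a and b)
    either = sum(1 for a, b in zip(map_a, map_b) if a or b)
    return both / either if either else 1.0

# toy 1/0 suitability maps from two variable-selection techniques
map_expert = [1, 1, 1, 0, 0, 1]
map_stats  = [1, 0, 1, 0, 1, 1]
overlap = spatial_overlap(map_expert, map_stats)   # 3 shared of 5 flagged
```

Areas where the maps disagree (here, cells 1 and 4) are exactly the locations the authors suggest using for learning and targeted data collection.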
Essential Role of the m2R-RGS6-IKACh Pathway in Controlling Intrinsic Heart Rate Variability
Posokhova, Ekaterina; Ng, David; Opel, Aaisha; Masuho, Ikuo; Tinker, Andrew; Biesecker, Leslie G.; Wickman, Kevin; Martemyanov, Kirill A.
2013-01-01
Normal heart function requires generation of a regular rhythm by sinoatrial pacemaker cells and the alteration of this spontaneous heart rate by autonomic input to match physiological demand. However, the molecular mechanisms that ensure consistent periodicity of cardiac contractions and fine-tuning of this process by the autonomic system are not completely understood. Here we examined the contribution of the m2R-IKACh intracellular signaling pathway, which mediates the negative chronotropic effect of parasympathetic stimulation, to the regulation of the cardiac pacemaking rhythm. Using isolated heart preparations and single-cell recordings, we show that the m2R-IKACh signaling pathway controls the excitability and firing pattern of sinoatrial cardiomyocytes and determines the variability of cardiac rhythm in a manner independent of the autonomic input. Ablation of the major regulator of this pathway, Rgs6, in mice results in irregular cardiac rhythmicity and increases susceptibility to atrial fibrillation. We further identify several human subjects with variants in the RGS6 gene and show that loss of function in RGS6 correlates with increased heart rate variability. These findings identify the essential role of the m2R-IKACh signaling pathway in the regulation of cardiac sinus rhythm and implicate RGS6 in arrhythmia pathogenesis. PMID:24204714
NASA Technical Reports Server (NTRS)
Shaposhnikov, Nikolai; Markwardt, Craig; Swank, Jean; Krimm, Hans
2010-01-01
We report on the discovery and monitoring observations of a new galactic black hole candidate, XTE J1752-223, by the Rossi X-ray Timing Explorer (RXTE). The new source appeared in the X-ray sky on October 21, 2009 and was active for almost 8 months. Phenomenologically, the source exhibited the low-hard/high-soft spectral state bimodality and the variability evolution during the state transition that match the standard behavior expected from a stellar-mass black hole binary. We model the energy spectrum throughout the outburst using a generic Comptonization model, assuming that part of the input soft radiation, in the form of a black body spectrum, gets reprocessed in the Comptonizing medium. We follow the evolution of fractional root-mean-square (RMS) variability in the RXTE/PCA energy band with the source spectral state and conclude that broad-band variability is strongly correlated with the source hardness (or Comptonized fraction). We follow changes in the energy distribution of RMS variability during the low-hard state and the state transition and find further evidence that the variable emission is strongly concentrated in the power-law spectral component. We discuss the implications of our results for the Comptonization regimes during different spectral states. Correlations of spectral and variability properties provide measurements of the BH mass and distance to the source. The spectral-timing correlation scaling technique applied to the RXTE observations during the hard-to-soft state transition indicates a BH mass in XTE J1752-223 between 8 and 11 solar masses and a distance to the source of about 3.5 kiloparsecs.
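Fractional RMS variability, used above to track the spectral state, is simply the standard deviation of the count rate divided by its mean. A sketch on a synthetic sinusoidally modulated light curve (illustrative values, not RXTE data):

```python
import math

def fractional_rms(rates):
    """Fractional RMS = standard deviation of the count rate / mean rate."""
    n = len(rates)
    m = sum(rates) / n
    var = sum((r - m) ** 2 for r in rates) / n
    return math.sqrt(var) / m

# synthetic light curve: mean 100 ct/s with a 30 ct/s sinusoidal modulation
rates = [100.0 + 30.0 * math.sin(2.0 * math.pi * t / 50.0) for t in range(100)]
fr = fractional_rms(rates)
# for a pure sinusoid, fractional RMS = (amplitude / sqrt(2)) / mean
```

In practice the RMS is computed from the integral of the power spectrum over a frequency band and corrected for counting noise, but the quantity being reported is the same mean-normalized variability amplitude.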
Deep Recurrent Neural Networks for Human Activity Recognition
Murad, Abdulmajid
2017-01-01
Adopting deep learning methods for human activity recognition has been effective in extracting discriminative features from raw input sequences acquired from body-worn sensors. Although human movements are encoded in a sequence of successive samples in time, typical machine learning methods perform recognition tasks without exploiting the temporal correlations between input data samples. Convolutional neural networks (CNNs) address this issue by using convolutions across a one-dimensional temporal sequence to capture dependencies among input data. However, the size of convolutional kernels restricts the captured range of dependencies between data samples. As a result, typical models are unadaptable to a wide range of activity-recognition configurations and require fixed-length input windows. In this paper, we propose the use of deep recurrent neural networks (DRNNs) for building recognition models that are capable of capturing long-range dependencies in variable-length input sequences. We present unidirectional, bidirectional, and cascaded architectures based on long short-term memory (LSTM) DRNNs and evaluate their effectiveness on various benchmark datasets. Experimental results show that our proposed models outperform methods employing conventional machine learning, such as support vector machine (SVM) and k-nearest neighbors (KNN). Additionally, the proposed models yield better performance than other deep learning techniques, such as deep belief networks (DBNs) and CNNs. PMID:29113103
Characterizing regional soil mineral composition using spectroscopy and geostatistics
Mulder, V.L.; de Bruin, S.; Weyermann, J.; Kokaly, Raymond F.; Schaepman, M.E.
2013-01-01
This work aims at improving the mapping of major mineral variability at regional scale using scale-dependent spatial variability observed in remote sensing data. Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) data and statistical methods were combined with laboratory-based mineral characterization of field samples to create maps of the distributions of clay, mica and carbonate minerals and their abundances. The Material Identification and Characterization Algorithm (MICA) was used to identify the spectrally-dominant minerals in field samples; these results were combined with ASTER data using multinomial logistic regression to map mineral distributions. X-ray diffraction (XRD) was used to quantify mineral composition in field samples. XRD results were combined with ASTER data using multiple linear regression to map mineral abundances. We tested whether smoothing of the ASTER data to match the scale of variability of the target sample would improve model correlations. Smoothing was done with Fixed Rank Kriging (FRK) to represent the medium- and long-range spatial variability in the ASTER data. Stronger correlations resulted using the smoothed data compared to results obtained with the original data. Highest model accuracies came from using both medium- and long-range scaled ASTER data as input to the statistical models. High correlation coefficients were obtained for the abundances of calcite and mica (R2 = 0.71 and 0.70, respectively). Moderately-high correlation coefficients were found for smectite and kaolinite (R2 = 0.57 and 0.45, respectively). Maps of mineral distributions, obtained by relating ASTER data to MICA analysis of field samples, were found to characterize major soil mineral variability (overall accuracies for mica, smectite and kaolinite were 76%, 89% and 86%, respectively).
The results of this study suggest that the distributions of minerals and their abundances derived using FRK-smoothed ASTER data more closely match the spatial variability of soil and environmental properties at regional scale.
NASA Astrophysics Data System (ADS)
Beem-Miller, Jeffrey; Lehmann, Johannes
2017-04-01
The majority of the world's soil organic carbon (OC) stock is stored below 30 cm in depth, yet sampling for soil OC assessment rarely goes below 30 cm. Recent studies suggest that subsoil OC is distinct from topsoil OC in quantity and quality: subsoil OC concentrations are typically much lower and turnover times are much longer, but the mechanisms involved in retention and input of OC to the subsoil are not well understood. Improving our understanding of subsoil OC is essential for balancing the global carbon budget and confronting the challenge of global climate change. This study was undertaken to assess the relationship between OC stock and potential drivers of OC dynamics, including both soil properties and environmental covariates, in topsoil (0 to 30 cm) versus subsoil (30 to 75 cm). The performance of commonly used depth functions in predicting OC stock from 0 to 75 cm was also assessed. Depth functions are a useful tool for extrapolating OC stock below the depth of sampling, but may poorly model "hot spots" of OC accumulation, and be inadequate for modelling the distinct dynamics of topsoil and subsoil OC when applied with a single functional form. We collected two hundred soil cores on an arable Mollisol, sectioned into five depth increments (0-10, 10-20, 20-30, 30-50, and 50-75 cm), and performed the following analyses on each depth increment: concentration of OC, inorganic C, permanganate oxidizable carbon (POXC), and total N, as well as texture, pH, and bulk density; a digital elevation model was used to calculate elevation, slope, curvature, and soil topographic wetness index. We found that topsoil OC stocks were significantly correlated (p < 0.05) with terrain variables, texture, and pH, while subsoil OC stock was only significantly correlated with topsoil OC stock and soil pH. Total OC stock was highly spatially variable, and the relationship between surface soil properties, terrain variables, and subsoil OC stock was spatially variable as well. 
Hot spots of subsoil OC accumulation were correlated with higher pH (> 7.0), flat topography, a high OC to total N ratio, and a high ratio of POXC to OC. These findings suggest that at this site, topsoil OC stock is input driven, while OC accumulation in the subsoil is retention dominated. Accordingly, a new depth function is proposed that uses a linear relationship to model OC stock in topsoil and a power function to model OC stock in the subsoil. The combined depth function performed better than did negative exponential, power, and linear functions alone.
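The subsoil part of the proposed depth function, a power law, can be fitted by ordinary least squares in log-log space. The profile below is synthetic, constructed to follow stock = a · depth^b exactly, so the fit recovers the parameters; the real data would of course carry noise.

```python
import math

def fit_power(depths, stocks):
    """Least-squares fit of stock = a * depth**b in log-log space."""
    lx = [math.log(d) for d in depths]
    ly = [math.log(s) for s in stocks]
    n = len(lx)
    mx, my = sum(lx) / n, sum(ly) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(lx, ly))
         / sum((x - mx) ** 2 for x in lx))
    a = math.exp(my - b * mx)
    return a, b

# hypothetical subsoil profile following stock = 120 * depth^-0.8 exactly
depths = [35.0, 45.0, 55.0, 65.0, 75.0]   # cm, below the 30 cm topsoil
stocks = [120.0 * d ** -0.8 for d in depths]
a, b = fit_power(depths, stocks)
```

A combined depth function along the lines proposed above would use a separate linear fit for the 0-30 cm increments and switch to this power law below 30 cm.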
Tritium Records to Trace Stratospheric Moisture Inputs in Antarctica
NASA Astrophysics Data System (ADS)
Fourré, E.; Landais, A.; Cauquoin, A.; Jean-Baptiste, P.; Lipenkov, V.; Petit, J.-R.
2018-03-01
Better assessing the dynamics of stratosphere-troposphere exchange is key to improving our understanding of climate dynamics on the East Antarctic Plateau, a region where stratospheric inputs are expected to be important. Although tritium (3H or T), a nuclide naturally produced mainly in the stratosphere and rapidly entering the water cycle as HTO, seems a first-rate tracer for studying these processes, tritium data are very sparse in this region. We present the first high-resolution measurements of tritium concentration over the last 50 years in three snow pits drilled at Vostok station. Natural variability of the tritium records reveals two prominent frequencies, one at about 10 years (related to the solar Schwabe cycles) and one at a shorter periodicity: despite dating uncertainty at this short scale, a good correlation is observed between 3H and Na+, and an anticorrelation between 3H and δ18O, measured in an individual pit. Outputs from the LMDZ Atmospheric General Circulation Model, including stable water isotopes and tritium, show the same 3H-δ18O anticorrelation and allow further investigation of the associated mechanism. At the interannual scale, the modeled 3H variability matches the Southern Annular Mode index well. At the seasonal scale, we show that modeled stratospheric tritium inputs to the troposphere are favored under cold, dry winter conditions.
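A decadal (Schwabe-like) periodicity can be pulled out of an annual record with a plain discrete Fourier transform. The 50-year series below is synthetic with a built-in 10-year cycle; it illustrates the method only and is not the Vostok data.

```python
import math

def dominant_period(x):
    """Naive DFT: return the period (in samples) with the largest power, k >= 1."""
    n = len(x)
    mean = sum(x) / n
    xc = [v - mean for v in x]          # remove the mean (k = 0 component)
    best_k, best_p = 1, -1.0
    for k in range(1, n // 2 + 1):
        re = sum(xc[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
        im = sum(xc[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
        p = re * re + im * im
        if p > best_p:
            best_k, best_p = k, p
    return n / best_k

years = 50
series = [1.0 + 0.5 * math.sin(2 * math.pi * t / 10.0) for t in range(years)]
period = dominant_period(series)        # recovers the 10-year cycle
```

With real, irregularly dated pit records one would instead use a method tolerant of uneven sampling (e.g., a Lomb-Scargle periodogram), but the spectral-peak logic is the same.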
Nonequilibrium air radiation (Nequair) program: User's manual
NASA Technical Reports Server (NTRS)
Park, C.
1985-01-01
A supplement to the data relating to the calculation of nonequilibrium radiation in flight regimes of aeroassisted orbital transfer vehicles contains the listings of the computer code NEQAIR (Nonequilibrium Air Radiation), its primary input data, and explanation of the user-supplied input variables. The user-supplied input variables are the thermodynamic variables of air at a given point, i.e., number densities of various chemical species, translational temperatures of heavy particles and electrons, and vibrational temperature. These thermodynamic variables do not necessarily have to be in thermodynamic equilibrium. The code calculates emission and absorption characteristics of air under these given conditions.
Akimoto, Yuki; Yugi, Katsuyuki; Uda, Shinsuke; Kudo, Takamasa; Komori, Yasunori; Kubota, Hiroyuki; Kuroda, Shinya
2013-01-01
Cells use common signaling molecules for the selective control of downstream gene expression and cell-fate decisions. The relationship between signaling molecules and downstream gene expression and cellular phenotypes is a multiple-input and multiple-output (MIMO) system and is difficult to understand due to its complexity. For example, it has been reported that, in PC12 cells, different types of growth factors activate MAP kinases (MAPKs) including ERK, JNK, and p38, and CREB, for selective protein expression of immediate early genes (IEGs) such as c-FOS, c-JUN, EGR1, JUNB, and FOSB, leading to cell differentiation, proliferation and cell death; however, how multiple-inputs such as MAPKs and CREB regulate multiple-outputs such as expression of the IEGs and cellular phenotypes remains unclear. To address this issue, we employed a statistical method called partial least squares (PLS) regression, which involves a reduction of the dimensionality of the inputs and outputs into latent variables and a linear regression between these latent variables. We measured 1,200 data points for MAPKs and CREB as the inputs and 1,900 data points for IEGs and cellular phenotypes as the outputs, and we constructed the PLS model from these data. The PLS model highlighted the complexity of the MIMO system and growth factor-specific input-output relationships of cell-fate decisions in PC12 cells. Furthermore, to reduce the complexity, we applied a backward elimination method to the PLS regression, in which 60 input variables were reduced to 5 variables, including the phosphorylation of ERK at 10 min, CREB at 5 min and 60 min, AKT at 5 min and JNK at 30 min. The simple PLS model with only 5 input variables demonstrated a predictive ability comparable to that of the full PLS model. The 5 input variables effectively extracted the growth factor-specific simple relationships within the MIMO system in cell-fate decisions in PC12 cells.
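The latent-variable idea behind PLS can be shown with a one-component PLS1 fit in plain Python: project the centered inputs onto the weight vector w ∝ Xᵀy (the direction of maximal input-output covariance) and regress y on the resulting score. The data are synthetic, with two deliberately correlated inputs; this is a sketch of the statistical technique, not the paper's 60-variable model.

```python
import math
import random

def pls1_fit(X, y):
    """One-component PLS1: w ∝ Xc^T yc, score t = Xc w, then yc ≈ b * t."""
    n, p = len(X), len(X[0])
    xm = [sum(r[j] for r in X) / n for j in range(p)]
    ym = sum(y) / n
    Xc = [[r[j] - xm[j] for j in range(p)] for r in X]
    yc = [v - ym for v in y]
    w = [sum(Xc[i][j] * yc[i] for i in range(n)) for j in range(p)]
    norm = math.sqrt(sum(v * v for v in w))
    w = [v / norm for v in w]
    t = [sum(Xc[i][j] * w[j] for j in range(p)) for i in range(n)]
    b = sum(ti * yi for ti, yi in zip(t, yc)) / sum(ti * ti for ti in t)
    return xm, ym, w, b

def pls1_predict(model, row):
    xm, ym, w, b = model
    return ym + b * sum((xi - mi) * wi for xi, mi, wi in zip(row, xm, w))

random.seed(5)
X, y = [], []
for _ in range(200):
    u = random.gauss(0, 1)
    x1 = u
    x2 = u + 0.1 * random.gauss(0, 1)     # strongly correlated with x1
    X.append([x1, x2])
    y.append(x1 + x2 + 0.05 * random.gauss(0, 1))

model = pls1_fit(X, y)
preds = [pls1_predict(model, row) for row in X]
ss_res = sum((p - v) ** 2 for p, v in zip(preds, y))
ss_tot = sum((v - sum(y) / len(y)) ** 2 for v in y)
r2 = 1.0 - ss_res / ss_tot
# one latent variable captures almost all of the output variance here
```

This is why PLS suits MIMO signaling data: correlated inputs (here x1 and x2, as with co-activated kinases) collapse onto a single latent direction rather than destabilizing the regression as they would in ordinary least squares.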
How well does the Post-fire Erosion Risk Management Tool (ERMiT) really work?
NASA Astrophysics Data System (ADS)
Robichaud, Peter; Elliot, William; Lewis, Sarah; Miller, Mary Ellen
2016-04-01
The decision of where, when, and how to apply the most effective post-fire erosion mitigation treatments requires land managers to assess the risk of damaging runoff and erosion events occurring after a fire. The Erosion Risk Management Tool (ERMiT) was developed to help post-fire assessment teams identify high erosion risk areas and evaluate the effectiveness of various mitigation treatments in reducing that risk. ERMiT is a web-based application that uses Water Erosion Prediction Project (WEPP) technology to estimate erosion, in probabilistic terms, on burned and recovering forest, range, and chaparral lands with and without the application of mitigation treatments. User inputs are processed by ERMiT to combine rain event variability with spatial and temporal variability of hillslope burn severity and soil properties, which are then used as WEPP inputs. Since 2007, the model has informed hundreds of land management decisions in the US and elsewhere. We use eight published field study sites in the western US to compare ERMiT predictions with observed hillslope erosion rates. Most sites experienced only a few rainfall events that produced runoff and sediment, except for a California site with a Mediterranean climate. When hillslope erosion occurred, significant correlations were found between observed hillslope erosion and ERMiT predictions. Significant correlations also occurred for most mitigation treatments and across the five recovery years. These model validation results suggest that ERMiT provides reasonable probabilistic estimates of post-fire hillslope sediment delivery.
NASA Astrophysics Data System (ADS)
Frossard, Emmanuel; Buchmann, Nina; Bünemann, Else K.; Kiba, Delwende I.; Lompo, François; Oberson, Astrid; Tamburini, Federica; Traoré, Ouakoltio Y. A.
2016-02-01
Stoichiometric approaches have been applied to understand the relationship between soil organic matter dynamics and biological nutrient transformations. However, very few studies have explicitly considered the effects of agricultural management practices on the soil C : N : P ratio. The aim of this study was to assess how different input types and rates would affect the C : N : P molar ratios of bulk soil, organic matter and microbial biomass in cropped soils in the long term. Thus, we analysed the C, N, and P inputs and budgets as well as soil properties in three long-term experiments established on different soil types: the Saria soil fertility trial (Burkina Faso), the Wagga Wagga rotation/stubble management/soil preparation trial (Australia), and the DOK (bio-Dynamic, bio-Organic, and "Konventionell") cropping system trial (Switzerland). In each of these trials, there was a large range of C, N, and P inputs which had a strong impact on element concentrations in soils. However, although C : N : P ratios of the inputs were highly variable, they had only weak effects on soil C : N : P ratios. At Saria, a positive correlation was found between the N : P ratio of inputs and microbial biomass, while no relation was observed between the nutrient ratios of inputs and soil organic matter. At Wagga Wagga, the C : P ratio of inputs was significantly correlated to total soil C : P, N : P, and C : N ratios, but had no impact on the elemental composition of microbial biomass. In the DOK trial, a positive correlation was found between the C budget and the C to organic P ratio in soils, while the nutrient ratios of inputs were not related to those in the microbial biomass. We argue that these responses are due to differences in soil properties among sites. At Saria, the soil is dominated by quartz and some kaolinite, has a coarse texture, a fragile structure, and a low nutrient content. Thus, microorganisms feed on inputs (plant residues, manure). 
In contrast, the soil at Wagga Wagga contains illite and haematite, is richer in clay and nutrients, and has a stable structure. Thus, organic matter is protected from mineralization and can therefore accumulate, allowing microorganisms to feed on soil nutrients and to keep a constant C : N : P ratio. The DOK soil represents an intermediate situation, with high nutrient concentrations, but a rather fragile soil structure, where organic matter does not accumulate. We conclude that the study of C, N, and P ratios is important to understand the functioning of cropped soils in the long term, but that it must be coupled with a precise assessment of element inputs and budgets in the system and a good understanding of the ability of soils to stabilize C, N, and P compounds.
2017-01-01
While a topographic map of auditory space exists in the vertebrate midbrain, it is absent in the forebrain. Yet, both brain regions are implicated in sound localization. The heterogeneous spatial tuning of adjacent sites in the forebrain compared to the midbrain reflects different underlying circuitries, which is expected to affect the correlation structure, i.e., signal (similarity of tuning) and noise (trial-by-trial variability) correlations. Recent studies have drawn attention to the impact of response correlations on the information readout from a neural population. We thus analyzed the correlation structure in midbrain and forebrain regions of the barn owl's auditory system. Tetrodes were used to record in the midbrain and two forebrain regions, Field L and the downstream auditory arcopallium (AAr), in anesthetized owls. Nearby neurons in the midbrain showed high signal and noise correlations (RNCs), consistent with shared inputs. As previously reported, Field L was arranged in random clusters of similarly tuned neurons. Interestingly, AAr neurons displayed homogeneous monotonic azimuth tuning, while the response variability of nearby neurons was significantly less correlated than in the midbrain. Using a decoding approach, we demonstrate that the low RNC in AAr limits its potentially detrimental effect on information, assuming the rate code proposed for mammalian sound localization. This study harnesses the power of correlation structure analysis to investigate the coding of auditory space. Our findings demonstrate distinct correlation structures in the auditory midbrain and forebrain, which would be beneficial for a rate-code framework for sound localization in the nontopographic forebrain representation of auditory space. PMID:28674698
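The signal and noise correlations analyzed above can be estimated directly from trial-by-trial responses. A minimal sketch (synthetic monotonic tuning curves and a shared noise source standing in for the owl recordings; all numbers are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical monotonic azimuth tuning for two nearby neurons, plus a
# shared trial-by-trial noise source (illustrative, not the owl data).
azimuths = np.linspace(-90.0, 90.0, 13)
tuning_a = 20.0 + 0.10 * azimuths
tuning_b = 18.0 + 0.08 * azimuths
n_trials = 200
shared = rng.normal(size=(n_trials, azimuths.size))     # common noise source
resp_a = tuning_a + 2.0 * shared + rng.normal(size=shared.shape)
resp_b = tuning_b + 2.0 * shared + rng.normal(size=shared.shape)

# Signal correlation: similarity of trial-averaged tuning across stimuli.
sig_corr = np.corrcoef(resp_a.mean(axis=0), resp_b.mean(axis=0))[0, 1]

# Noise correlation: correlation of trial-to-trial fluctuations about the
# mean response, pooled across stimuli after subtracting each stimulus mean.
dev_a = (resp_a - resp_a.mean(axis=0)).ravel()
dev_b = (resp_b - resp_b.mean(axis=0)).ravel()
noise_corr = np.corrcoef(dev_a, dev_b)[0, 1]
```

Subtracting the per-stimulus mean before pooling is what separates the two quantities: without it, stimulus-driven covariation (signal) would leak into the noise-correlation estimate.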
Keijsers, Joep G. S.; Poortinga, Ate; Riksen, Michel J. P. M.; Maroulis, Jerry
2014-01-01
Depending on the amount of aeolian sediment input and dune erosion, dune size and morphology change over time. Since coastal foredunes play an important role in the Dutch coastal defence, it is important to have good insight in the main factors that control these changes. In this paper the temporal variations in foredune erosion and accretion were studied in relation to proxies for aeolian transport potential and storminess using yearly elevation measurements from 1965 to 2012 for six sections of the Dutch coast. Longshore differences in the relative impacts of erosion and accretion were examined in relation to local beach width. The results show that temporal variability in foredune accretion and erosion is highest in narrow beach sections. Here, dune erosion alternates with accretion, with variability displaying strong correlations with yearly values of storminess (maximum sea levels). In wider beach sections, dune erosion is less frequent, with lower temporal variability and stronger correlations with time series of transport potential. In erosion dominated years, eroded volumes decrease from narrow to wider beaches. When accretion dominates, dune-volume changes are relatively constant alongshore. Dune erosion is therefore suggested to control spatial variability in dune-volume changes. On a scale of decades, the volume of foredunes tends to increase more on wider beaches. However, where widths exceed 200 to 300 m, this trend is no longer observed. PMID:24603812
Significance of Input Correlations in Striatal Function
Yim, Man Yi; Aertsen, Ad; Kumar, Arvind
2011-01-01
The striatum is the main input station of the basal ganglia and is strongly associated with motor and cognitive functions. Anatomical evidence suggests that individual striatal neurons are unlikely to share their inputs from the cortex. Using a biologically realistic large-scale network model of striatum and cortico-striatal projections, we provide a functional interpretation of the special anatomical structure of these projections. Specifically, we show that weak pairwise correlation within the pool of inputs to individual striatal neurons enhances the saliency of signal representation in the striatum. By contrast, correlations among the input pools of different striatal neurons render the signal representation less distinct from background activity. We suggest that for the network architecture of the striatum, there is a preferred cortico-striatal input configuration for optimal signal representation. It is further enhanced by the low-rate asynchronous background activity in striatum, supported by the balance between feedforward and feedback inhibitions in the striatal network. Thus, an appropriate combination of rates and correlations in the striatal input sets the stage for action selection presumably implemented in the basal ganglia. PMID:22125480
Ortega Cisneros, Kelly; Smit, Albertus J.; Laudien, Jürgen; Schoeman, David S.
2011-01-01
Sandy beach ecological theory states that physical features of the beach control macrobenthic community structure on all but the most dissipative beaches. However, few studies have simultaneously evaluated the relative importance of physical, chemical and biological factors as potential explanatory variables for meso-scale spatio-temporal patterns of intertidal community structure in these systems. Here, we investigate macroinfaunal community structure of a micro-tidal sandy beach that is located on an oligotrophic subtropical coast and is influenced by seasonal estuarine input. We repeatedly sampled biological and environmental variables at a series of beach transects arranged at increasing distances from the estuary mouth. Sampling took place over a period of five months, corresponding with the transition between the dry and wet season. This allowed assessment of biological-physical relationships across chemical and nutritional gradients associated with a range of estuarine inputs. Physical, chemical, and biological response variables, as well as measures of community structure, showed significant spatio-temporal patterns. In general, bivariate relationships between biological and environmental variables were rare and weak. However, multivariate correlation approaches identified a variety of environmental variables (i.e., sampling session, the C∶N ratio of particulate organic matter, dissolved inorganic nutrient concentrations, various size fractions of photopigment concentrations, salinity and, to a lesser extent, beach width and sediment kurtosis) that either alone or combined provided significant explanatory power for spatio-temporal patterns of macroinfaunal community structure. Overall, these results showed that the macrobenthic community on Mtunzini Beach was not structured primarily by physical factors, but instead by a complex and dynamic blend of nutritional, chemical and physical drivers. 
This emphasises the need to recognise ocean-exposed sandy beaches as functional ecosystems in their own right. PMID:21858213
Landscape structure and climate influences on hydrologic response
NASA Astrophysics Data System (ADS)
Nippgen, Fabian; McGlynn, Brian L.; Marshall, Lucy A.; Emanuel, Ryan E.
2011-12-01
Climate variability and catchment structure (topography, geology, vegetation) have a significant influence on the timing and quantity of water discharged from mountainous catchments. How these factors combine to influence runoff dynamics is poorly understood. In this study we linked differences in hydrologic response across catchments and across years to metrics of landscape structure and climate using a simple transfer function rainfall-runoff modeling approach. A transfer function represents the internal catchment properties that convert a measured input (rainfall/snowmelt) into an output (streamflow). We examined modeled mean response time, defined as the average time that it takes for a water input to leave the catchment outlet from the moment it reaches the ground surface. We combined 12 years of precipitation and streamflow data from seven catchments in the Tenderfoot Creek Experimental Forest (Little Belt Mountains, southwestern Montana) with landscape analyses to quantify the first-order controls on mean response times. Differences between responses across the seven catchments were related to the spatial variability in catchment structure (e.g., slope, flowpath lengths, tree height). Annual variability was largely a function of maximum snow water equivalent. Catchment averaged runoff ratios exhibited strong correlations with mean response time while annually averaged runoff ratios were not related to climatic metrics. These results suggest that runoff ratios in snowmelt dominated systems are mainly controlled by topography and not by climatic variability. This approach provides a simple tool for assessing differences in hydrologic response across diverse watersheds and climate conditions.
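The mean response time examined here is the first moment of the transfer function's impulse response. A sketch under a simple linear-reservoir assumption (the exponential kernel and the τ value are illustrative, not the paper's calibrated transfer function):

```python
import numpy as np

def exponential_tf(tau, n=1000, dt=1.0):
    """Impulse response of a linear-reservoir transfer function (hypothetical form)."""
    t = np.arange(n) * dt
    h = np.exp(-t / tau)
    return h / h.sum()                      # normalize: all input eventually leaves

def mean_response_time(h, dt=1.0):
    """First moment of the impulse response: average time for an input to exit."""
    t = np.arange(len(h)) * dt
    return float((t * h).sum())

def simulate_streamflow(rain, h):
    """Convert a measured input (rainfall/snowmelt) into an output (streamflow)."""
    return np.convolve(rain, h)[: len(rain)]

h = exponential_tf(tau=20.0)                # tau in days, illustrative value
mrt = mean_response_time(h)                 # close to tau for this kernel
q = simulate_streamflow(np.ones(365), h)
```

Fitting the kernel parameters to observed rainfall-runoff pairs, catchment by catchment and year by year, is what lets response times be compared against landscape and climate metrics.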
Stochastic Earthquake Rupture Modeling Using Nonparametric Co-Regionalization
NASA Astrophysics Data System (ADS)
Lee, Kyungbook; Song, Seok Goo
2017-09-01
Accurate predictions of the intensity and variability of ground motions are essential in simulation-based seismic hazard assessment. Advanced simulation-based ground motion prediction methods have been proposed to complement the empirical approach, which suffers from the lack of observed ground motion data, especially in the near-source region for large events. It is important to quantify the variability of the earthquake rupture process for future events and to produce a number of rupture scenario models to capture the variability in simulation-based ground motion predictions. In this study, we improved the previously developed stochastic earthquake rupture modeling method by applying the nonparametric co-regionalization, which was proposed in geostatistics, to the correlation models estimated from dynamically derived earthquake rupture models. The nonparametric approach adopted in this study is computationally efficient and, therefore, enables us to simulate numerous rupture scenarios, including large events ( M > 7.0). It also gives us an opportunity to check the shape of true input correlation models in stochastic modeling after being deformed for permissibility. We expect that this type of modeling will improve our ability to simulate a wide range of rupture scenario models and thereby predict ground motions and perform seismic hazard assessment more accurately.
Influential input classification in probabilistic multimedia models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Maddalena, Randy L.; McKone, Thomas E.; Hsieh, Dennis P.H.
1999-05-01
Monte Carlo analysis is a statistical simulation method that is often used to assess and quantify the outcome variance in complex environmental fate and effects models. Total outcome variance of these models is a function of (1) the uncertainty and/or variability associated with each model input and (2) the sensitivity of the model outcome to changes in the inputs. To propagate variance through a model using Monte Carlo techniques, each variable must be assigned a probability distribution. The validity of these distributions directly influences the accuracy and reliability of the model outcome. To efficiently allocate resources for constructing distributions, one should first identify the most influential set of variables in the model. Although existing sensitivity and uncertainty analysis methods can provide a relative ranking of the importance of model inputs, they fail to identify the minimum set of stochastic inputs necessary to sufficiently characterize the outcome variance. In this paper, we describe and demonstrate a novel sensitivity/uncertainty analysis method for assessing the importance of each variable in a multimedia environmental fate model. Our analyses show that for a given scenario, a relatively small number of input variables influence the central tendency of the model and an even smaller set determines the shape of the outcome distribution. For each input, the level of influence depends on the scenario under consideration. This information is useful for developing site specific models and improving our understanding of the processes that have the greatest influence on the variance in outcomes from multimedia models.
Peer Educators and Close Friends as Predictors of Male College Students' Willingness to Prevent Rape
ERIC Educational Resources Information Center
Stein, Jerrold L.
2007-01-01
Astin's (1977, 1991, 1993) input-environment-outcome (I-E-O) model provided a conceptual framework for this study which measured 156 male college students' willingness to prevent rape (outcome variable). Predictor variables included personal attitudes (input variable), perceptions of close friends' attitudes toward rape and rape prevention…
The Effects of a Change in the Variability of Irrigation Water
NASA Astrophysics Data System (ADS)
Lyon, Kenneth S.
1983-10-01
This paper examines the short-run effects upon several variables of an increase in the variability of an input. The measure of an increase in the variability is the "mean preserving spread" suggested by Rothschild and Stiglitz (1970). The variables examined are real income (utility), expected profits, expected output, the quantity used of the controllable input, and the shadow price of the stochastic input. Four striking features of the results follow: (1) The concepts that have been useful in summarizing deterministic comparative static results are nearly absent when an input is stochastic. (2) Most of the signs of the partial derivatives depend upon more than concavity of the utility and production functions. (3) If the utility function is not "too" risk averse, then the risk-neutral results hold for the risk-aversion case. (4) If the production function is Cobb-Douglas, then definite results are achieved if the utility function is linear or if the "degree of risk-aversion" is "small."
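The "mean preserving spread" of Rothschild and Stiglitz can be illustrated numerically: widen an input distribution without moving its mean and watch expected output fall under a concave production function. A toy check (the distributions and the exponent are hypothetical):

```python
import random
import statistics

random.seed(3)

def output(x):
    """Concave (diminishing-returns) production in the stochastic input x."""
    return x ** 0.5

# Two input distributions with the same mean of 5; the wider uniform is a
# mean preserving spread of the narrower one.
narrow = [random.uniform(4.0, 6.0) for _ in range(100_000)]
spread = [random.uniform(2.0, 8.0) for _ in range(100_000)]

mean_narrow = statistics.mean(map(output, narrow))
mean_spread = statistics.mean(map(output, spread))
# By Jensen's inequality, the mean preserving spread lowers expected output
# even though the expected input is unchanged.
```

This is the one-line intuition behind result (2) in the abstract: signs depend on curvature interacting with the spread, not on concavity alone.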
Nanoscale interplay of strain and doping in a high-temperature superconductor
Zeljkovic, Ilija; Gu, Genda; Nieminen, Jouko; ...
2014-11-07
The highest temperature superconductors are electronically inhomogeneous at the nanoscale, suggesting the existence of a local variable which could be harnessed to enhance the superconducting pairing. Here we report the relationship between local doping and local strain in the cuprate superconductor Bi₂Sr₂CaCu₂O₈₊ₓ. We use scanning tunneling microscopy to discover that the crucial oxygen dopants are periodically distributed, in correlation with local strain. Our picoscale investigation of the intra-unit-cell positions of all oxygen dopants provides essential structural input for a complete microscopic theory.
Ryberg, Karen R.; Blomquist, Joel; Sprague, Lori A.; Sekellick, Andrew J.; Keisman, Jennifer
2018-01-01
Causal attribution of changes in water quality often consists of correlation, qualitative reasoning, listing references to the work of others, or speculation. To better support statements of attribution for water-quality trends, structural equation modeling was used to model the causal factors of total phosphorus loads in the Chesapeake Bay watershed. By transforming, scaling, and standardizing variables, grouping similar sites, grouping some causal factors into latent variable models, and using methods that correct for assumption violations, we developed a structural equation model to show how causal factors interact to produce total phosphorus loads. Climate (in the form of annual total precipitation and the Palmer Hydrologic Drought Index) and anthropogenic inputs are the major drivers of total phosphorus load in the Chesapeake Bay watershed. Increasing runoff due to natural climate variability is offsetting purposeful management actions that are otherwise decreasing phosphorus loading; consequently, management actions may need to be reexamined to achieve target reductions in the face of climate variability.
Analysis of Sediment Transport for Rivers in South Korea based on Data Mining technique
NASA Astrophysics Data System (ADS)
Jang, Eun-kyung; Ji, Un; Yeo, Woonkwang
2017-04-01
The purpose of this study is to assess sediment discharge for rivers in South Korea using data mining. The Model Tree was selected because, among data mining techniques, it is the most suitable for explicitly analyzing the relationship between input and output variables across large and diverse databases. To derive a sediment discharge equation with the Model Tree, the dimensionless variables used in the Engelund and Hansen, Ackers and White, Brownlie, and van Rijn equations were adopted as analytical conditions. In total, 14 analytical conditions were set, covering dimensional variables as well as combinations of dimensionless and dimensional variables, according to the relationship between flow and sediment transport. For each case, the results were evaluated by means of the mean discrepancy ratio, root mean square error, mean absolute percent error, and correlation coefficient. The best fit was obtained using five dimensional variables: velocity, depth, slope, width, and median diameter. The closest approximation to this best goodness of fit was estimated from the depth, slope, width, median grain size of the bed material, and dimensionless tractive force, and from the single-variable conditions, except for the slope. Finally, when the three most appropriate Model Tree variants are compared with the Ackers and White equation, the best performing of the existing equations, both the mean discrepancy ratio and the correlation coefficient of the Model Tree are improved over those of the Ackers and White equation.
Daily Rainfall Simulation Using Climate Variables and Nonhomogeneous Hidden Markov Model
NASA Astrophysics Data System (ADS)
Jung, J.; Kim, H. S.; Joo, H. J.; Han, D.
2017-12-01
The Markov chain is an easy method to handle for rainfall simulation compared with other approaches. However, it has limitations in reflecting the seasonal variability of rainfall or changes in rainfall patterns caused by climate change. This study applied a Nonhomogeneous Hidden Markov Model (NHMM) to address these problems, and compared the NHMM with a Hidden Markov Model (HMM) to evaluate goodness of fit. First, we chose the Gum River basin in Korea as the study area and collected daily rainfall data from its stations. The climate variables of geopotential height, temperature, zonal wind, and meridional wind were collected from NCEP/NCAR reanalysis data to account for external factors affecting rainfall events. We conducted a correlation analysis between rainfall and the climate variables, then developed a linear regression equation using the climate variables most highly correlated with rainfall. The monthly rainfall obtained from the regression equation became the input data of the NHMM. Finally, daily rainfall was simulated with the NHMM, and we evaluated its goodness of fit and predictive capability by comparison with the HMM. For the HMM, the correlation coefficient was 0.2076 and the root mean square errors of daily and monthly rainfall were 10.8243 mm and 131.1304 mm, respectively. For the NHMM, the correlation coefficient was 0.6652 and the root mean square errors of daily and monthly rainfall were 10.5112 mm and 100.9865 mm, respectively. The errors of the daily and monthly rainfall simulated by the NHMM were thus improved by 2.89% and 22.99%, respectively, compared with the HMM. Therefore, the results of the study are expected to provide more accurate data for hydrologic analysis. Acknowledgements: This research was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF), funded by the Ministry of Science, ICT & Future Planning (2017R1A2B3005695).
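The baseline that the NHMM extends, a plain first-order Markov chain for daily rainfall occurrence, can be sketched in a few lines (the transition probabilities are hypothetical, not fitted to the Gum River data). The nonhomogeneous variant would let these probabilities vary with the climate covariates instead of holding them fixed:

```python
import random

random.seed(42)

# Hypothetical transition probabilities: P(wet tomorrow | dry today) and
# P(wet tomorrow | wet today). Wet days tend to follow wet days.
P_WET_GIVEN_DRY = 0.25
P_WET_GIVEN_WET = 0.60

def simulate_occurrence(n_days, wet_today=False):
    """First-order two-state Markov chain for daily rainfall occurrence."""
    seq = []
    for _ in range(n_days):
        p = P_WET_GIVEN_WET if wet_today else P_WET_GIVEN_DRY
        wet_today = random.random() < p
        seq.append(wet_today)
    return seq

seq = simulate_occurrence(100_000)
wet_frac = sum(seq) / len(seq)
# Stationary wet fraction: p01 / (p01 + p10) = 0.25 / (0.25 + 0.40) ≈ 0.385
```

The homogeneous chain's stationary wet fraction is constant year-round, which is exactly the limitation, with respect to seasonality and climate change, that motivates the NHMM here.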
Input variable selection and calibration data selection for storm water quality regression models.
Sun, Siao; Bertrand-Krajewski, Jean-Luc
2013-01-01
Storm water quality models are useful tools in storm water management. Interest has been growing in analyzing existing data for developing models for urban storm water quality evaluations. It is important to select appropriate model inputs when many candidate explanatory variables are available. Model calibration and verification are essential steps in any storm water quality modeling. This study investigates input variable selection and calibration data selection in storm water quality regression models. The two selection problems interact with each other, so a procedure is developed to fulfil the two selection tasks in sequence. The procedure first selects model input variables using a cross-validation method. An appropriate number of variables is identified as model inputs to ensure that a model is neither overfitted nor underfitted. Based on the input selection results, calibration data selection is then studied. Uncertainty in model performance due to calibration data selection is investigated with a random selection method. A cluster-based approach is applied to enhance model calibration practice, based on the principle of selecting representative data for calibration. The comparison between results from the cluster selection method and random selection shows that the former can significantly improve the performance of calibrated models. It is also found that the information content of the calibration data is important in addition to its size.
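The input-selection step can be sketched as greedy forward selection scored by cross-validated error, which stops adding variables once they no longer improve out-of-sample fit. This is a generic sketch on synthetic data with known informative columns, not the authors' exact procedure:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data set: columns 0-2 are informative, columns 3-4 are noise.
n = 200
X = rng.normal(size=(n, 5))
y = 2.0 * X[:, 0] - 1.5 * X[:, 1] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=n)

def cv_rmse(X, y, cols, k=5):
    """k-fold cross-validated RMSE of an OLS model on the chosen columns."""
    idx = np.arange(len(y))
    mses = []
    for fold in np.array_split(idx, k):
        train = np.setdiff1d(idx, fold)
        beta, *_ = np.linalg.lstsq(X[train][:, cols], y[train], rcond=None)
        pred = X[fold][:, cols] @ beta
        mses.append(np.mean((pred - y[fold]) ** 2))
    return float(np.sqrt(np.mean(mses)))

# Greedy forward selection: add the variable that most reduces CV error;
# stop when nothing improves it (neither overfitted nor underfitted).
selected, best = [], np.inf
while len(selected) < X.shape[1]:
    scores = {j: cv_rmse(X, y, selected + [j])
              for j in range(X.shape[1]) if j not in selected}
    j, score = min(scores.items(), key=lambda kv: kv[1])
    if score >= best:
        break
    selected.append(j)
    best = score
```

Scoring on held-out folds rather than on the fit itself is what makes the stopping rule meaningful: adding a noise column usually worsens CV error even though it always improves the in-sample fit.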
Anthropomorphic teleoperation: Controlling remote manipulators with the DataGlove
NASA Technical Reports Server (NTRS)
Hale, J. P., II
1992-01-01
A two phase effort was conducted to assess the capabilities and limitations of the DataGlove, a lightweight glove input device that can output signals in real-time based on hand shape, orientation, and movement. The first phase was a period for system integration, checkout, and familiarization in a virtual environment. The second phase was a formal experiment using the DataGlove as input device to control the protoflight manipulator arm (PFMA) - a large telerobotic arm with an 8-ft reach. The first phase was used to explore and understand how the DataGlove functions in a virtual environment, build a virtual PFMA, and consider and select a reasonable teleoperation control methodology. Twelve volunteers (six males and six females) participated in a 2 x 3 (x 2) full-factorial formal experiment using the DataGlove to control the PFMA in a simple retraction, slewing, and insertion task. Two within-subjects variables, time delay (0, 1, and 2 seconds) and PFMA wrist flexibility (rigid/flexible), were manipulated. Gender served as a blocking variable. A main effect of time delay was found for slewing and total task times. Correlations among questionnaire responses, and between questionnaire responses and session mean scores and gender were computed. The experimental data were also compared with data collected in another study that used a six degree-of-freedom handcontroller to control the PFMA in the same task. It was concluded that the DataGlove is a legitimate teleoperations input device that provides a natural, intuitive user interface. From an operational point of view, it compares favorably with other 'standard' telerobotic input devices and should be considered in future trades in teleoperation systems' designs.
Integrative Data Analysis of Multi-Platform Cancer Data with a Multimodal Deep Learning Approach.
Liang, Muxuan; Li, Zhizhong; Chen, Ting; Zeng, Jianyang
2015-01-01
Identification of cancer subtypes plays an important role in revealing useful insights into disease pathogenesis and advancing personalized therapy. The recent development of high-throughput sequencing technologies has enabled the rapid collection of multi-platform genomic data (e.g., gene expression, miRNA expression, and DNA methylation) for the same set of tumor samples. Although numerous integrative clustering approaches have been developed to analyze cancer data, few of them are particularly designed to exploit both deep intrinsic statistical properties of each input modality and complex cross-modality correlations among multi-platform input data. In this paper, we propose a new machine learning model, called multimodal deep belief network (DBN), to cluster cancer patients from multi-platform observation data. In our integrative clustering framework, relationships among inherent features of each single modality are first encoded into multiple layers of hidden variables, and then a joint latent model is employed to fuse common features derived from multiple input modalities. A practical learning algorithm, called contrastive divergence (CD), is applied to infer the parameters of our multimodal DBN model in an unsupervised manner. Tests on two available cancer datasets show that our integrative data analysis approach can effectively extract a unified representation of latent features to capture both intra- and cross-modality correlations, and identify meaningful disease subtypes from multi-platform cancer data. In addition, our approach can identify key genes and miRNAs that may play distinct roles in the pathogenesis of different cancer subtypes. Among those key miRNAs, we found that the expression level of miR-29a is highly correlated with survival time in ovarian cancer patients. 
These results indicate that our multimodal DBN based data analysis approach may have practical applications in cancer pathogenesis studies and provide useful guidelines for personalized cancer therapy.
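A single building block of the multimodal DBN, a binary restricted Boltzmann machine trained with contrastive divergence (CD-1), can be sketched as follows. The toy 6-bit "modality" with two clusters is illustrative only, not the paper's architecture or data:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class RBM:
    """Minimal binary RBM trained with contrastive divergence (CD-1)."""
    def __init__(self, n_vis, n_hid):
        self.W = rng.normal(scale=0.1, size=(n_vis, n_hid))
        self.b_vis = np.zeros(n_vis)
        self.b_hid = np.zeros(n_hid)

    def cd1_step(self, v0, lr=0.1):
        ph0 = sigmoid(v0 @ self.W + self.b_hid)           # hidden given data
        h0 = (rng.random(ph0.shape) < ph0).astype(float)  # sample hidden layer
        pv1 = sigmoid(h0 @ self.W.T + self.b_vis)         # reconstruction
        ph1 = sigmoid(pv1 @ self.W + self.b_hid)          # hidden given recon
        # CD update: data-driven statistics minus model-driven statistics.
        self.W += lr * (v0.T @ ph0 - pv1.T @ ph1) / len(v0)
        self.b_vis += lr * (v0 - pv1).mean(axis=0)
        self.b_hid += lr * (ph0 - ph1).mean(axis=0)
        return float(((v0 - pv1) ** 2).mean())            # reconstruction error

# Toy single "modality": 6-bit patterns drawn from two clusters.
data = np.array([[1, 1, 1, 0, 0, 0]] * 20 + [[0, 0, 0, 1, 1, 1]] * 20, float)
rbm = RBM(n_vis=6, n_hid=4)
errors = [rbm.cd1_step(data) for _ in range(1000)]
```

In the paper's framework, one such stack of hidden layers encodes each platform (expression, miRNA, methylation), and a joint layer on top fuses the per-modality features; the learned hidden activations are what get clustered into subtypes.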
NASA Astrophysics Data System (ADS)
Lee, C. M.; Morris, K.; Fingland, N. K.; Johnstone, K.; Pendleton, L.; Ponce, A.; Tang, C.; Griffith, J. F.; Steele, N. L.
2013-12-01
Multiple sites in the upper Los Angeles River watershed were sampled during summer 2012 and measured for Escherichia coli, enterococci, and Clostridium perfringens (vegetative cells and spores) using culture-based analyses, with samples preserved for quantitative polymerase chain reaction (qPCR) analysis. The first objective of this work was to characterize how well the indicators correlated with each other, with respect to background levels and to 'spikes' above background possibly indicative of a pollution input, with environmental/physicochemical parameters, and in the context of recreational water quality standards. The second objective was to evaluate the economic impact of implementing qPCR at our study sites for rapid water quality monitoring. None of the indicator species correlated well with each other (R2 < 0.1) across sites and dates when the sample set was examined in its entirety, though C. perfringens vegetative cells and spores were moderately correlated (R2 = 0.31, p = 0.07). Concentration 'spikes' against background levels, suggesting a potential input of contamination, were observed on holiday sampling days and will be examined further. In general, the number of swimmers present was not linked with indicator concentrations; however, water quality exceedances (for E. coli, 235 CFU or MPN/100 mL) were more likely to occur on weekends or holidays, suggesting that the presence/absence of swimmers may be an important variable at our sites. Clostridium perfringens may be a useful indicator at our study sites, as a comparison of the vegetative and endospore forms of this organism may be used to infer how recently a contamination event or input occurred.
A neuromorphic model of motor overflow in focal hand dystonia due to correlated sensory input
NASA Astrophysics Data System (ADS)
Sohn, Won Joon; Niu, Chuanxin M.; Sanger, Terence D.
2016-10-01
Objective. Motor overflow is a common and frustrating symptom of dystonia, manifested as unintentional muscle contraction that occurs during an intended voluntary movement. Although it is suspected that motor overflow is due to cortical disorganization in some types of dystonia (e.g. focal hand dystonia), it remains unclear which mechanisms could initiate and, more importantly, perpetuate motor overflow. We hypothesize that distinct motor elements have a low risk of motor overflow if their sensory inputs remain statistically independent. But when provided with correlated sensory inputs, pre-existing crosstalk among sensory projections will grow under spike-timing-dependent plasticity (STDP) and eventually produce irreversible motor overflow. Approach. We emulated a simplified neuromuscular system comprising two anatomically distinct digital muscles innervated by two layers of spiking neurons with STDP. The synaptic connections between layers included crosstalk connections. The input neurons received either independent or correlated sensory drive during 4 days of continuous excitation. The emulation is critically enabled and accelerated by our neuromorphic hardware created in previous work. Main results. When driven by correlated sensory inputs, the crosstalk synapses gained weight and produced prominent motor overflow; the growth of crosstalk synapses resulted in an enlarged sensory representation reflecting cortical reorganization. The overflow failed to recede when the inputs resumed their original uncorrelated statistics. In the control group, no motor overflow was observed. Significance. Although our model is a highly simplified and limited representation of the human sensorimotor system, it allows us to explain how correlated sensory input to anatomically distinct muscles is by itself sufficient to cause persistent and irreversible motor overflow. Further studies are needed to locate the source of correlation in sensory input.
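The STDP mechanism invoked above can be illustrated with a standard additive pair-based rule: when presynaptic spikes reliably precede postsynaptic ones (correlated drive), a crosstalk synapse potentiates toward its ceiling; under uncorrelated spike timing it does not. The amplitudes, time constant, and spike timings below are illustrative assumptions, not the parameters of the authors' neuromorphic hardware.

```python
# Minimal additive STDP sketch with hypothetical parameters.
import math, random

A_PLUS, A_MINUS = 0.02, 0.021   # potentiation slightly weaker than depression
TAU = 20.0                      # STDP time constant (ms)

def stdp_dw(dt):
    """Weight change for spike-time difference dt = t_post - t_pre (ms)."""
    if dt >= 0:
        return A_PLUS * math.exp(-dt / TAU)    # pre before post: potentiate
    return -A_MINUS * math.exp(dt / TAU)       # post before pre: depress

def run(dts, w=0.1):
    for dt in dts:
        w = min(1.0, max(0.0, w + stdp_dw(dt)))   # hard weight bounds [0, 1]
    return w

rng = random.Random(0)
w_corr = run([5.0] * 200)                               # pre reliably leads post by 5 ms
w_uncorr = run([rng.uniform(-50.0, 50.0) for _ in range(200)])  # random timing
print(w_corr, w_uncorr)
```

The correlated case saturates at the weight ceiling, a toy analogue of the irreversible crosstalk growth described in the abstract.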
Feasibility study of parallel optical correlation-decoding analysis of lightning
DOE Office of Scientific and Technical Information (OSTI.GOV)
Descour, M.R.; Sweatt, W.C.; Elliott, G.R.
The optical correlator described in this report is intended to serve as an attention-focusing processor. The objective is to narrowly bracket the range of a parameter value that characterizes the correlator input. The input is a waveform collected by a satellite-borne receiver. In the correlator, this waveform is simultaneously correlated with an ensemble of ionosphere impulse-response functions, each corresponding to a different total-electron-count (TEC) value. We have found that correlation is an effective method of bracketing the range of TEC values likely to be represented by the input waveform. High accuracy in a computational sense is not required of the correlator. Binarization of the impulse-response functions and the input waveforms prior to correlation results in a lower correlation-peak-to-background-fluctuation (signal-to-noise) ratio than the peak that is obtained when all waveforms retain their grayscale values. The results presented in this report were obtained by means of an acousto-optic correlator previously developed at SNL as well as by simulation. An optical-processor architecture optimized for 1D correlation of long waveforms characteristic of this application is described. Discussions of correlator components, such as optics, acousto-optic cells, digital micromirror devices, laser diodes, and VCSELs are included.
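The bracketing step can be illustrated with a toy digital analogue of the correlator: score the received waveform against an ensemble of candidate impulse responses and keep the best-matching TEC value. The chirp-like waveform model and all numbers below are hypothetical, not the report's ionospheric model.

```python
# Sketch: bracketing a TEC value by correlating an input waveform against an
# ensemble of candidate templates (toy dispersion model, hypothetical values).
import math, random

def waveform(tec, n=256):
    # Toy dispersed waveform whose chirp rate depends on the TEC parameter.
    return [math.sin(0.05 * t * t / (1.0 + tec)) for t in range(n)]

def ncc(a, b):
    # Normalized cross-correlation at zero lag.
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return sum(x * y for x, y in zip(a, b)) / (na * nb)

candidates = [5.0, 10.0, 20.0, 40.0, 80.0]   # candidate TEC values
rng = random.Random(1)
received = [s + rng.gauss(0.0, 0.2) for s in waveform(20.0)]  # true TEC = 20

scores = {tec: ncc(received, waveform(tec)) for tec in candidates}
best = max(scores, key=scores.get)
print("best TEC bracket:", best)
```

As in the report, only the coarse location of the peak matters; high computational accuracy of each correlation score is unnecessary.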
EFFECTS OF CORRELATED PROBABILISTIC EXPOSURE MODEL INPUTS ON SIMULATED RESULTS
In recent years, more probabilistic models have been developed to quantify aggregate human exposures to environmental pollutants. The impact of correlation among inputs in these models is an important issue, which has not been resolved. Obtaining correlated data and implementi...
Graupner, Michael; Reyes, Alex D
2013-09-18
Correlations in the spiking activity of neurons have been found in many regions of the cortex under multiple experimental conditions and are postulated to have important consequences for neural population coding. While there is a large body of extracellular data reporting correlations of various strengths, the subthreshold events underlying the origin and magnitude of signal-independent correlations (called noise or spike count correlations) are unknown. Here we investigate, using intracellular recordings, how synaptic input correlations from shared presynaptic neurons translate into membrane potential and spike-output correlations. Using a pharmacologically activated thalamocortical slice preparation, we perform simultaneous recordings from pairs of layer IV neurons in the auditory cortex of mice and measure synaptic potentials/currents, membrane potentials, and spiking outputs. We calculate cross-correlations between excitatory and inhibitory inputs to investigate correlations emerging from the network. We furthermore evaluate membrane potential correlations near resting potential to study how excitation and inhibition combine and affect spike-output correlations. We demonstrate directly that excitation is correlated with inhibition, so that the two partially cancel, resulting in weak membrane potential and spiking correlations between neurons. Our data suggest that cortical networks are set up to partially cancel correlations emerging from the connections between neurons. This active decorrelation is achieved because excitation and inhibition closely track each other. Our results suggest that the numerous shared presynaptic inputs do not automatically lead to increased spiking correlations.
Observation of non-classical correlations in sequential measurements of photon polarization
NASA Astrophysics Data System (ADS)
Suzuki, Yutaro; Iinuma, Masataka; Hofmann, Holger F.
2016-10-01
A sequential measurement of two non-commuting quantum observables results in a joint probability distribution for all output combinations that can be explained in terms of an initial joint quasi-probability of the non-commuting observables, modified by the resolution errors and back-action of the initial measurement. Here, we show that the error statistics of a sequential measurement of photon polarization performed at different measurement strengths can be described consistently by an imaginary correlation between the statistics of resolution and back-action. The experimental setup was designed to realize variable strength measurements with well-controlled imaginary correlation between the statistical errors caused by the initial measurement of diagonal polarizations, followed by a precise measurement of the horizontal/vertical polarization. We perform the experimental characterization of an elliptically polarized input state and show that the same complex joint probability distribution is obtained at any measurement strength.
Artificial neural networks for modeling ammonia emissions released from sewage sludge composting
NASA Astrophysics Data System (ADS)
Boniecki, P.; Dach, J.; Pilarski, K.; Piekarska-Boniecka, H.
2012-09-01
The project was designed to develop, test and validate an original Neural Model describing ammonia emissions generated in composting sewage sludge. The composting mix was to include the addition of such selected structural ingredients as cereal straw, sawdust and tree bark. All created neural models contain 7 input variables (chemical and physical parameters of composting) and 1 output (ammonia emission). The data file was subdivided into three subfiles: the learning file (ZU) containing 330 cases, the validation file (ZW) containing 110 cases and the test file (ZT) containing 110 cases. The standard deviation ratios (for all 4 created networks) ranged from 0.193 to 0.218. For all of the selected models, the correlation coefficient reached the high values of 0.972-0.981. The results show that the predictive neural model describing ammonia emissions from composted sewage sludge is well suited for assessing such emissions. The sensitivity analysis of the model for the input variables of the process in question has shown that the key parameters describing ammonia emissions released in composting sewage sludge are pH and the carbon to nitrogen ratio (C:N).
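The two quality measures quoted above, the standard deviation ratio (SD of residuals over SD of targets, lower is better) and the correlation coefficient between predicted and observed values, can be computed as follows. The observed/predicted values are invented for illustration.

```python
# Sketch of the model quality measures: SD ratio and Pearson correlation.
import math

def sd(xs):
    m = sum(xs) / len(xs)
    return math.sqrt(sum((x - m) ** 2 for x in xs) / len(xs))

observed  = [3.1, 4.0, 2.7, 5.2, 4.4, 3.6, 4.9, 2.9]   # emission, hypothetical units
predicted = [3.0, 4.2, 2.9, 5.0, 4.5, 3.4, 4.8, 3.1]   # model output, hypothetical

residuals = [o - p for o, p in zip(observed, predicted)]
sd_ratio = sd(residuals) / sd(observed)

n = len(observed)
mo, mp = sum(observed) / n, sum(predicted) / n
r = (sum((o - mo) * (p - mp) for o, p in zip(observed, predicted))
     / (n * sd(observed) * sd(predicted)))
print(f"SD ratio: {sd_ratio:.3f}, r: {r:.3f}")
```

An SD ratio near 0.2 with r near 0.98, as reported above, indicates the model captures most of the target variance.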
Šiljić Tomić, Aleksandra N; Antanasijević, Davor Z; Ristić, Mirjana Đ; Perić-Grujić, Aleksandra A; Pocajt, Viktor V
2016-05-01
This paper describes the application of artificial neural network models for the prediction of biological oxygen demand (BOD) levels in the Danube River. Eighteen regularly monitored water quality parameters at 17 stations on the river stretch passing through Serbia were used as input variables. The optimization of the model was performed in three consecutive steps: firstly, the spatial influence of a monitoring station was examined; secondly, the monitoring period necessary to reach satisfactory performance was determined; and lastly, correlation analysis was applied to evaluate the relationship among water quality parameters. Root-mean-square error (RMSE) was used to evaluate model performance in the first two steps, whereas in the last step, multiple statistical indicators of performance were utilized. As a result, two optimized models were developed, a general regression neural network model (labeled GRNN-1) that covers the monitoring stations from the Danube inflow to the city of Novi Sad and a GRNN model (labeled GRNN-2) that covers the stations from the city of Novi Sad to the border with Romania. Both models demonstrated good agreement between the predicted and actually observed BOD values.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rosenthal, William Steven; Tartakovsky, Alex; Huang, Zhenyu
2017-10-31
State and parameter estimation of power transmission networks is important for monitoring power grid operating conditions and analyzing transient stability. Wind power generation depends on fluctuating input power levels, which are correlated in time and contribute to uncertainty in turbine dynamical models. The ensemble Kalman filter (EnKF), a standard state estimation technique, uses a deterministic forecast and does not explicitly model time-correlated noise in parameters such as mechanical input power. However, this uncertainty affects the probability of fault-induced transient instability and increases prediction bias. Here, a novel approach models input power noise as time-correlated stochastic fluctuations and integrates them with the network dynamics during the forecast. While the EnKF has been used to calibrate constant parameters in turbine dynamical models, the calibration of a statistical model for a time-correlated parameter has not been investigated. In this study, twin experiments on a standard transmission network test case are used to validate our time-correlated noise model framework for state estimation of unsteady operating conditions and transient stability analysis, and a methodology is proposed for the inference of the mechanical input power time-correlation length parameter using time-series data from PMUs monitoring power dynamics at generator buses.
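A common way to realize time-correlated input-power noise of the kind described above is an Ornstein-Uhlenbeck (OU) process with correlation time tau. This sketch, with hypothetical parameters rather than the paper's calibrated ones, generates such a path with the exact discrete update and checks its lag-1 autocorrelation against the theoretical value exp(-dt/tau).

```python
# Sketch: Ornstein-Uhlenbeck fluctuations as a time-correlated noise model.
import math, random

def ou_path(n, dt, tau, sigma, x0=0.0, seed=0):
    rng = random.Random(seed)
    a = math.exp(-dt / tau)                 # exact OU decay factor per step
    s = sigma * math.sqrt(1.0 - a * a)      # keeps stationary std at sigma
    x, path = x0, []
    for _ in range(n):
        x = a * x + s * rng.gauss(0.0, 1.0)
        path.append(x)
    return path

def autocorr(xs, lag):
    m = sum(xs) / len(xs)
    num = sum((xs[i] - m) * (xs[i + lag] - m) for i in range(len(xs) - lag))
    den = sum((x - m) ** 2 for x in xs)
    return num / den

noise = ou_path(n=20000, dt=0.1, tau=2.0, sigma=1.0)
ac1 = autocorr(noise, 1)
# Lag-1 autocorrelation should approach exp(-0.1/2.0) ~ 0.951
print(f"empirical lag-1 autocorrelation: {ac1:.3f}")
```

In an EnKF setting, each ensemble member would carry its own OU realization for mechanical input power through the forecast step, and tau would be the correlation-length parameter to be inferred.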
Carbon Dioxide in the Gulf of Trieste
NASA Astrophysics Data System (ADS)
Turk, D.; Malacic, V.; Degrandpre, M. D.; McGillis, W. R.
2009-04-01
Coastal marine regions such as the Gulf of Trieste (GOT) in the Northern Adriatic Sea serve as the link between carbon cycling on land and the ocean interior and potentially contribute large uncertainties in the estimate of anthropogenic CO2 uptake. This system may be either a sink or a source for atmospheric CO2. Understanding the sources and sinks that result from biological and physical controls on air-sea carbon dioxide fluxes in coastal waters may substantially alter the current view of the global carbon budget for unique terrestrial and ocean regions such as the GOT. The GOT is a semi-enclosed Mediterranean basin situated in the northern part of the Adriatic Sea. It is one of the most productive regions in the Mediterranean and is affected by extreme freshwater river input, phytoplankton blooms, and large changes in air-sea exchange during Bora high wind events. The unique combination of these environmental processes and the relatively small size of the area makes the region an excellent study site for investigations of air-sea interaction and of changes in biology and carbon chemistry. Here we investigate biological (phytoplankton blooms) and physical (freshwater input and winds) controls on the temporal variability of pCO2 in the GOT. The aqueous CO2 was measured at the coastal oceanographic buoy VIDA, Slovenia, using the SAMI CO2 sensor. Our results indicate that: 1) the GOT was a sink for atmospheric CO2 in late spring of 2007; 2) aqueous pCO2 was influenced by fresh water input from rivers entering the GOT and by biological production associated with high nutrient input; 3) surface water pCO2 showed a strong correlation with SST when river plumes were not present at the buoy location, and a reasonable correlation with SSS during the presence of the plume.
Wirakartakusumah, M D
1988-06-01
This paper examines the effects of public health, family planning, education, electrification, and water supply programs on fertility, child mortality, and school enrollment decisions of rural households in East Java, Indonesia. The theoretical model assumes that parents maximize a utility function, subject to 1) a budget constraint that equates income with expenditures on children (including schooling and health inputs), and 2) a production function that relates health inputs to child survival possibilities. Public programs affect prices of contraceptives, schooling and health inputs, and environmental conditions that in turn affect child survival. Data are taken from the 1980 East Java Population Survey, the Socio-economic Survey, and the Detailed Village Census. The final sample consists of 3170 rural households with married women of childbearing age. Ordinary least squares and logit regressions of recent fertility, child mortality, and school enrollment on program and household variables yielded the following findings. 1) The presence of maternal and child health clinics reduced fertility but not mortality. 2) The presence of public health centers strongly reduced mortality but not fertility. 3) The presence of contraceptive distribution centers had no effect on fertility. 4) School attendance rates were influenced positively by the availability of primary and secondary schools. 5) Health and family planning programs had no effects on schooling. 6) The availability of public latrines reduced fertility and mortality. 7) The water supply variable did not affect the dependent variables when ordinary least squares techniques were applied but had statistically significant impact when logit methods were used. 8) Electricity supply had little effect on the dependent variables. 9) The mother's schooling had a strong positive correlation with children's schooling but no effect on fertility or mortality. 
10) Household expenditures were related positively to school attendance and negatively to mortality. 11) There was little or no interaction between household variables and presence of government programs. 12) Subprovincial area measures of service availability appeared more appropriate for public health and family planning services, while village-level measures appeared more appropriate for schooling.
Schneider, David M; Woolley, Sarah M N
2010-06-01
Many social animals including songbirds use communication vocalizations for individual recognition. The perception of vocalizations depends on the encoding of complex sounds by neurons in the ascending auditory system, each of which is tuned to a particular subset of acoustic features. Here, we examined how well the responses of single auditory neurons could be used to discriminate among bird songs and we compared discriminability to spectrotemporal tuning. We then used biologically realistic models of pooled neural responses to test whether the responses of groups of neurons discriminated among songs better than the responses of single neurons and whether discrimination by groups of neurons was related to spectrotemporal tuning and trial-to-trial response variability. The responses of single auditory midbrain neurons could be used to discriminate among vocalizations with a wide range of abilities, ranging from chance to 100%. The ability to discriminate among songs using single neuron responses was not correlated with spectrotemporal tuning. Pooling the responses of pairs of neurons generally led to better discrimination than either the average of the two inputs or the more discriminating input alone. Pooling the responses of three to five single neurons continued to improve neural discrimination. The increase in discriminability was largest for groups of neurons with similar spectrotemporal tuning. Further, we found that groups of neurons with correlated spike trains achieved the largest gains in discriminability. We simulated neurons with varying levels of temporal precision and measured the discriminability of responses from single simulated neurons and groups of simulated neurons. Simulated neurons with biologically observed levels of temporal precision benefited more from pooling correlated inputs than did neurons with highly precise or imprecise spike trains.
These findings suggest that pooling correlated neural responses with the levels of precision observed in the auditory midbrain increases neural discrimination of complex vocalizations.
Artificial neural network model for ozone concentration estimation and Monte Carlo analysis
NASA Astrophysics Data System (ADS)
Gao, Meng; Yin, Liting; Ning, Jicai
2018-07-01
Air pollution in the urban atmosphere directly affects public health; it is therefore essential to predict air pollutant concentrations. Air quality is a complex function of emissions, meteorology and topography, and artificial neural networks (ANNs) provide a sound framework for relating these variables. In this study, we investigated the feasibility of using an ANN model with meteorological parameters as input variables to predict ozone concentration in the urban area of Jinan, a metropolis in Northern China. We first found that the architecture of the network of neurons had little effect on the predictive capability of the ANN model. A parsimonious ANN model with 6 routinely monitored meteorological parameters and one temporal covariate (the category of day, i.e. working day, legal holiday and regular weekend) as input variables was identified, where the 7 input variables were selected following a forward selection procedure. Compared with the benchmark ANN model with 9 meteorological and photochemical parameters as input variables, the predictive capability of the parsimonious ANN model was acceptable. Its predictive capability was also verified in terms of the warning success ratio during pollution episodes. Finally, uncertainty and sensitivity analyses were performed based on Monte Carlo simulations (MCS). It was concluded that the ANN could properly predict the ambient ozone level. Maximum temperature, atmospheric pressure, sunshine duration and maximum wind speed were identified as the predominant input variables significantly influencing the prediction of ambient ozone concentrations.
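The forward selection procedure mentioned above can be sketched with a plain linear model standing in for the ANN: greedily add the candidate input that most reduces fitting error, and stop when the improvement becomes small. The synthetic data (where only the first two of four candidate inputs matter) and the 5% stopping rule are assumptions for illustration.

```python
# Sketch: greedy forward selection of input variables by RMSE of a linear fit.
import random

def lstsq(X, y):
    # Solve the normal equations (X^T X) b = X^T y by Gaussian elimination.
    k = len(X[0])
    A = [[sum(r[i] * r[j] for r in X) for j in range(k)] for i in range(k)]
    b = [sum(r[i] * t for r, t in zip(X, y)) for i in range(k)]
    for i in range(k):
        p = max(range(i, k), key=lambda r: abs(A[r][i]))   # partial pivoting
        A[i], A[p] = A[p], A[i]
        b[i], b[p] = b[p], b[i]
        for r in range(i + 1, k):
            f = A[r][i] / A[i][i]
            A[r] = [a - f * c for a, c in zip(A[r], A[i])]
            b[r] -= f * b[i]
    beta = [0.0] * k
    for i in range(k - 1, -1, -1):
        beta[i] = (b[i] - sum(A[i][j] * beta[j] for j in range(i + 1, k))) / A[i][i]
    return beta

def rmse(cols, rows, y):
    X = [[row[c] for c in cols] + [1.0] for row in rows]   # append intercept
    beta = lstsq(X, y)
    err = [sum(v * w for v, w in zip(xr, beta)) - t for xr, t in zip(X, y)]
    return (sum(e * e for e in err) / len(err)) ** 0.5

rng = random.Random(2)
rows = [[rng.uniform(-1.0, 1.0) for _ in range(4)] for _ in range(80)]
y = [2.0 * r[0] - r[1] + rng.gauss(0.0, 0.1) for r in rows]   # inputs 2, 3 irrelevant

m = sum(y) / len(y)
best = (sum((t - m) ** 2 for t in y) / len(y)) ** 0.5   # intercept-only baseline
selected, remaining = [], [0, 1, 2, 3]
while remaining:
    cand = min(remaining, key=lambda c: rmse(selected + [c], rows, y))
    new = rmse(selected + [cand], rows, y)
    if new > 0.95 * best:        # stop once improvement falls below 5 percent
        break
    selected.append(cand)
    remaining.remove(cand)
    best = new
print("selected inputs:", selected)
```

The same loop applies with an ANN as the scorer; only the `rmse` evaluation (and ideally a held-out validation set) changes.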
Carvalho, Carlos; Gomes, Danielo G.; Agoulmine, Nazim; de Souza, José Neuman
2011-01-01
This paper proposes a method based on multivariate spatial and temporal correlation to improve prediction accuracy in data reduction for Wireless Sensor Networks (WSN). Prediction of data not sent to the sink node is a technique used to save energy in WSNs by reducing the amount of data traffic. However, it may not be very accurate. Simulations were made involving simple linear regression and multiple linear regression functions to assess the performance of the proposed method. The results show a higher correlation between gathered inputs when compared to time, which is an independent variable widely used for prediction and forecasting. Prediction accuracy is lower when simple linear regression is used, whereas multiple linear regression is the most accurate one. In addition to that, our proposal outperforms some current solutions by about 50% in humidity prediction and 21% in light prediction. To the best of our knowledge, we believe that we are probably the first to address prediction based on multivariate correlation for WSN data reduction. PMID:22346626
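The prediction-based data-reduction idea above can be sketched as follows: the node and the sink share a regression model, and the node transmits a reading only when the model's prediction error exceeds a tolerance. A single-predictor linear model and synthetic temperature/humidity data are used here for brevity; the paper's method uses multivariate regression over several correlated inputs.

```python
# Sketch: dual-prediction data reduction for a sensor node (synthetic data).
import random

rng = random.Random(3)
temps = [20.0 + 0.1 * i + rng.gauss(0.0, 0.3) for i in range(200)]
humid = [80.0 - 1.5 * (t - 20.0) + rng.gauss(0.0, 0.4) for t in temps]

# Fit slope/intercept on an initial training window shared with the sink.
n = 50
mt, mh = sum(temps[:n]) / n, sum(humid[:n]) / n
slope = (sum((t - mt) * (h - mh) for t, h in zip(temps[:n], humid[:n]))
         / sum((t - mt) ** 2 for t in temps[:n]))
intercept = mh - slope * mt

EPS = 1.5  # tolerated prediction error (hypothetical application threshold)
sent = sum(1 for t, h in zip(temps[n:], humid[n:])
           if abs((slope * t + intercept) - h) > EPS)
saving = 1.0 - sent / len(temps[n:])
print(f"transmissions suppressed: {saving:.0%}")
```

The energy saving comes from the suppressed transmissions; the sink reconstructs the missing readings from the shared model.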
Gu, Jianwei; Pitz, Mike; Breitner, Susanne; Birmili, Wolfram; von Klot, Stephanie; Schneider, Alexandra; Soentgen, Jens; Reller, Armin; Peters, Annette; Cyrys, Josef
2012-10-01
The success of epidemiological studies depends on the use of appropriate exposure variables. The purpose of this study is to extract a relatively small selection of variables characterizing ambient particulate matter from a large measurement data set. The original data set comprised a total of 96 particulate matter variables that have been continuously measured since 2004 at an urban background aerosol monitoring site in the city of Augsburg, Germany. Many of the original variables were derived from measured particle size distribution (PSD) across the particle diameter range 3 nm to 10 μm, including size-segregated particle number concentration, particle length concentration, particle surface concentration and particle mass concentration. The data set was complemented by integral aerosol variables. These variables were measured by independent instruments, including black carbon, sulfate, particle active surface concentration and particle length concentration. It is obvious that such a large number of measured variables cannot be used in health effect analyses simultaneously. The aim of this study is a pre-screening and a selection of the key variables that will be used as input in forthcoming epidemiological studies. In this study, we present two methods of parameter selection and apply them to data from a two-year period from 2007 to 2008. We used the agglomerative hierarchical cluster method to find groups of similar variables. In total, we selected 15 key variables from 9 clusters which are recommended for epidemiological analyses. We also applied a two-dimensional visualization technique called "heatmap" analysis to the Spearman correlation matrix. 12 key variables were selected using this method. Moreover, the positive matrix factorization (PMF) method was applied to the PSD data to characterize the possible particle sources. Correlations between the variables and PMF factors were used to interpret the meaning of the cluster and the heatmap analyses. 
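The variable-grouping step described above can be sketched as agglomerative (average-linkage) clustering with a 1 - |Spearman rho| distance between variables. Five synthetic series, two correlated pairs plus one independent variable, stand in for the aerosol measurements.

```python
# Sketch: average-linkage clustering of variables on 1 - |Spearman rho|.
import random

def ranks(xs):
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    for rank, i in enumerate(order):
        r[i] = float(rank)
    return r

def spearman(x, y):
    # Pearson correlation of the rank vectors (no ties in these float series).
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx = (n - 1) / 2.0
    num = sum((a - mx) * (b - mx) for a, b in zip(rx, ry))
    den = sum((a - mx) ** 2 for a in rx)
    return num / den

rng = random.Random(4)
base1 = [rng.gauss(0, 1) for _ in range(100)]
base2 = [rng.gauss(0, 1) for _ in range(100)]
variables = [
    [b + rng.gauss(0, 0.1) for b in base1],        # v0: tracks base1
    [2 * b + rng.gauss(0, 0.1) for b in base1],    # v1: tracks base1
    [-b + rng.gauss(0, 0.1) for b in base2],       # v2: anti-tracks base2
    [b + rng.gauss(0, 0.1) for b in base2],        # v3: tracks base2
    [rng.gauss(0, 1) for _ in range(100)],         # v4: independent
]

def dist(ci, cj):
    # Average-linkage distance between clusters of variable indices.
    return sum(1 - abs(spearman(variables[i], variables[j]))
               for i in ci for j in cj) / (len(ci) * len(cj))

clusters = [[i] for i in range(5)]
while len(clusters) > 3:
    pairs = [(dist(a, b), ia, ib) for ia, a in enumerate(clusters)
             for ib, b in enumerate(clusters) if ia < ib]
    _, ia, ib = min(pairs)
    clusters[ia] += clusters[ib]
    del clusters[ib]

groups = sorted(sorted(c) for c in clusters)
print(groups)
```

In the study, one representative "key variable" would then be picked from each resulting cluster.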
NASA Astrophysics Data System (ADS)
Tokuchi, Naoko; Ohte, Nobuhito; Hobara, Satoru; Kim, Su-Jin; Masanori, Katsuyama
2004-10-01
Changes in nutrient budgets and hydrological processes due to the natural disturbance of pine wilt disease (PWD) were monitored in a small, forested watershed in Japan. The disturbance caused changes in soil nitrogen transformations. Pre-disturbance, mineralized nitrogen remained in the form of NH4+, whereas in disturbed areas most mineralized nitrogen was nitrified. Stream NO3- concentrations increased following PWD. There was a delay between the time of disturbance and the increase of NO3- in ground and stream waters. Stream concentrations of NO3- and cations (Ca2+ + Mg2+) were significantly correlated from 1994 to 1996, whereas the correlation among NO3-, H+, and SO42- was significant only in 1995. Although both cation exchange and SO42- adsorption buffered protons, cation exchange was the dominant and continuous mechanism for acid buffering. SO42- adsorption was variable and highly pH dependent. The disturbance also resulted in slightly delayed changes in input-output nutrient balances. The nitrogen contribution of PWD litter inputs was 7.39 kmol ha-1, and nitrogen loss from streamwater was less than 0.5 kmol ha-1 year-1 throughout the observation period. This large discrepancy suggested substantial nitrogen immobilization.
The Productivity Analysis of Chennai Automotive Industry Cluster
NASA Astrophysics Data System (ADS)
Bhaskaran, E.
2014-07-01
Chennai, also called the Detroit of India, is India's second fastest growing auto market and exports auto components and vehicles to the US, Germany, Japan and Brazil. For inclusive growth and sustainable development, 250 auto component industries in the Ambattur, Thirumalisai and Thirumudivakkam Industrial Estates located in Chennai have adopted the Cluster Development Approach, called the Automotive Component Cluster. The objective is to study the value chain, correlation and data envelopment analysis by determining the technical efficiency, peer weights, and input and output slacks of 100 auto component industries in the three estates. The methodology adopted is Data Envelopment Analysis with the output-oriented Banker-Charnes-Cooper model, taking net worth, fixed assets and employment as inputs and gross output as the output. Non-zero peer weights identify the efficient industries that serve as benchmarks for the inefficient ones. Higher slacks reveal excess net worth, fixed assets and employment, and a shortfall in gross output. To conclude, the variables are highly correlated, and the inefficient industries should increase their gross output or decrease their fixed assets or employment. Moreover, for sustainable development, the cluster should strengthen infrastructure, technology, procurement, production and marketing interrelationships to decrease costs and to increase productivity and efficiency to compete in the indigenous and export markets.
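The output-oriented Banker-Charnes-Cooper (BCC) model named above has a standard linear-programming statement, reproduced here for clarity. With J industries, inputs x_{1j}, x_{2j}, x_{3j} (net worth, fixed assets, employment) and output y_j (gross output), the program for the industry under evaluation, indexed o, is the textbook form, not a transcription of the authors' computation:

```latex
\begin{aligned}
\max_{\phi,\,\lambda}\quad & \phi \\
\text{s.t.}\quad & \sum_{j=1}^{J} \lambda_j x_{ij} \le x_{io}, \qquad i = 1, 2, 3, \\
& \sum_{j=1}^{J} \lambda_j y_j \ge \phi\, y_o, \\
& \sum_{j=1}^{J} \lambda_j = 1, \qquad \lambda_j \ge 0 .
\end{aligned}
```

Industry o is technically efficient when the optimum is \phi^{*} = 1 with zero slacks; \phi^{*} > 1 gives the proportional increase in gross output needed to reach the efficient frontier, and the non-zero \lambda_j identify its peer industries.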
Portfolio of automated trading systems: complexity and learning set size issues.
Raudys, Sarunas
2013-03-01
In this paper, we consider using profit/loss histories of multiple automated trading systems (ATSs) as N input variables in portfolio management. By means of multivariate statistical analysis and simulation studies, we analyze the influences of sample size (L) and input dimensionality on the accuracy of determining the portfolio weights. We find that degradation in portfolio performance due to inexact estimation of N means and N(N - 1)/2 correlations is proportional to N/L; however, estimation of N variances does not worsen the result. To reduce unhelpful sample size/dimensionality effects, we perform a clustering of N time series and split them into a small number of blocks. Each block is composed of mutually correlated ATSs. It generates an expert trading agent based on a nontrainable 1/N portfolio rule. To increase the diversity of the expert agents, we use training sets of different lengths for clustering. In the output of the portfolio management system, the regularized mean-variance framework-based fusion agent is developed in each walk-forward step of an out-of-sample portfolio validation experiment. Experiments with the real financial data (2003-2012) confirm the effectiveness of the suggested approach.
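The N/L estimation effect discussed above can be sketched in a deliberately simplified setting: with independent unit-variance assets, the mean-variance weights are proportional to the mean returns, so weights estimated from L observations drift from the true optimum and the error shrinks as L grows. All return statistics below are synthetic assumptions, not the paper's data.

```python
# Sketch: sample-size effect on estimated portfolio weights (diagonal
# covariance assumed; numbers are synthetic).
import math, random

rng = random.Random(5)
N = 20
mu = [0.05 + 0.01 * i for i in range(N)]        # true mean profits per trade
w_true = [m / sum(mu) for m in mu]              # optimal weights ~ mu here

def estimated_weights(L):
    mu_hat = [sum(rng.gauss(mu[i], 1.0) for _ in range(L)) / L for i in range(N)]
    mu_hat = [max(m, 0.0) for m in mu_hat]      # long-only: clip negatives
    s = sum(mu_hat) or 1.0
    return [m / s for m in mu_hat]

def weight_error(L, trials=30):
    err = 0.0
    for _ in range(trials):
        w = estimated_weights(L)
        err += math.sqrt(sum((a - b) ** 2 for a, b in zip(w, w_true)))
    return err / trials

e_small, e_large = weight_error(25), weight_error(400)
print(f"mean weight error: L=25 -> {e_small:.3f}, L=400 -> {e_large:.3f}")
```

This is the degradation the paper attributes to inexact estimation of means and correlations, and why the nontrainable 1/N rule within correlated blocks can be competitive when L is small.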
Correction of I/Q channel errors without calibration
Doerry, Armin W.; Tise, Bertice L.
2002-01-01
A method of providing a balanced demodulator output for a signal, such as a Doppler radar signal having an analog pulsed input, includes adding a variable phase shift, as a function of time, to the input signal; applying the phase-shifted input signal to a demodulator; and generating a baseband signal from the input signal. The baseband signal is low-pass filtered and converted to a digital output signal. By removing the variable phase shift from the digital output signal, a complex data output is formed that is representative of the output of a balanced demodulator.
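The phase-rotation idea can be sketched with a simplified imbalance model: an unbalanced I/Q demodulator maps a baseband phasor z to alpha*z + beta*conj(z), where the beta term is the image caused by gain/phase mismatch. Applying a known rotating phase before the demodulator and removing it digitally afterward multiplies the image term by exp(-2j*theta), so averaging over a full rotation cancels it. The alpha/beta model and values are illustrative assumptions, not the patent's circuit.

```python
# Sketch: canceling I/Q imbalance by pre-rotation and digital de-rotation.
import cmath, math

alpha = 1.0                          # desired-signal gain through the demodulator
beta = 0.1 * cmath.exp(1j * 0.3)     # image gain from gain/phase imbalance

def demod(z):
    """Imbalanced I/Q demodulator: output = alpha*z + beta*conj(z)."""
    return alpha * z + beta * z.conjugate()

z = cmath.exp(1j * 0.7)              # true baseband phasor to recover

# Without correction, the image term biases the output:
bias = abs(demod(z) - alpha * z)

# With a rotating pre-shift removed after demodulation, the image averages out:
N = 16
corrected = sum(demod(z * cmath.exp(1j * 2 * math.pi * n / N))
                * cmath.exp(-1j * 2 * math.pi * n / N)
                for n in range(N)) / N
print(bias, abs(corrected - alpha * z))
```

The residual after correction is at floating-point level because the image term sums over a complete set of roots of unity, which is the calibration-free cancellation the method exploits.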
Sparse distributed memory and related models
NASA Technical Reports Server (NTRS)
Kanerva, Pentti
1992-01-01
Described here is sparse distributed memory (SDM) as a neural-net associative memory. It is characterized by two weight matrices and by a large internal dimension - the number of hidden units is much larger than the number of input or output units. The first matrix, A, is fixed and possibly random, and the second matrix, C, is modifiable. The SDM is compared and contrasted to (1) computer memory, (2) correlation-matrix memory, (3) feed-forward artificial neural network, (4) cortex of the cerebellum, (5) Marr and Albus models of the cerebellum, and (6) Albus' cerebellar model arithmetic computer (CMAC). Several variations of the basic SDM design are discussed: the selected-coordinate and hyperplane designs of Jaeckel, the pseudorandom associative neural memory of Hassoun, and SDM with real-valued input variables by Prager and Fallside. SDM research conducted mainly at the Research Institute for Advanced Computer Science (RIACS) in 1986-1991 is highlighted.
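The two-matrix structure described above can be sketched in a few lines: a fixed random address matrix A activates the hidden locations within a Hamming radius of the cue, and a modifiable counter matrix C accumulates written words and is summed and thresholded on reads. The sizes and the activation radius below are illustrative only.

```python
# Minimal sparse distributed memory sketch (autoassociative store/recall).
import random

rng = random.Random(6)
N_ADDR, N_HIDDEN, N_DATA, RADIUS = 64, 500, 64, 28

A = [[rng.randint(0, 1) for _ in range(N_ADDR)] for _ in range(N_HIDDEN)]  # fixed
C = [[0] * N_DATA for _ in range(N_HIDDEN)]                                # modifiable

def active(addr):
    # Hidden locations whose fixed address lies within RADIUS of the cue.
    return [h for h in range(N_HIDDEN)
            if sum(a != b for a, b in zip(A[h], addr)) <= RADIUS]

def write(addr, data):
    for h in active(addr):
        for i, bit in enumerate(data):
            C[h][i] += 1 if bit else -1

def read(addr):
    sums = [0] * N_DATA
    for h in active(addr):
        for i in range(N_DATA):
            sums[i] += C[h][i]
    return [1 if s > 0 else 0 for s in sums]

word = [rng.randint(0, 1) for _ in range(N_ADDR)]
write(word, word)                       # store the word at its own address

cue = word[:]                           # noisy cue: flip 5 of 64 bits
for i in rng.sample(range(N_ADDR), 5):
    cue[i] ^= 1
recovered = read(cue)
matches = sum(a == b for a, b in zip(recovered, word))
print(matches, "of", N_ADDR, "bits recovered")
```

Because the cue's active set overlaps heavily with the write-time active set, the noisy cue still recovers the stored word, which is the error-correcting behavior that distinguishes SDM from conventional computer memory.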
Procelewska, Joanna; Galilea, Javier Llamas; Clerc, Frederic; Farrusseng, David; Schüth, Ferdi
2007-01-01
The objective of this work is the construction of a correlation between characteristics of heterogeneous catalysts, encoded in a descriptor vector, and their experimentally measured performances in the propene oxidation reaction. In this paper the key issue in the modeling process, namely the selection of adequate input variables, is explored. Several data-driven feature selection strategies were applied in order to estimate the differences in variance and information content of various attributes, and to compare their relative importance. Quantitative property-activity relationship techniques using probabilistic neural networks were used to create various semi-empirical models. Finally, a robust classification model was obtained that assigns solid compounds, described by selected attributes as input, to an appropriate performance class in the model reaction. The results make it evident that mathematical support for the primary attribute set proposed by chemists is highly desirable.
NASA Astrophysics Data System (ADS)
Sumargo, E.; Cayan, D. R.; Iacobellis, S.
2014-12-01
Obtaining accurate solar radiation input to snowmelt runoff models remains a fundamental challenge for water supply forecasters in the mountainous western U.S. The variability of cloud cover is a primary source of uncertainty in estimating surface radiation, especially given that ground-based radiometer networks in mountain terrains are sparse. Thus, remote sensed cloud properties provide a way to extend in situ observations and more importantly, to understand cloud variability in montane environment. We utilize 17 years of NASA/NOAA GOES visible albedo product with 4 km spatial and half-hour temporal resolutions to investigate daytime cloud variability in the western U.S. at elevations above 800 m. REOF/PC analysis finds that the 5 leading modes account for about two-thirds of the total daily cloud albedo variability during the whole year (ALL) and snowmelt season (AMJJ). The AMJJ PCs are significantly correlated with de-seasonalized snowmelt derived from CDWR CDEC and NRCS SNOTEL SWE data and USGS stream discharge across the western conterminous states. The sum of R2 from 7 days prior to the day of snowmelt/discharge amounts to as much as ~52% on snowmelt and ~44% on discharge variation. Spatially, the correlation patterns take on broad footprints, with strongest signals in regions of highest REOF weightings. That the response of snowmelt and streamflow to cloud variation is spread across several days indicates the cumulative effect of cloud variation on the energy budget in mountain catchments.
NASA Technical Reports Server (NTRS)
Lewis, Mark David (Inventor); Seal, Michael R. (Inventor); Hood, Kenneth Brown (Inventor); Johnson, James William (Inventor)
2007-01-01
Remotely sensed spectral image data are used to develop a Vegetation Index file which represents spatial variations of actual crop vigor throughout a field that is under cultivation. The latter information is processed to place it in a format that can be used by farm personnel to correlate and calibrate it with actually observed crop conditions existing at control points within the field. Based on the results, farm personnel formulate a prescription request, which is forwarded via email or FTP to a central processing site, where the prescription is prepared. The latter is returned via email or FTP to on-site farm personnel, who can load it into a controller on a spray rig that directly applies inputs to the field at a spatially variable rate.
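As one illustration of how such a per-pixel Vegetation Index might be computed, here is a sketch using NDVI, a common index; the patent does not specify which index is used, and the band arrays below are invented values.

```python
import numpy as np

# Hypothetical sketch: a Vegetation Index file could be built from a
# per-pixel index such as NDVI = (NIR - Red) / (NIR + Red).
# The reflectance arrays are illustrative, not real imagery.
def ndvi(nir, red, eps=1e-9):
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / (nir + red + eps)   # eps guards against 0/0

nir = np.array([[0.6, 0.5], [0.8, 0.4]])     # near-infrared reflectance
red = np.array([[0.1, 0.2], [0.1, 0.3]])     # red reflectance
vi = ndvi(nir, red)   # values near +1 indicate vigorous vegetation
```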
NASA Technical Reports Server (NTRS)
Hood, Kenneth Brown (Inventor); Johnson, James William (Inventor); Seal, Michael R. (Inventor); Lewis, Mark David (Inventor)
2004-01-01
Remotely sensed spectral image data are used to develop a Vegetation Index file which represents spatial variations of actual crop vigor throughout a field that is under cultivation. The latter information is processed to place it in a format that can be used by farm personnel to correlate and calibrate it with actually observed crop conditions existing at control points within the field. Based on the results, farm personnel formulate a prescription request, which is forwarded via email or FTP to a central processing site, where the prescription is prepared. The latter is returned via email or FTP to on-site farm personnel, who can load it into a controller on a spray rig that directly applies inputs to the field at a spatially variable rate.
NASA Technical Reports Server (NTRS)
Suarez, Max J. (Editor); Chang, Alfred T. C.; Chiu, Long S.
1997-01-01
Seventeen months of rainfall data (August 1987-December 1988) from nine satellite rainfall algorithms (Adler, Chang, Kummerow, Prabhakara, Huffman, Spencer, Susskind, and Wu) were analyzed to examine the uncertainty of satellite-derived rainfall estimates. The variability among algorithms, measured as the standard deviation computed from the ensemble of algorithms, shows that regions of high algorithm variability tend to coincide with regions of high rain rates. Histograms of pattern correlation (PC) between algorithms suggest a bimodal distribution, with separation at a PC value of about 0.85. Applying this threshold as a criterion for similarity, our analyses show that algorithms using the same sensor or satellite input tend to be similar, suggesting the dominance of sampling errors in these satellite estimates.
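A minimal sketch of the similarity criterion described above, assuming pattern correlation is the centered Pearson correlation of two gridded fields; the synthetic rain-rate maps are illustrative stand-ins for algorithm output.

```python
import numpy as np

# Sketch: pattern correlation (PC) between two gridded rain-rate maps,
# with PC > 0.85 taken as the "similar algorithms" threshold.
def pattern_correlation(a, b):
    a = a.ravel() - a.mean()
    b = b.ravel() - b.mean()
    return float(a @ b / np.sqrt((a @ a) * (b @ b)))

rng = np.random.default_rng(1)
base = rng.random((10, 10))                               # one algorithm's map
similar = base + 0.05 * rng.standard_normal((10, 10))     # same-sensor analogue
different = rng.random((10, 10))                          # unrelated map

pc_similar = pattern_correlation(base, similar)
pc_different = pattern_correlation(base, different)
is_similar = pc_similar > 0.85
```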
NASA Astrophysics Data System (ADS)
Duan, Shuiwang; Bianchi, Thomas S.; Shiller, Alan M.; Dria, Karl; Hatcher, Patrick G.; Carman, Kevin R.
2007-06-01
In this study, we examined the temporal and spatial variability of dissolved organic matter (DOM) abundance and composition in the lower Mississippi and Pearl rivers and the effects of human and natural influences. In particular, we looked at the bulk C/N ratio, stable isotopes (δ15N and δ13C), and 13C nuclear magnetic resonance (NMR) spectrometry of high molecular weight (HMW; 0.2 μm to 1 kDa) DOM. Monthly water samples were collected at one station in each river from August 2001 to 2003. Surveys of the spatial variability of total dissolved organic carbon (DOC) and nitrogen (DON) were also conducted in June 2003, over 390 km downstream in the Mississippi River and from Jackson to Stennis Space Center in the Pearl River. Higher DOC (336-1170 μM), C/N ratio, % aromaticity, and more depleted δ15N (0.76-2.1‰) were observed in the Pearl than in the lower Mississippi River (223-380 μM and 4.7-11.5‰, respectively). DOC, C/N ratio, δ13C, δ15N, and % aromaticity of Pearl River HMW DOM were correlated with water discharge, which indicated a coupling between local soil inputs and regional precipitation events. Conversely, seasonal variability in the lower Mississippi River was controlled more by the spatial variability of a larger integrative signal from the watershed as well as by in situ DOM processing. Spatially, very little change occurred in total DOC in the downstream survey of the lower Mississippi River, compared to a decrease of 24% in the Pearl River. Differences in DOM between these two rivers reflected the Mississippi River having more extensive river processing of terrestrial DOM, more phytoplankton inputs, and greater anthropogenic perturbation than the Pearl River.
Examining Equity Sensitivity: An Investigation Using the Big Five and HEXACO Models of Personality.
Woodley, Hayden J R; Bourdage, Joshua S; Ogunfowora, Babatunde; Nguyen, Brenda
2015-01-01
The construct of equity sensitivity describes an individual's preference about his/her desired input to outcome ratio. Individuals high on equity sensitivity tend to be more input oriented, and are often called "Benevolents." Individuals low on equity sensitivity are more outcome oriented, and are described as "Entitleds." Given that equity sensitivity has often been described as a trait, the purpose of the present study was to examine major personality correlates of equity sensitivity, so as to inform both the nature of equity sensitivity, and the potential processes through which certain broad personality traits may relate to outcomes. We examined the personality correlates of equity sensitivity across three studies (total N = 1170), two personality models (i.e., the Big Five and HEXACO), the two most common measures of equity sensitivity (i.e., the Equity Preference Questionnaire and Equity Sensitivity Inventory), and using both self and peer reports of personality (in Study 3). Although results varied somewhat across samples, the personality variables of Conscientiousness and Honesty-Humility, followed by Agreeableness, were the most robust predictors of equity sensitivity. Individuals higher on these traits were more likely to be Benevolents, whereas those lower on these traits were more likely to be Entitleds. Although some associations between Extraversion, Openness, and Neuroticism and equity sensitivity were observed, these were generally not robust. Overall, it appears that there are several prominent personality variables underlying equity sensitivity, and that the addition of the HEXACO model's dimension of Honesty-Humility substantially contributes to our understanding of equity sensitivity.
Examining Equity Sensitivity: An Investigation Using the Big Five and HEXACO Models of Personality
Woodley, Hayden J. R.; Bourdage, Joshua S.; Ogunfowora, Babatunde; Nguyen, Brenda
2016-01-01
The construct of equity sensitivity describes an individual's preference about his/her desired input to outcome ratio. Individuals high on equity sensitivity tend to be more input oriented, and are often called “Benevolents.” Individuals low on equity sensitivity are more outcome oriented, and are described as “Entitleds.” Given that equity sensitivity has often been described as a trait, the purpose of the present study was to examine major personality correlates of equity sensitivity, so as to inform both the nature of equity sensitivity, and the potential processes through which certain broad personality traits may relate to outcomes. We examined the personality correlates of equity sensitivity across three studies (total N = 1170), two personality models (i.e., the Big Five and HEXACO), the two most common measures of equity sensitivity (i.e., the Equity Preference Questionnaire and Equity Sensitivity Inventory), and using both self and peer reports of personality (in Study 3). Although results varied somewhat across samples, the personality variables of Conscientiousness and Honesty-Humility, followed by Agreeableness, were the most robust predictors of equity sensitivity. Individuals higher on these traits were more likely to be Benevolents, whereas those lower on these traits were more likely to be Entitleds. Although some associations between Extraversion, Openness, and Neuroticism and equity sensitivity were observed, these were generally not robust. Overall, it appears that there are several prominent personality variables underlying equity sensitivity, and that the addition of the HEXACO model's dimension of Honesty-Humility substantially contributes to our understanding of equity sensitivity. PMID:26779102
Silvestro, Paolo Cosmo; Pignatti, Stefano; Yang, Hao; Yang, Guijun; Pascucci, Simone; Castaldi, Fabio; Casa, Raffaele
2017-01-01
Process-based models can be usefully employed for the assessment of field- and regional-scale impacts of drought on crop yields. However, in many instances, especially when they are used at the regional scale, it is necessary to identify the parameters and input variables that most influence the outputs and to assess how their influence varies when climatic and environmental conditions change. In this work, two different crop models able to represent yield response to water, Aquacrop and SAFYE, were compared, with the aim of quantifying their complexity and plasticity through Global Sensitivity Analysis (GSA), using the Morris and EFAST (Extended Fourier Amplitude Sensitivity Test) techniques, for moderately to strongly water-limited climate scenarios. Although the rankings of the sensitivity indices were influenced by the scenarios used, the correlation among the rankings, assessed by the top-down correlation coefficient (TDCC) and higher for SAFYE than for Aquacrop, revealed clear patterns. Parameters and input variables related to phenology and to water-stress physiological processes were found to be the most influential for Aquacrop. For SAFYE, it was found that water stress could be inferred indirectly from the processes regulating leaf growth, described in the original SAFY model. SAFYE has a lower complexity and plasticity than Aquacrop, making it more suitable for less data-demanding regional-scale applications when the only objective is the assessment of crop yield and no detailed information is sought on the mechanisms of the stress factors limiting it.
Pignatti, Stefano; Yang, Hao; Yang, Guijun; Pascucci, Simone; Castaldi, Fabio
2017-01-01
Process-based models can be usefully employed for the assessment of field- and regional-scale impacts of drought on crop yields. However, in many instances, especially when they are used at the regional scale, it is necessary to identify the parameters and input variables that most influence the outputs and to assess how their influence varies when climatic and environmental conditions change. In this work, two different crop models able to represent yield response to water, Aquacrop and SAFYE, were compared, with the aim of quantifying their complexity and plasticity through Global Sensitivity Analysis (GSA), using the Morris and EFAST (Extended Fourier Amplitude Sensitivity Test) techniques, for moderately to strongly water-limited climate scenarios. Although the rankings of the sensitivity indices were influenced by the scenarios used, the correlation among the rankings, assessed by the top-down correlation coefficient (TDCC) and higher for SAFYE than for Aquacrop, revealed clear patterns. Parameters and input variables related to phenology and to water-stress physiological processes were found to be the most influential for Aquacrop. For SAFYE, it was found that water stress could be inferred indirectly from the processes regulating leaf growth, described in the original SAFY model. SAFYE has a lower complexity and plasticity than Aquacrop, making it more suitable for less data-demanding regional-scale applications when the only objective is the assessment of crop yield and no detailed information is sought on the mechanisms of the stress factors limiting it. PMID:29107963
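The top-down correlation coefficient used above to compare sensitivity rankings can be sketched as the Pearson correlation of Savage scores, following the standard Iman-Conover construction; the example rankings are invented for illustration.

```python
import numpy as np

# Sketch of the top-down correlation coefficient (TDCC): the Pearson
# correlation of Savage scores, which weights agreement at the *top* of
# two sensitivity rankings more heavily than at the bottom.
def savage_scores(ranking):
    # ranking[i] = rank of item i (1 = most influential)
    n = len(ranking)
    s = np.cumsum(1.0 / np.arange(n, 0, -1))[::-1]  # s[r-1] = sum_{j=r}^{n} 1/j
    return s[np.asarray(ranking) - 1]

def tdcc(rank_a, rank_b):
    sa, sb = savage_scores(rank_a), savage_scores(rank_b)
    return float(np.corrcoef(sa, sb)[0, 1])

perfect = tdcc([1, 2, 3, 4], [1, 2, 3, 4])       # identical rankings
swapped_top = tdcc([1, 2, 3, 4], [2, 1, 3, 4])   # disagreement at the top
swapped_bottom = tdcc([1, 2, 3, 4], [1, 2, 4, 3])  # disagreement at the bottom
```

Note that a swap among the most influential parameters lowers the TDCC more than the same swap among the least influential ones, which is why it suits sensitivity-ranking comparisons.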
Delpierre, Nicolas; Berveiller, Daniel; Granda, Elena; Dufrêne, Eric
2016-04-01
Although the analysis of flux data has increased our understanding of the interannual variability of carbon inputs into forest ecosystems, we still know little about the determinants of wood growth. Here, we aimed to identify which drivers control the interannual variability of wood growth in a mesic temperate deciduous forest. We analysed a 9-yr time series of carbon fluxes and aboveground wood growth (AWG), reconstructed at a weekly time-scale through the combination of dendrometer and wood density data. Carbon inputs and AWG anomalies appeared to be uncorrelated from the seasonal to interannual scales. More than 90% of the interannual variability of AWG was explained by a combination of the growth intensity during a first 'critical period' of the wood growing season, occurring close to the seasonal maximum, and the timing of the first summer growth halt. Both atmospheric and soil water stress exerted a strong control on the interannual variability of AWG at the study site, despite its mesic conditions, whilst not affecting carbon inputs. Carbon sink activity, not carbon inputs, determined the interannual variations in wood growth at the study site. Our results provide a functional understanding of the dependence of radial growth on precipitation observed in dendrological studies. © 2015 The Authors. New Phytologist © 2015 New Phytologist Trust.
NASA Technical Reports Server (NTRS)
Deng, Yue
2014-01-01
Describes solar energy inputs contributing to ionospheric and thermospheric weather processes, including total energy amounts, distributions and the correlation between particle precipitation and Poynting flux.
Input Variability Facilitates Unguided Subcategory Learning in Adults
Eidsvåg, Sunniva Sørhus; Austad, Margit; Asbjørnsen, Arve E.
2015-01-01
Purpose: This experiment investigated whether input variability would affect initial learning of noun gender subcategories in an unfamiliar, natural language (Russian), as it is known to assist learning of other grammatical forms. Method: Forty adults (20 men, 20 women) were familiarized with examples of masculine and feminine Russian words. Half of the participants were familiarized with 32 different root words in a high-variability condition. The other half were familiarized with 16 different root words, each repeated twice for a total of 32 presentations in a high-repetition condition. Participants were tested on untrained members of the category to assess generalization. Familiarization and testing was completed 2 additional times. Results: Only participants in the high-variability group showed evidence of learning after an initial period of familiarization. Participants in the high-repetition group were able to learn after additional input. Both groups benefited when words included 2 cues to gender compared to a single cue. Conclusions: The results demonstrate that the degree of input variability can influence learners' ability to generalize a grammatical subcategory (noun gender) from a natural language. In addition, the presence of multiple cues to linguistic subcategory facilitated learning independent of variability condition. PMID:25680081
Input Variability Facilitates Unguided Subcategory Learning in Adults.
Eidsvåg, Sunniva Sørhus; Austad, Margit; Plante, Elena; Asbjørnsen, Arve E
2015-06-01
This experiment investigated whether input variability would affect initial learning of noun gender subcategories in an unfamiliar, natural language (Russian), as it is known to assist learning of other grammatical forms. Forty adults (20 men, 20 women) were familiarized with examples of masculine and feminine Russian words. Half of the participants were familiarized with 32 different root words in a high-variability condition. The other half were familiarized with 16 different root words, each repeated twice for a total of 32 presentations in a high-repetition condition. Participants were tested on untrained members of the category to assess generalization. Familiarization and testing was completed 2 additional times. Only participants in the high-variability group showed evidence of learning after an initial period of familiarization. Participants in the high-repetition group were able to learn after additional input. Both groups benefited when words included 2 cues to gender compared to a single cue. The results demonstrate that the degree of input variability can influence learners' ability to generalize a grammatical subcategory (noun gender) from a natural language. In addition, the presence of multiple cues to linguistic subcategory facilitated learning independent of variability condition.
Analog Correlator Based on One Bit Digital Correlator
NASA Technical Reports Server (NTRS)
Prokop, Norman (Inventor); Krasowski, Michael (Inventor)
2017-01-01
A two-input time-domain correlator may perform analog correlation. In order to achieve high throughput rates with reduced or minimal computational overhead, the input data streams may be hard limited through adaptive thresholding to yield two binary bit streams. Correlation may be achieved through the use of a Hamming distance calculation, where the distance between the two bit streams approximates the time delay that separates them; the resulting estimate approximates the correlation time delay with high accuracy.
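A minimal sketch of the correlator's principle: hard-limit both streams with an adaptive threshold (a median threshold is assumed here; the patent leaves the thresholding scheme open) and pick the lag that minimizes the Hamming distance between the bit streams.

```python
import numpy as np

# Sketch of a one-bit time-delay estimator: threshold two analog streams
# into bits, then scan lags for the minimum Hamming distance.
def one_bit(x):
    x = np.asarray(x)
    return (x > np.median(x)).astype(np.uint8)   # adaptive (median) threshold

def estimate_delay(x, y, max_lag):
    bx, by = one_bit(x), one_bit(y)
    n = len(bx)
    best_lag, best_dist = 0, n + 1
    for lag in range(max_lag + 1):
        # Hamming distance between overlapping portions at this lag
        d = int(np.count_nonzero(bx[: n - lag] ^ by[lag:]))
        if d < best_dist:
            best_lag, best_dist = lag, d
    return best_lag

rng = np.random.default_rng(2)
sig = rng.standard_normal(500)
delayed = np.concatenate([np.zeros(7), sig[:-7]])  # y lags x by 7 samples
lag = estimate_delay(sig, delayed, max_lag=20)
```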
Speed control system for an access gate
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bzorgi, Fariborz M
2012-03-20
An access control apparatus for an access gate. The access gate typically has a rotator that is configured to rotate around a rotator axis at a first variable speed in a forward direction. The access control apparatus may include a transmission that typically has an input element that is operatively connected to the rotator. The input element is generally configured to rotate at an input speed that is proportional to the first variable speed. The transmission typically also has an output element that has an output speed that is higher than the input speed. The input element and the output element may rotate around a common transmission axis. A retardation mechanism may be employed. The retardation mechanism is typically configured to rotate around a retardation mechanism axis. Generally the retardation mechanism is operatively connected to the output element of the transmission and is configured to retard motion of the access gate in the forward direction when the first variable speed is above a control-limit speed. In many embodiments the transmission axis and the retardation mechanism axis are substantially co-axial. Some embodiments include a freewheel/catch mechanism that has an input connection that is operatively connected to the rotator. The input connection may be configured to engage an output connection when the rotator is rotated at the first variable speed in a forward direction and configured for substantially unrestricted rotation when the rotator is rotated in a reverse direction opposite the forward direction. The input element of the transmission is typically operatively connected to the output connection of the freewheel/catch mechanism.
Austin, Bradley J; Hardgrave, Natalia; Inlander, Ethan; Gallipeau, Cory; Entrekin, Sally; Evans-White, Michelle A
2015-10-01
Construction of unconventional natural gas (UNG) infrastructure (e.g., well pads, pipelines) is an increasingly common anthropogenic stressor that increases potential sediment erosion. Increased sediment inputs into nearby streams may decrease autotrophic processes through burial and scour, or sediment-bound nutrients could have a positive effect through alleviating potential nutrient limitations. Ten streams with varying catchment UNG well densities (0-3.6 wells/km²) were sampled during winter and spring of 2010 and 2011 to examine relationships between landscape-scale disturbances associated with UNG activity and stream periphyton [chlorophyll a (Chl a)] and gross primary production (GPP). Local-scale variables including light availability and water column physicochemical variables were measured for each study site. Correlation analyses examined the relationships of autotrophic processes and local-scale variables with the landscape-scale variables percent pasture land use and UNG metrics (well density and well pad inverse flow path length). Both GPP and Chl a were primarily positively associated with the UNG activity metrics during most sample periods; however, neither landscape variables nor response variables correlated well with local-scale factors. These positive correlations do not confirm causation, but they do suggest that it is possible that UNG development can alleviate one or more limiting factors on autotrophic production within these streams. A secondary manipulative study was used to examine the link between nutrient limitation and algal growth across a gradient of streams impacted by natural gas activity. Nitrogen limitation was common among minimally impacted stream reaches and was alleviated in streams with high UNG activity. These data provide evidence that UNG may stimulate the primary production of Fayetteville shale streams via alleviation of N-limitation.
Restricting UNG activities from the riparian zone along with better enforcement of best management practices should help reduce these possible impacts of UNG activities on stream autotrophic processes. Copyright © 2015 Elsevier B.V. All rights reserved.
Estuary-ocean connectivity: Fast physics, slow biology
Raimonet, Mélanie; Cloern, James E.
2017-01-01
Estuaries are connected to both land and ocean so their physical, chemical, and biological dynamics are influenced by climate patterns over watersheds and ocean basins. We explored climate-driven oceanic variability as a source of estuarine variability by comparing monthly time series of temperature and chlorophyll-a inside San Francisco Bay with those in adjacent shelf waters of the California Current System (CCS) that are strongly responsive to wind-driven upwelling. Monthly temperature fluctuations inside and outside the Bay were synchronous, but their correlations weakened with distance from the ocean. These results illustrate how variability of coastal water temperature (and associated properties such as nitrate and oxygen) propagates into estuaries through fast water exchanges that dissipate along the estuary. Unexpectedly, there was no correlation between monthly chlorophyll-a variability inside and outside the Bay. However, at the annual scale Bay chlorophyll-a was significantly correlated with the Spring Transition Index (STI) that sets biological production supporting fish recruitment in the CCS. Wind forcing of the CCS shifted in the late 1990s when the STI advanced 40 days. This shift was followed, with lags of 1–3 years, by 3- to 19-fold increased abundances of five ocean-produced demersal fish and crustaceans and 2.5-fold increase of summer chlorophyll-a in the Bay. These changes reflect a slow biological process of estuary–ocean connectivity operating through the immigration of fish and crustaceans that prey on bivalves, reduce their grazing pressure, and allow phytoplankton biomass to build. We identified clear signals of climate-mediated oceanic variability in this estuary and discovered that the response patterns vary with the process of connectivity and the timescale of ocean variability. 
This result has important implications for managing nutrient inputs to estuaries connected to upwelling systems, and for assessing their responses to changing patterns of upwelling timing and intensity as the planet continues to warm.
Simulations of Control Schemes for Inductively Coupled Plasma Sources
NASA Astrophysics Data System (ADS)
Ventzek, P. L. G.; Oda, A.; Shon, J. W.; Vitello, P.
1997-10-01
Process control issues are becoming increasingly important in plasma etching. Numerical experiments are an excellent test bench for evaluating a proposed control system. Models are generally reliable enough to provide information about controller robustness and the fitness of diagnostics. We will present results from a two-dimensional plasma transport code with a multi-species plasma chemistry obtained from a global model. [1-2] We will show a correlation of external etch parameters (e.g. input power) with internal plasma parameters (e.g. species fluxes), which in turn are correlated with etch results (etch rate, uniformity, and selectivity), either by comparison to experiment or by using a phenomenological etch model. After process characterization, a control scheme can be evaluated, since the variable to be controlled (e.g. uniformity) is related to the measurable variable (e.g. a density) and an external parameter (e.g. coil current). We will present an evaluation using the HBr-Cl2 system as an example. [1] E. Meeks and J. W. Shon, IEEE Trans. on Plasma Sci., 23, 539, 1995. [2] P. Vitello, et al., IEEE Trans. on Plasma Sci., 24, 123, 1996.
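The closed-loop idea above, adjusting an external knob toward a target etch result through a measured internal variable, can be sketched with a toy proportional controller; the plant response and etch model below are invented linear stand-ins, not the paper's simulated chemistry.

```python
# Hypothetical sketch of the described control loop: an external parameter
# (coil power) drives an internal variable (a species density), which a
# phenomenological model maps to an etch rate; a proportional law steers
# the power toward a target rate. All coefficients are illustrative.
def density_from_power(p):
    return 0.4 * p          # assumed monotone plant response

def etch_rate(n):
    return 2.0 * n          # assumed phenomenological etch model

power = 100.0
target = 120.0
for _ in range(50):
    err = target - etch_rate(density_from_power(power))
    power += 0.5 * err      # proportional update (gain chosen for stability)
final_rate = etch_rate(density_from_power(power))
```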
Gupta, Himanshu; Schiros, Chun G; Sharifov, Oleg F; Jain, Apurva; Denney, Thomas S
2016-08-31
The recently released American College of Cardiology/American Heart Association (ACC/AHA) guideline recommends the Pooled Cohort equations for evaluating the atherosclerotic cardiovascular risk of individuals. The impact of uncertainties in the clinical input variables on the estimates of ten-year cardiovascular risk based on the ACC/AHA guidelines is not known. Using the publicly available National Health and Nutrition Examination Survey dataset (2005-2010), we computed maximum and minimum ten-year cardiovascular risks by assuming clinically relevant variations/uncertainties of 0-1 year in the input age and ±10% variation in total cholesterol, high-density lipoprotein cholesterol, and systolic blood pressure, and by assuming a uniform distribution of the variance of each variable. We analyzed the changes in risk category compared to the actual inputs at the 5% and 7.5% risk limits, as these limits define the thresholds for consideration of drug therapy in the new guidelines. The new Pooled Cohort equations for risk estimation were implemented in a custom software package. Based on our input variances, changes in risk category were possible in up to 24% of the population cohort at both the 5% and 7.5% risk boundary limits. This trend was consistently noted across all subgroups except in African American males, where most of the cohort had ≥7.5% baseline risk regardless of the variation in the variables. The uncertainties in the input variables can alter the risk categorization. The impact of these variances on the ten-year risk needs to be incorporated into the patient/clinician discussion and clinical decision making. Incorporating good clinical practices for the measurement of critical clinical variables and robust standardization of laboratory parameters to more stringent reference standards is extremely important for successful implementation of the new guidelines.
Furthermore, the ability to customize the risk calculator inputs to better represent unique clinical circumstances specific to individual needs would be highly desirable in future versions of the risk calculator.
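The min/max analysis can be sketched with a hypothetical logistic risk function standing in for the Pooled Cohort equations, whose actual sex- and race-specific coefficients are not reproduced here; every coefficient below is invented for illustration only.

```python
import math

# Hedged sketch: propagate input uncertainty (age +1 year; lipids and SBP
# +/-10%) through a *made-up* logistic risk function and check whether the
# resulting [min, max] interval straddles the 7.5% treatment threshold.
def risk(age, total_chol, hdl, sbp):
    # Illustrative coefficients only -- NOT the Pooled Cohort equations.
    z = -8.5 + 0.08 * age + 0.006 * total_chol - 0.02 * hdl + 0.01 * sbp
    return 1.0 / (1.0 + math.exp(-z))

def risk_range(age, tc, hdl, sbp):
    risks = [risk(age + da, tc * ftc, hdl * fh, sbp * fs)
             for da in (0.0, 1.0)          # age uncertain by up to 1 year
             for ftc in (0.9, 1.1)         # +/-10% total cholesterol
             for fh in (0.9, 1.1)          # +/-10% HDL cholesterol
             for fs in (0.9, 1.1)]         # +/-10% systolic blood pressure
    return min(risks), max(risks)

lo, hi = risk_range(age=55, tc=200, hdl=50, sbp=130)
crosses_7_5 = lo < 0.075 <= hi   # would the risk category change?
```

Because the (hypothetical) risk function is monotone in each input, checking only the extreme corners of the uncertainty box suffices to bound the risk.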
Aitkenhead, Matt J; Black, Helaina I J
2018-02-01
Using the International Centre for Research in Agroforestry-International Soil Reference and Information Centre (ICRAF-ISRIC) global soil spectroscopy database, models were developed to estimate a number of soil variables using different input data types. These input types included: (1) site data only; (2) visible-near-infrared (Vis-NIR) diffuse reflectance spectroscopy only; (3) combined site and Vis-NIR data; (4) red-green-blue (RGB) color data only; and (5) combined site and RGB color data. The models produced variable estimation accuracy, with RGB only being generally worst and spectroscopy plus site being best. However, we showed that for certain variables, the estimation accuracy achieved with the "site plus RGB input data" was sufficiently good to provide useful estimates (r² > 0.7). These included major elements (Ca, Si, Al, Fe), organic carbon, and cation exchange capacity. Estimates for bulk density, carbon-to-nitrogen (C/N) ratio, and P were moderately good, but K was not well estimated using this model type. For the "spectra plus site" model, many more variables were well estimated, including many that are important indicators of agricultural productivity and soil health. Sum of cations, electrical conductivity, Si, Ca, and Al oxides, and the C/N ratio were estimated using this approach with r² values > 0.9. This work provides a mechanism for identifying the cost-effectiveness of using different model input data, with associated costs, for estimating soil variables to required levels of accuracy.
Suprathreshold stochastic resonance in neural processing tuned by correlation.
Durrant, Simon; Kang, Yanmei; Stocks, Nigel; Feng, Jianfeng
2011-07-01
Suprathreshold stochastic resonance (SSR) is examined in the context of integrate-and-fire neurons, with an emphasis on the role of correlation in the neuronal firing. We employed a model based on a network of spiking neurons which received synaptic inputs modeled by Poisson processes stimulated by a stepped input signal. The smoothed ensemble firing rate provided an output signal, and the mutual information between this signal and the input was calculated for networks with different noise levels and different numbers of neurons. It was found that an SSR effect was present in this context. We then examined a more biophysically plausible scenario where the noise was not controlled directly, but instead was tuned by the correlation between the inputs. The SSR effect remained present in this scenario with nonzero noise providing improved information transmission, and it was found that negative correlation between the inputs was optimal. Finally, an examination of SSR in the context of this model revealed its connection with more traditional stochastic resonance and showed a trade-off between suprathreshold and subthreshold components. We discuss these results in the context of existing empirical evidence concerning correlations in neuronal firing.
Suprathreshold stochastic resonance in neural processing tuned by correlation
NASA Astrophysics Data System (ADS)
Durrant, Simon; Kang, Yanmei; Stocks, Nigel; Feng, Jianfeng
2011-07-01
Suprathreshold stochastic resonance (SSR) is examined in the context of integrate-and-fire neurons, with an emphasis on the role of correlation in the neuronal firing. We employed a model based on a network of spiking neurons which received synaptic inputs modeled by Poisson processes stimulated by a stepped input signal. The smoothed ensemble firing rate provided an output signal, and the mutual information between this signal and the input was calculated for networks with different noise levels and different numbers of neurons. It was found that an SSR effect was present in this context. We then examined a more biophysically plausible scenario where the noise was not controlled directly, but instead was tuned by the correlation between the inputs. The SSR effect remained present in this scenario with nonzero noise providing improved information transmission, and it was found that negative correlation between the inputs was optimal. Finally, an examination of SSR in the context of this model revealed its connection with more traditional stochastic resonance and showed a trade-off between suprathreshold and subthreshold components. We discuss these results in the context of existing empirical evidence concerning correlations in neuronal firing.
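One ingredient of such a model, generating pairs of input noise streams with a prescribed correlation, can be sketched with a Cholesky factor; the sample size and the negative target correlation below are arbitrary choices for illustration.

```python
import numpy as np

# Sketch: build two Gaussian noise streams with correlation rho via the
# Cholesky factor of the 2x2 correlation matrix, as one would when the
# effective noise level is "tuned" by the correlation between inputs.
def correlated_pair(n, rho, rng):
    z = rng.standard_normal((2, n))                       # independent streams
    L = np.linalg.cholesky(np.array([[1.0, rho],
                                     [rho, 1.0]]))
    return L @ z                                          # rows now correlate

rng = np.random.default_rng(3)
x, y = correlated_pair(100_000, rho=-0.5, rng=rng)        # negative correlation
empirical_rho = float(np.corrcoef(x, y)[0, 1])
```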
Fan, Shu-Xiang; Huang, Wen-Qian; Li, Jiang-Bo; Guo, Zhi-Ming; Zhaq, Chun-Jiang
2014-10-01
In order to detect the soluble solids content (SSC) of apple conveniently and rapidly, a ring fiber probe and a portable spectrometer were applied to obtain the spectra of apples. Different wavelength variable selection methods, including uninformative variable elimination (UVE), competitive adaptive reweighted sampling (CARS), and genetic algorithm (GA), were proposed to select effective wavelength variables of the NIR spectra of SSC in apple based on PLS. The back interval LS-SVM (BiLS-SVM) and GA were used to select effective wavelength variables based on LS-SVM. Selected wavelength variables and the full wavelength range were set as input variables of the PLS model and LS-SVM model, respectively. The results indicated that the PLS model built using GA-CARS on 50 characteristic variables, selected from the full spectrum of 1512 wavelengths, achieved the optimal performance. The correlation coefficient (Rp) and root mean square error of prediction (RMSEP) for the prediction set were 0.962 and 0.403 °Brix, respectively, for SSC. The proposed GA-CARS method could effectively simplify the portable detection model of SSC in apple based on near-infrared spectroscopy and enhance the predictive precision. The study can provide a reference for the development of a portable apple soluble solids content spectrometer.
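The two reported prediction-set metrics can be sketched directly; the example measured and predicted SSC values below are invented for illustration.

```python
import numpy as np

# Sketch of the prediction-set metrics used above: the correlation
# coefficient Rp and the root mean square error of prediction (RMSEP).
def rp(y_true, y_pred):
    return float(np.corrcoef(y_true, y_pred)[0, 1])

def rmsep(y_true, y_pred):
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

y_true = np.array([11.2, 12.5, 13.1, 14.0, 12.0])   # measured SSC (degrees Brix)
y_pred = np.array([11.0, 12.9, 13.0, 13.6, 12.3])   # model predictions
r = rp(y_true, y_pred)
e = rmsep(y_true, y_pred)
```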
Long-term limnological data from the larger lakes of Yellowstone National Park, Wyoming, USA
Theriot, E.C.; Fritz, S.C.; Gresswell, Robert E.
1997-01-01
Long-term limnological data from the four largest lakes in Yellowstone National Park (Yellowstone, Lewis, Shoshone, Heart) are used to characterize their limnology and patterns of temporal and spatial variability. Heart Lake has distinctively high concentrations of dissolved materials, apparently reflecting high thermal inputs. Shoshone and Lewis lakes have the highest total SiO2 concentrations (averaging over 23.5 mg L-1), apparently as a result of the rhyolitic drainage basins. Within Yellowstone Lake spatial variability is low and ephemeral for most measured variables, except that the Southeast Arm has lower average Na concentrations. Seasonal variation is evident for Secchi transparency, pH, and total-SiO2 and probably reflects seasonal changes in phytoplankton biomass and productivity. Total dissolved solids (TDS) and total-SiO2 generally show a gradual decline from the mid-1970s through mid-1980s, followed by a sharp increase. Ratios of Kjeldahl-N to total-PO4 (KN:TP) suggest that the lakes, especially Shoshone, are often nitrogen limited. Kjeldahl-N is positively correlated with winter precipitation, but TP and total-SiO2 are counterintuitively negatively correlated with precipitation. We speculate that increased winter precipitation, rather than watershed fires, increases N-loading which, in turn, leads to increased demand for TP and total SiO2.
Stimulus Dependence of Correlated Variability across Cortical Areas
Cohen, Marlene R.
2016-01-01
The way that correlated trial-to-trial variability between pairs of neurons in the same brain area (termed spike count or noise correlation, rSC) depends on stimulus or task conditions can constrain models of cortical circuits and of the computations performed by networks of neurons (Cohen and Kohn, 2011). In visual cortex, rSC tends not to depend on stimulus properties (Kohn and Smith, 2005; Huang and Lisberger, 2009) but does depend on cognitive factors like visual attention (Cohen and Maunsell, 2009; Mitchell et al., 2009). However, neurons in multiple visual areas respond to any given visual stimulus and contribute to any given perceptual decision, and the way that information from multiple areas is combined to guide perception is unknown. To gain insight into these issues, we recorded simultaneously from neurons in two areas of visual cortex (primary visual cortex, V1, and the middle temporal area, MT) while rhesus monkeys viewed different visual stimuli in different attention conditions. We found that correlations between neurons in different areas depend on stimulus and attention conditions in very different ways than do correlations within an area. Correlations across, but not within, areas depend on stimulus direction and the presence of a second stimulus, and attention has opposite effects on correlations within and across areas. This observed pattern of cross-area correlations is predicted by a normalization model in which MT units sum V1 inputs that are passed through a divisive nonlinearity. Together, our results provide insight into how neurons in different areas interact and constrain models of the neural computations performed across cortical areas. SIGNIFICANCE STATEMENT Correlations in the responses of pairs of neurons within the same cortical area have been a subject of growing interest in systems neuroscience. However, correlated variability between different cortical areas is likely just as important.
We recorded simultaneously from neurons in primary visual cortex and the middle temporal area while rhesus monkeys viewed different visual stimuli in different attention conditions. We found that correlations between neurons in different areas depend on stimulus and attention conditions in very different ways than do correlations within an area. The observed pattern of cross-area correlations was predicted by a simple normalization model. Our results provide insight into how neurons in different areas interact and constrain models of the neural computations performed across cortical areas. PMID:27413163
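A toy version of the normalization account, with hypothetical V1 drives, a shared trial-to-trial gain, and a made-up semi-saturation constant (none of these are the paper's fitted values), might look like:

```python
import numpy as np

rng = np.random.default_rng(2)
n_trials, n_v1 = 5_000, 40
tuning = rng.uniform(0.5, 1.5, n_v1)                  # hypothetical V1 drive per neuron
gain = 1.0 + 0.3 * rng.normal(size=(n_trials, 1))     # shared trial-to-trial gain
v1 = gain * tuning + 0.2 * rng.normal(size=(n_trials, n_v1))  # V1 single-trial responses

pooled = v1.sum(axis=1)                 # MT unit sums its V1 inputs
mt_linear = pooled                      # plain summation, no normalization
sigma_half = np.median(pooled)          # semi-saturation constant (arbitrary choice)
mt_norm = pooled / (sigma_half + pooled)  # divisive nonlinearity

# Cross-area correlation of one V1 neuron with the MT unit, with and without
# the divisive step.
r_linear = float(np.corrcoef(v1[:, 0], mt_linear)[0, 1])
r_norm = float(np.corrcoef(v1[:, 0], mt_norm)[0, 1])
```

The point of the sketch is only structural: the cross-area correlation is inherited from shared V1 fluctuations and then reshaped by the divisive stage.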
NASA Astrophysics Data System (ADS)
Schiepers, Christiaan; Hoh, Carl K.; Dahlbom, Magnus; Wu, Hsiao-Ming; Phelps, Michael E.
1999-05-01
PET imaging can quantify metabolic processes in vivo; this requires the measurement of an input function, which is invasive and labor-intensive. A non-invasive, semi-automated, image-based method of input function generation would be efficient, patient-friendly, and allow quantitative PET to be applied routinely. A fully automated procedure would be ideal for studies across institutions. Factor analysis (FA) was applied as a processing tool for definition of temporally changing structures in the field of view. FA has been proposed earlier, but the perceived mathematical difficulty has prevented widespread use. FA was utilized to delineate structures and extract blood and tissue time-activity curves (TACs). These TACs were used as input and output functions for tracer kinetic modeling, the results of which were compared with those from an input function obtained with serial blood sampling. Dynamic image data of myocardial perfusion studies with N-13 ammonia, O-15 water, or Rb-82, cancer studies with F-18 FDG, and skeletal studies with F-18 fluoride were evaluated. Correlation coefficients of kinetic parameters obtained with factor and plasma input functions were high. Linear regression usually furnished a slope near unity. Processing time was 7 min/patient on an UltraSPARC. Conclusion: FA can non-invasively generate input functions from image data, eliminating the need for blood sampling. Output (tissue) functions can be generated simultaneously. The method is simple, requires no sophisticated operator interaction, and has little inter-operator variability. FA is well suited for studies across institutions and standardized evaluations.
NLEdit: A generic graphical user interface for Fortran programs
NASA Technical Reports Server (NTRS)
Curlett, Brian P.
1994-01-01
NLEdit is a generic graphical user interface for the preprocessing of Fortran namelist input files. The interface consists of a menu system, a message window, a help system, and data entry forms. A form is generated for each namelist. The form has an input field for each namelist variable along with a one-line description of that variable. Detailed help information, default values, and minimum and maximum allowable values can all be displayed via menu picks. Inputs are processed through a scientific calculator program that allows complex equations to be used instead of simple numeric inputs. A custom user interface is generated simply by entering information about the namelist input variables into an ASCII file. There is no need to learn a new graphics system or programming language. NLEdit can be used as a stand-alone program or as part of a larger graphical user interface. Although NLEdit is intended for files using namelist format, it can be easily modified to handle other file formats.
Computing Shapes Of Cascade Diffuser Blades
NASA Technical Reports Server (NTRS)
Tran, Ken; Prueger, George H.
1993-01-01
Computer program generates sizes and shapes of cascade-type blades for use in axial or radial turbomachine diffusers. Generates shapes of blades rapidly, incorporating extensive cascade data to determine optimum incidence and deviation angle for blade design based on 65-series data base of the National Advisory Committee for Aeronautics (NACA). Allows great variability in blade profile through input variables. Also provides for design of three-dimensional blades by allowing variable blade stacking. Enables designer to obtain computed blade-geometry data in various forms: as input for blade-loading analysis; as input for quasi-three-dimensional analysis of flow; or as points for transfer to computer-aided design.
NASA Technical Reports Server (NTRS)
Pera, R. J.; Onat, E.; Klees, G. W.; Tjonneland, E.
1977-01-01
Weight and envelope dimensions of aircraft gas turbine engines are estimated within plus or minus 5% to 10% using a computer method based on correlations of component weight and design features of 29 data base engines. Rotating components are estimated by a preliminary design procedure where blade geometry, operating conditions, material properties, shaft speed, hub-tip ratio, etc., are the primary independent variables used. The development and justification of the method selected, the various methods of analysis, the use of the program, and a description of the input/output data are discussed.
NASA Astrophysics Data System (ADS)
Flynn, S.; Knipp, D. J.; Matsuo, T.; Mlynczak, M. G.; Hunt, L. A.
2017-12-01
Storm time energy input to the upper atmosphere is countered by infrared radiative emissions from nitric oxide (NO). The temporal profile of these energy sources and losses strongly control thermospheric density profiles, which in turn affect the drag experienced by low Earth orbiting satellites. Storm time processes create NO. In some extreme cases an overabundance of NO emissions unexpectedly decreases atmospheric temperature and density to lower than pre-storm values. Quantifying the spatial and temporal variability of the NO emissions using eigenmodes will increase the understanding of how upper atmospheric NO behaves, and could be used to increase the accuracy of future space weather and climate models. Thirteen years of NO flux data, observed at 100-250 km altitude by the SABER instrument onboard the TIMED satellite, is decomposed into five empirical orthogonal functions (EOFs) and their amplitudes to: 1) determine the strongest modes of variability in the data set, and 2) develop a compact model of NO flux. The first five EOFs account for 85% of the variability in the data, and their uncertainty is verified using cross-validation analysis. Although these linearly independent EOFs are not necessarily independent in a geophysical sense, the first three EOFs correlate strongly with different geophysical processes. The first EOF correlates strongly with Kp and F10.7, suggesting that geomagnetic storms and solar weather account for a large portion of NO flux variability. EOF 2 shows annual variations, and EOF 3 correlates with solar wind parameters. Using these relations, an empirical model of the EOF amplitudes can be derived, which could be used as a predictive tool for future NO emissions. We illustrate the NO model, highlight some of the hemispheric asymmetries, and discuss the geophysical associations of the EOFs.
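An EOF decomposition of the kind described can be sketched with a plain SVD on synthetic data (the latitude grid, mode shapes, and amplitude time series below are invented stand-ins for the SABER NO flux maps):

```python
import numpy as np

rng = np.random.default_rng(3)
n_times, n_bins = 500, 60      # hypothetical time steps x latitude bins

# Synthetic "NO flux" built from two known spatial modes plus noise.
lat = np.linspace(-90, 90, n_bins)
mode1 = np.exp(-((np.abs(lat) - 70) / 15) ** 2)      # auroral-oval-like pattern
mode2 = np.cos(np.deg2rad(lat))                      # low-latitude pattern
amp1 = rng.normal(0, 3.0, n_times)                   # "storm-driven" amplitude
amp2 = np.sin(2 * np.pi * np.arange(n_times) / 180)  # slow "seasonal" amplitude
data = (np.outer(amp1, mode1) + np.outer(amp2, mode2)
        + 0.1 * rng.normal(size=(n_times, n_bins)))

# EOF analysis: remove the time mean, then take the SVD of the anomalies.
anom = data - data.mean(axis=0)
u, s, vt = np.linalg.svd(anom, full_matrices=False)
eofs = vt                         # rows are spatial EOF patterns
amplitudes = u * s                # columns are the corresponding time series
explained = s**2 / np.sum(s**2)   # fraction of variance per EOF
```

With a dominant storm-driven mode, the first EOF recovers the injected auroral pattern and the leading few EOFs capture most of the variance, mirroring the 85%-in-five-EOFs result quoted above.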
Neural Network Machine Learning and Dimension Reduction for Data Visualization
NASA Technical Reports Server (NTRS)
Liles, Charles A.
2014-01-01
Neural network machine learning in computer science is a continuously developing field of study. Although neural network models have been developed which can accurately predict a numeric value or nominal classification, a general purpose method for constructing neural network architecture has yet to be developed. Computer scientists are often forced to rely on a trial-and-error process of developing and improving accurate neural network models. In many cases, models are constructed from a large number of input parameters. Understanding which input parameters have the greatest impact on the prediction of the model is often difficult to surmise, especially when the number of input variables is very high. This challenge is often labeled the "curse of dimensionality" in scientific fields. However, techniques exist for reducing the dimensionality of problems to just two dimensions. Once a problem's dimensions have been mapped to two dimensions, it can be easily plotted and understood by humans. The ability to visualize a multi-dimensional dataset can provide a means of identifying which input variables have the highest effect on determining a nominal or numeric output. Identifying these variables can provide a better means of training neural network models; models can be more easily and quickly trained using only input variables which appear to affect the outcome variable. The purpose of this project is to explore varying means of training neural networks and to utilize dimensional reduction for visualizing and understanding complex datasets.
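The reduction to two plottable dimensions can be illustrated with a plain PCA projection (synthetic data; practitioners often reach for nonlinear methods such as t-SNE, but the idea of mapping many input variables to two coordinates is the same):

```python
import numpy as np

def project_to_2d(X):
    """PCA: project an (n_samples, n_features) matrix onto its top two PCs."""
    Xc = X - X.mean(axis=0)
    u, s, vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ vt[:2].T          # (n_samples, 2) coordinates for plotting

rng = np.random.default_rng(4)
# Hypothetical dataset: 200 samples, 30 input variables, but only 2 latent
# drivers actually matter -- the "curse of dimensionality" in miniature.
latent = rng.normal(size=(200, 2))
mixing = rng.normal(size=(2, 30))
X = latent @ mixing + 0.05 * rng.normal(size=(200, 30))
coords = project_to_2d(X)
```

Plotting `coords` (e.g. colored by the outcome variable) is the visualization step the abstract describes: inputs that separate the colors in the 2D map are candidates for a reduced training set.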
Decorrelation of Neural-Network Activity by Inhibitory Feedback
Einevoll, Gaute T.; Diesmann, Markus
2012-01-01
Correlations in spike-train ensembles can seriously impair the encoding of information by their spatio-temporal structure. An inevitable source of correlation in finite neural networks is common presynaptic input to pairs of neurons. Recent studies demonstrate that spike correlations in recurrent neural networks are considerably smaller than expected based on the amount of shared presynaptic input. Here, we explain this observation by means of a linear network model and simulations of networks of leaky integrate-and-fire neurons. We show that inhibitory feedback efficiently suppresses pairwise correlations and, hence, population-rate fluctuations, thereby assigning inhibitory neurons the new role of active decorrelation. We quantify this decorrelation by comparing the responses of the intact recurrent network (feedback system) and systems where the statistics of the feedback channel is perturbed (feedforward system). Manipulations of the feedback statistics can lead to a significant increase in the power and coherence of the population response. In particular, neglecting correlations within the ensemble of feedback channels or between the external stimulus and the feedback amplifies population-rate fluctuations by orders of magnitude. The fluctuation suppression in homogeneous inhibitory networks is explained by a negative feedback loop in the one-dimensional dynamics of the compound activity. Similarly, a change of coordinates exposes an effective negative feedback loop in the compound dynamics of stable excitatory-inhibitory networks. The suppression of input correlations in finite networks is explained by the population averaged correlations in the linear network model: In purely inhibitory networks, shared-input correlations are canceled by negative spike-train correlations. In excitatory-inhibitory networks, spike-train correlations are typically positive. 
Here, the suppression of input correlations is not a result of the mere existence of correlations between excitatory (E) and inhibitory (I) neurons, but a consequence of a particular structure of correlations among the three possible pairings (EE, EI, II). PMID:23133368
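A one-dimensional caricature of the mechanism, collapsing the population to a single leaky rate variable with negative (inhibitory) feedback, shows the suppression of population-rate fluctuations; the gain and time constant are hypothetical, not fitted network values:

```python
import numpy as np

def population_rate(gain, noise, tau=10.0, dt=0.1):
    """Euler simulation of tau * dr/dt = -(1 + gain) * r + noise(t).
    gain > 0 models inhibitory feedback; gain = 0 is the feedforward control."""
    r = np.zeros(len(noise))
    for t in range(1, len(noise)):
        r[t] = r[t - 1] + dt / tau * (-(1.0 + gain) * r[t - 1] + noise[t])
    return r

rng = np.random.default_rng(5)
drive = rng.normal(0.0, 1.0, 200_000)     # shared external input fluctuations
r_feedforward = population_rate(0.0, drive)
r_feedback = population_rate(8.0, drive)  # strong inhibitory feedback
var_ff = float(np.var(r_feedforward))
var_fb = float(np.var(r_feedback))
```

The feedback term shortens the effective time constant and damps the compound activity, the same negative feedback loop the abstract identifies in the one-dimensional dynamics.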
Effects of Anthropogenic Nitrogen Loading on Riverine Nitrogen Export in the Northeastern USA
NASA Astrophysics Data System (ADS)
Boyer, E. W.; Goodale, C. L.; Howarth, R. W.
2001-05-01
Human activities have greatly altered the nitrogen (N) cycle, accelerating the rate of N fixation in landscapes and delivery of N to water bodies. To examine the effects of anthropogenic N inputs on riverine N export, we quantified N inputs and riverine N loss for 16 catchments along a latitudinal profile from Maine to Virginia, which encompass a range of climatic variability and are major drainages to the coast of the North Atlantic Ocean. We quantified inputs of N to each catchment: atmospheric deposition, fertilizer application, agricultural and forest biological N fixation, and the net import of N in food and feed. We compared these inputs with N losses from the system in riverine export. The importance of the relative sources varies widely by watershed and is related to land use. Atmospheric deposition was the largest source (>60%) to the forested catchments of northern New England (e.g., Penobscot and Kennebec); import of N in food was the largest source of N to the more populated regions of southern New England (e.g., Charles and Blackstone); and agricultural inputs were the dominant N sources in the Mid-Atlantic region (e.g., Schuylkill and Potomac). Total N inputs to each catchment increased with percent cover in agriculture and urban land, and decreased with percent forest. Over the combined area of the catchments, net atmospheric deposition was the largest single source input (34%), followed by imports of N in food and feed (24%), fixation in agricultural lands (21%), fertilizer use (15%), and fixation in forests (6%). Riverine export of N is well correlated with N inputs, but it accounts for only a fraction (28%) of the total N inputs. This work provides an understanding of the sources of N in landscapes, and highlights how human activities impact N cycling in the northeast region.
Input Variability Facilitates Unguided Subcategory Learning in Adults
ERIC Educational Resources Information Center
Eidsvåg, Sunniva Sørhus; Austad, Margit; Plante, Elena; Asbjørnsen, Arve E.
2015-01-01
Purpose: This experiment investigated whether input variability would affect initial learning of noun gender subcategories in an unfamiliar, natural language (Russian), as it is known to assist learning of other grammatical forms. Method: Forty adults (20 men, 20 women) were familiarized with examples of masculine and feminine Russian words. Half…
Denitrification and inference of nitrogen sources in the karstic Floridan Aquifer
Heffernan, J.B.; Albertin, A.R.; Fork, M.L.; Katz, B.G.; Cohen, M.J.
2011-01-01
Aquifer denitrification is among the most poorly constrained fluxes in global and regional nitrogen budgets. The few direct measurements of denitrification in groundwaters provide limited information about its spatial and temporal variability, particularly at the scale of whole aquifers. Uncertainty in estimates of denitrification may also lead to underestimates of its effect on isotopic signatures of inorganic N, and thereby confound the inference of N source from these data. In this study, our objectives are to quantify the magnitude and variability of denitrification in the Upper Floridan Aquifer (UFA) and evaluate its effect on N isotopic signatures at the regional scale. Using dual noble gas tracers (Ne, Ar) to generate physical predictions of N2 gas concentrations for 112 observations from 61 UFA springs, we show that excess (i.e. denitrification-derived) N2 is highly variable in space and inversely correlated with dissolved oxygen (O2). A negative relationship between O2 and δ15N-NO3 across a larger dataset of 113 springs, well-constrained isotopic fractionation coefficients, and strong 15N:18O covariation further support inferences of denitrification in this uniquely organic-matter-poor system. Despite relatively low average rates, denitrification accounted for 32% of estimated aquifer N inputs across all sampled UFA springs. Back-calculations of source δ15N-NO3 based on denitrification progression suggest that isotopically enriched nitrate (NO3-) in many springs of the UFA reflects groundwater denitrification rather than urban- or animal-derived inputs. © Author(s) 2011.
Robust estimation of pulse wave transit time using group delay.
Meloni, Antonella; Zymeski, Heather; Pepe, Alessia; Lombardi, Massimo; Wood, John C
2014-03-01
To evaluate the efficiency of a novel transit time (Δt) estimation method from cardiovascular magnetic resonance flow curves. Flow curves were estimated from phase contrast images of 30 patients. Our method (TT-GD: transit time group delay) operates in the frequency domain and models the ascending aortic waveform as an input passing through a discrete-component "filter," producing the observed descending aortic waveform. The GD of the filter represents the average time delay (Δt) across individual frequency bands of the input. This method was compared with two previously described time-domain methods: TT-point, using the half-maximum of the curves, and TT-wave, using cross-correlation. High temporal resolution flow images were studied at multiple downsampling rates to study the impact of differences in temporal resolution. Mean Δts obtained with the three methods were comparable. The TT-GD method was the most robust to reduced temporal resolution. While the TT-GD and the TT-wave produced comparable results for velocity and flow waveforms, the TT-point resulted in significantly shorter Δts when calculated from velocity waveforms (difference: 1.8±2.7 msec; coefficient of variability: 8.7%). The TT-GD method was the most reproducible, with an intraobserver variability of 3.4% and an interobserver variability of 3.7%. Compared to the traditional TT-point and TT-wave methods, the TT-GD approach was more robust to the choice of temporal resolution, waveform type, and observer. Copyright © 2013 Wiley Periodicals, Inc.
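Both estimation ideas can be sketched on synthetic waveforms (a Gaussian pulse and its delayed copy; the sampling rate, delay, and frequency band below are illustrative choices, not the paper's protocol). The cross-correlation peak mirrors TT-wave; the slope of the transfer-function phase mirrors the group-delay idea behind TT-GD:

```python
import numpy as np

fs = 1000.0                      # hypothetical sampling rate, Hz
t = np.arange(0, 1.0, 1 / fs)
true_delay = 0.025               # 25 ms "transit time"
pulse = np.exp(-((t - 0.2) / 0.04) ** 2)                   # ascending-aorta-like curve
delayed = np.exp(-((t - 0.2 - true_delay) / 0.04) ** 2)    # descending-aorta-like curve

# TT-wave-style estimate: lag of the cross-correlation peak.
lags = np.arange(-len(t) + 1, len(t))
xc = np.correlate(delayed - delayed.mean(), pulse - pulse.mean(), mode="full")
tt_wave = lags[np.argmax(xc)] / fs

# TT-GD-style estimate: group delay of the "filter" Y/X, i.e. the negative
# slope of its unwrapped phase over a band where the input has energy.
X, Y = np.fft.rfft(pulse), np.fft.rfft(delayed)
freqs = np.fft.rfftfreq(len(t), 1 / fs)
band = (freqs > 0) & (freqs < 20.0)
phase = np.unwrap(np.angle(Y[band] / X[band]))
slope = np.polyfit(2 * np.pi * freqs[band], phase, 1)[0]
tt_gd = -slope                   # group delay = -dphi/domega
```

On clean signals both estimators recover the delay; the frequency-domain version degrades more gracefully when the waveforms are coarsely sampled, which is the behavior the study reports.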
Theory of nonstationary Hawkes processes
NASA Astrophysics Data System (ADS)
Tannenbaum, Neta Ravid; Burak, Yoram
2017-12-01
We expand the theory of Hawkes processes to the nonstationary case, in which the mutually exciting point processes receive time-dependent inputs. We derive an analytical expression for the time-dependent correlations, which can be applied to networks with arbitrary connectivity, and inputs with arbitrary statistics. The expression shows how the network correlations are determined by the interplay between the network topology, the transfer functions relating units within the network, and the pattern and statistics of the external inputs. We illustrate the correlation structure using several examples in which neural network dynamics are modeled as a Hawkes process. In particular, we focus on the interplay between internally and externally generated oscillations and their signatures in the spike and rate correlation functions.
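A univariate Hawkes process with an exponential kernel can be simulated with Ogata's thinning algorithm; the parameters below are arbitrary but subcritical (branching ratio alpha < 1), chosen only to illustrate the self-exciting rate inflation:

```python
import numpy as np

def simulate_hawkes(mu, alpha, beta, t_end, seed=6):
    """Ogata thinning for a univariate Hawkes process with intensity
    lambda(t) = mu + sum_i alpha * beta * exp(-beta * (t - t_i)).
    The branching ratio is alpha, so alpha < 1 gives a stationary process."""
    rng = np.random.default_rng(seed)
    events, t, excitation = [], 0.0, 0.0   # excitation = kernel sum at current t
    while t < t_end:
        lam_bar = mu + excitation          # valid bound: intensity only decays
        w = rng.exponential(1.0 / lam_bar)
        excitation *= np.exp(-beta * w)    # decay to the candidate time
        t += w
        if t >= t_end:
            break
        if rng.uniform() <= (mu + excitation) / lam_bar:
            events.append(t)
            excitation += alpha * beta     # jump contributed by the new event
    return np.array(events)

mu, alpha, beta = 0.5, 0.5, 2.0
events = simulate_hawkes(mu, alpha, beta, t_end=20_000.0)
rate = len(events) / 20_000.0
# Stationary mean rate of a (stationary-input) Hawkes process: mu / (1 - alpha)
```

Feeding such simulations a time-dependent `mu(t)` is the nonstationary setting the paper analyzes; the simulated correlation functions can then be checked against the analytical expressions.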
Device-independent tests of quantum channels
NASA Astrophysics Data System (ADS)
Dall'Arno, Michele; Brandsen, Sarah; Buscemi, Francesco
2017-03-01
We develop a device-independent framework for testing quantum channels. That is, we falsify a hypothesis about a quantum channel based only on an observed set of input-output correlations. Formally, the problem consists of characterizing the set of input-output correlations compatible with any arbitrary given quantum channel. For binary (i.e. two input symbols, two output symbols) correlations, we show that extremal correlations are always achieved by orthogonal encodings and measurements, irrespective of whether or not the channel preserves commutativity. We further provide a full, closed-form characterization of the sets of binary correlations in the case of: (i) any dihedrally covariant qubit channel (such as any Pauli and amplitude-damping channels) and (ii) any universally-covariant commutativity-preserving channel in an arbitrary dimension (such as any erasure, depolarizing, universal cloning and universal transposition channels).
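For one concrete special case, the input-output correlations of a qubit depolarizing channel under orthogonal encodings and a projective measurement can be computed directly (`lam` is the depolarizing parameter; this is a small density-matrix calculation, not the general device-independent framework):

```python
import numpy as np

def channel_correlations(lam):
    """p(y|x) for orthogonal encodings |0>, |1> and the computational-basis
    measurement, through the depolarizing channel rho -> lam*rho + (1-lam)*I/2."""
    states = [np.array([[1, 0], [0, 0]], dtype=complex),   # |0><0|
              np.array([[0, 0], [0, 1]], dtype=complex)]   # |1><1|
    effects = states                                       # projective POVM
    p = np.zeros((2, 2))
    for x, rho in enumerate(states):
        out = lam * rho + (1 - lam) * np.eye(2) / 2
        for y, effect in enumerate(effects):
            p[x, y] = np.real(np.trace(out @ effect))
    return p

p = channel_correlations(0.6)
# p(0|0) = lam + (1 - lam)/2 = (1 + lam)/2, i.e. 0.8 for lam = 0.6
```

Sweeping over encodings and measurements and collecting the resulting p(y|x) tables traces out the correlation set whose extremal points, per the abstract, are achieved by orthogonal encodings and measurements.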
Effects of input uncertainty on cross-scale crop modeling
NASA Astrophysics Data System (ADS)
Waha, Katharina; Huth, Neil; Carberry, Peter
2014-05-01
The quality of data on climate, soils and agricultural management in the tropics is in general low or data is scarce leading to uncertainty in process-based modeling of cropping systems. Process-based crop models are common tools for simulating crop yields and crop production in climate change impact studies, studies on mitigation and adaptation options or food security studies. Crop modelers are concerned about input data accuracy as this, together with an adequate representation of plant physiology processes and choice of model parameters, are the key factors for a reliable simulation. For example, assuming an error in measurements of air temperature, radiation and precipitation of ± 0.2°C, ± 2 % and ± 3 % respectively, Fodor & Kovacs (2005) estimate that this translates into an uncertainty of 5-7 % in yield and biomass simulations. In our study we seek to answer the following questions: (1) are there important uncertainties in the spatial variability of simulated crop yields on the grid-cell level displayed on maps, (2) are there important uncertainties in the temporal variability of simulated crop yields on the aggregated, national level displayed in time-series, and (3) how does the accuracy of different soil, climate and management information influence the simulated crop yields in two crop models designed for use at different spatial scales? The study will help to determine whether more detailed information improves the simulations and to advise model users on the uncertainty related to input data. We analyse the performance of the point-scale crop model APSIM (Keating et al., 2003) and the global scale crop model LPJmL (Bondeau et al., 2007) with different climate information (monthly and daily) and soil conditions (global soil map and African soil map) under different agricultural management (uniform and variable sowing dates) for the low-input maize-growing areas in Burkina Faso/West Africa. 
We test the models' response to different levels of input data from very little to very detailed information, and compare the models' abilities to represent the spatial variability and temporal variability in crop yields. We display the uncertainty in crop yield simulations from different input data and crop models in Taylor diagrams, which are a graphical summary of the similarity between simulations and observations (Taylor, 2001). The observed spatial variability can be represented well by both models (R=0.6-0.8), but APSIM predicts higher spatial variability than LPJmL due to its sensitivity to soil parameters. Simulations with the same crop model, climate and sowing dates have similar statistics and therefore similar skill in reproducing the observed spatial variability. Soil data is less important for the skill of a crop model in reproducing the observed spatial variability. However, the uncertainty in simulated spatial variability from the two crop models is larger than that from input data settings, and APSIM is more sensitive to input data than LPJmL. Even with a detailed, point-scale crop model and detailed input data it is difficult to capture the complexity and diversity in maize cropping systems.
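The statistics summarized in a Taylor diagram can be computed in a few lines; the "observed" and "simulated" yields below are synthetic and purely illustrative:

```python
import numpy as np

def taylor_stats(sim, obs):
    """Statistics underlying a Taylor diagram: correlation, standard
    deviations, and centered RMS difference (Taylor, 2001)."""
    sim_a, obs_a = sim - sim.mean(), obs - obs.mean()
    r = float(np.corrcoef(sim, obs)[0, 1])
    s_sim, s_obs = float(sim.std()), float(obs.std())
    crmsd = float(np.sqrt(np.mean((sim_a - obs_a) ** 2)))
    return r, s_sim, s_obs, crmsd

rng = np.random.default_rng(7)
obs = rng.normal(2.0, 0.6, 30)             # hypothetical observed district yields, t/ha
sim = 0.8 * obs + rng.normal(0, 0.3, 30)   # an imperfect model's simulated yields
r, s_sim, s_obs, crmsd = taylor_stats(sim, obs)
# Law of cosines linking the three diagram axes:
# crmsd^2 = s_sim^2 + s_obs^2 - 2 * s_sim * s_obs * r
```

Each (model, input-data) configuration contributes one point (r, s_sim) to the diagram, which is how the study compares APSIM and LPJmL under different input settings.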
A statistical survey of heat input parameters into the cusp thermosphere
NASA Astrophysics Data System (ADS)
Moen, J. I.; Skjaeveland, A.; Carlson, H. C.
2017-12-01
Based on three winters of observational data, we present those ionosphere parameters deemed most critical to realistic space weather ionosphere and thermosphere representation and prediction, in regions impacted by variability in the cusp. The CHAMP spacecraft revealed large variability in cusp thermosphere densities, measuring frequent satellite drag enhancements, up to doublings. The community recognizes a clear need for more realistic representation of plasma flows and electron densities near the cusp. Existing average-value models produce order-of-magnitude errors in these parameters, resulting in large underestimations of predicted drag. We fill this knowledge gap with statistics-based specification of these key parameters over their range of observed values. The EISCAT Svalbard Radar (ESR) tracks plasma flow Vi, electron density Ne, and electron and ion temperatures Te and Ti, with consecutive 2-3 minute windshield-wipe scans of 1000x500 km areas. This allows mapping the maximum Ti of a large area within or near the cusp with high temporal resolution. In magnetic field-aligned mode the radar can measure high-resolution profiles of these plasma parameters. By deriving statistics for Ne and Ti, we enable derivation of thermosphere heating deposition under background and frictional-drag-dominated magnetic reconnection conditions. We separate our Ne and Ti profiles into quiescent and enhanced states, which are not closely correlated due to the spatial structure of the reconnection foot point. Use of our data-based parameter inputs can make order-of-magnitude corrections to input data driving thermosphere models, enabling removal of previous twofold drag errors.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Groen, E.A.; Heijungs, R.
Life cycle assessment (LCA) is an established tool to quantify the environmental impact of a product. A good assessment of uncertainty is important for making well-informed decisions in comparative LCA, as well as for correctly prioritising data collection efforts. Under- or overestimation of output uncertainty (e.g. output variance) will lead to incorrect decisions in such matters. The presence of correlations between input parameters during uncertainty propagation can increase or decrease the output variance. However, most LCA studies that include uncertainty analysis ignore correlations between input parameters during uncertainty propagation, which may lead to incorrect conclusions. Two approaches to include correlations between input parameters during uncertainty propagation and global sensitivity analysis were studied: an analytical approach and a sampling approach. The use of both approaches is illustrated for an artificial case study of electricity production. Results demonstrate that both approaches yield approximately the same output variance and sensitivity indices for this specific case study. Furthermore, we demonstrate that the analytical approach can be used to quantify the risk of ignoring correlations between input parameters during uncertainty propagation in LCA. We demonstrate that: (1) we can predict if including correlations among input parameters in uncertainty propagation will increase or decrease output variance; (2) we can quantify the risk of ignoring correlations on the output variance and the global sensitivity indices. Moreover, this procedure requires only little data. Highlights: • Ignoring correlation leads to under- or overestimation of the output variance. • We demonstrated that the risk of ignoring correlation can be quantified. • The procedure proposed is generally applicable in life cycle assessment. • In some cases, ignoring correlation has a minimal effect on decision-making tools.
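At first order, the analytical approach amounts to propagating an input covariance matrix through the model's gradient; a minimal sketch with hypothetical gradients, standard deviations, and correlation (not the case study's actual inventory data):

```python
import numpy as np

def output_variance(grad, sd, corr):
    """First-order (Taylor) uncertainty propagation:
    var(y) ~ g^T C g, with C[i, j] = corr[i, j] * sd[i] * sd[j]."""
    grad, sd = np.asarray(grad, float), np.asarray(sd, float)
    cov = np.asarray(corr, float) * np.outer(sd, sd)
    return float(grad @ cov @ grad)

# Hypothetical two-parameter electricity-production model y = f(x1, x2):
grad = [2.0, 1.5]            # partial derivatives at the operating point
sd = [0.3, 0.4]              # input standard deviations
corr_ignored = np.eye(2)                           # independence assumption
corr_actual = np.array([[1.0, 0.8], [0.8, 1.0]])   # positively correlated inputs

var_indep = output_variance(grad, sd, corr_ignored)
var_corr = output_variance(grad, sd, corr_actual)
```

With same-sign gradients and positive correlation the cross term is positive, so assuming independence underestimates the output variance; flipping the sign of either the correlation or one gradient would reverse this, which is exactly the increase-or-decrease prediction the abstract highlights.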
Strategies for Interactive Visualization of Large Scale Climate Simulations
NASA Astrophysics Data System (ADS)
Xie, J.; Chen, C.; Ma, K.; Parvis
2011-12-01
With the advances in computational methods and supercomputing technology, climate scientists are able to perform large-scale simulations at unprecedented resolutions. These simulations produce data that are time-varying, multivariate, and volumetric, and the data may contain thousands of time steps with each time step having billions of voxels and each voxel recording dozens of variables. Visualizing such time-varying 3D data to examine correlations between different variables thus becomes a daunting task. We have been developing strategies for interactive visualization and correlation analysis of multivariate data. The primary task is to find connection and correlation among data. Given the many complex interactions among the Earth's oceans, atmosphere, land, ice and biogeochemistry, and the sheer size of observational and climate model data sets, interactive exploration helps identify which processes matter most for a particular climate phenomenon. We may consider time-varying data as a set of samples (e.g., voxels or blocks), each of which is associated with a vector of representative or collective values over time. We refer to such a vector as a temporal curve. Correlation analysis thus operates on temporal curves of data samples. A temporal curve can be treated as a two-dimensional function where the two dimensions are time and data value. It can also be treated as a point in the high-dimensional space. In this case, to facilitate effective analysis, it is often necessary to transform temporal curve data from the original space to a space of lower dimensionality. Clustering and segmentation of temporal curve data in the original or transformed space provides us a way to categorize and visualize data of different patterns, which reveals connection or correlation of data among different variables or at different spatial locations. 
We have employed the power of GPUs to enable interactive correlation visualization for studying the variability and correlations of a single or a pair of variables. It is desired to create a succinct volume classification that summarizes the connection among all correlation volumes with respect to various reference locations. Since a reference location must correspond to a voxel position, the number of correlation volumes equals the total number of voxels. A brute-force solution takes all correlation volumes as the input and classifies their corresponding voxels according to their correlation volumes' distance. For large-scale time-varying multivariate data, calculating all these correlation volumes on-the-fly and analyzing the relationships among them is not feasible. We have developed a sampling-based approach for volume classification in order to reduce the cost of computing the correlation volumes. Users are able to employ their domain knowledge in selecting important samples. The result is a static view that captures the essence of correlation relationships; i.e., for all voxels in the same cluster, their corresponding correlation volumes are similar. This sampling-based approach enables us to obtain an approximation of correlation relations in a cost-effective manner, thus leading to a scalable solution for investigating large-scale data sets. These techniques empower climate scientists to study large data sets from their simulations.
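As a rough illustration of the sampling-based classification idea (not the authors' GPU implementation; the volume size, sample count, and nearest-reference clustering rule are all invented for this sketch), each voxel's temporal curve can be correlated against a few sampled reference curves, and voxels grouped by their most similar reference:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: an 8x8x8 volume, each voxel holding a temporal curve of length 50.
nx = ny = nz = 8
t = 50
curves = rng.standard_normal((nx * ny * nz, t))

# Choose a few sample reference voxels (in practice guided by domain knowledge).
n_samples = 4
refs = rng.choice(curves.shape[0], size=n_samples, replace=False)

def corr_matrix(a, b):
    """Pearson correlation of each row of a with each row of b."""
    a = (a - a.mean(1, keepdims=True)) / a.std(1, keepdims=True)
    b = (b - b.mean(1, keepdims=True)) / b.std(1, keepdims=True)
    return a @ b.T / a.shape[1]

# Each voxel's feature vector: its correlations with the sampled reference curves.
features = corr_matrix(curves, curves[refs])   # shape (n_voxels, n_samples)

# Group voxels by their most similar reference: voxels in the same cluster have
# similar correlation volumes, approximating the brute-force classification.
labels = features.argmax(axis=1)
```

The expensive step avoided here is correlating every voxel against every other voxel; only `n_samples` reference correlations per voxel are computed.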
NASA Technical Reports Server (NTRS)
Crosson, William L.; Smith, Eric A.
1992-01-01
The behavior of in situ measurements of surface fluxes obtained during FIFE 1987 is examined by using correlative and spectral techniques in order to assess the significance of fluctuations on various time scales, from subdiurnal up to synoptic, intraseasonal, and annual scales. The objectives of this analysis are: (1) to determine which temporal scales have a significant impact on areally averaged fluxes and (2) to design a procedure for filtering an extended flux time series that preserves the basic diurnal features and longer time scales while removing high-frequency noise that cannot be attributed to site-induced variation. These objectives are accomplished through the use of a two-dimensional cross-time Fourier transform, which serves to separate processes inherently related to diurnal and subdiurnal variability from those which impact flux variations on the longer time scales. A filtering procedure is desirable before the measurements are utilized as input to an experimental biosphere model, to ensure that model-based intercomparisons at multiple sites are uncontaminated by input variance not related to true site behavior. Analysis of the spectral decomposition indicates that subdiurnal time scales having periods shorter than 6 hours have little site-to-site consistency and therefore little impact on areally integrated fluxes.
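The filtering step can be sketched with a simple frequency-domain low-pass filter that removes components with periods shorter than 6 hours; the half-hourly synthetic flux series below is an illustrative assumption, not the FIFE data or the authors' exact two-dimensional cross-time transform:

```python
import numpy as np

# Synthetic half-hourly flux series over 30 days: a diurnal cycle plus noise.
dt_hours = 0.5
n = int(30 * 24 / dt_hours)
time_h = np.arange(n) * dt_hours
rng = np.random.default_rng(1)
flux = 100 * np.sin(2 * np.pi * time_h / 24) + 10 * rng.standard_normal(n)

# Low-pass filter in the frequency domain: zero all Fourier components with
# periods shorter than 6 hours, preserving diurnal and longer time scales.
spectrum = np.fft.rfft(flux)
freqs = np.fft.rfftfreq(n, d=dt_hours)   # in cycles per hour
cutoff = 1.0 / 6.0                        # 6-hour period
spectrum[freqs > cutoff] = 0.0
flux_filtered = np.fft.irfft(spectrum, n=n)
```

The diurnal component (period 24 h, frequency 1/24 cycles per hour) lies below the cutoff and passes through unchanged, while most of the broadband noise is removed.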
Alagha, Jawad S; Said, Md Azlin Md; Mogheir, Yunes
2014-01-01
Nitrate concentration in groundwater is influenced by complex and interrelated variables, leading to great difficulty during the modeling process. The objectives of this study are (1) to evaluate the performance of two artificial intelligence (AI) techniques, namely artificial neural networks and support vector machine, in modeling groundwater nitrate concentration using scant input data, as well as (2) to assess the effect of data clustering as a pre-modeling technique on the developed models' performance. The AI models were developed using data from 22 municipal wells of the Gaza coastal aquifer in Palestine from 2000 to 2010. Results indicated high simulation performance, with the correlation coefficient and the mean average percentage error of the best model reaching 0.996 and 7 %, respectively. The variables that strongly influenced groundwater nitrate concentration were previous nitrate concentration, groundwater recharge, and on-ground nitrogen load of each land use land cover category in the well's vicinity. The results also demonstrated the merit of performing clustering of input data prior to the application of AI models. With their high performance and simplicity, the developed AI models can be effectively utilized to assess the effects of future management scenarios on groundwater nitrate concentration, leading to more reasonable groundwater resources management and decision-making.
Statistics of optimal information flow in ensembles of regulatory motifs
NASA Astrophysics Data System (ADS)
Crisanti, Andrea; De Martino, Andrea; Fiorentino, Jonathan
2018-02-01
Genetic regulatory circuits universally cope with different sources of noise that limit their ability to coordinate input and output signals. In many cases, optimal regulatory performance can be thought to correspond to configurations of variables and parameters that maximize the mutual information between inputs and outputs. Since the mid-2000s, such optima have been well characterized in several biologically relevant cases. Here we use methods of statistical field theory to calculate the statistics of the maximal mutual information (the "capacity") achievable by tuning the input variable only in an ensemble of regulatory motifs, such that a single controller regulates N targets. Assuming (i) sufficiently large N , (ii) quenched random kinetic parameters, and (iii) small noise affecting the input-output channels, we can accurately reproduce numerical simulations both for the mean capacity and for the whole distribution. Our results provide insight into the inherent variability in effectiveness occurring in regulatory systems with heterogeneous kinetic parameters.
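The paper's field-theoretic calculation is not reproduced here, but the notion of channel capacity it builds on (maximal mutual information over the input distribution) can be illustrated with the standard Blahut-Arimoto algorithm for a discrete channel; the binary symmetric channel below is an assumed toy example:

```python
import numpy as np

def blahut_arimoto(p_y_given_x, tol=1e-9, max_iter=1000):
    """Capacity (bits) of a discrete channel p(y|x), maximizing over the input distribution."""
    nx = p_y_given_x.shape[0]
    p_x = np.full(nx, 1.0 / nx)
    for _ in range(max_iter):
        # Posterior q(x|y) proportional to p(x) p(y|x).
        q = p_x[:, None] * p_y_given_x
        q /= q.sum(axis=0, keepdims=True)
        # Update p(x) proportional to exp( sum_y p(y|x) log q(x|y) ).
        log_r = (p_y_given_x * np.log(q + 1e-300)).sum(axis=1)
        r = np.exp(log_r)
        p_new = r / r.sum()
        converged = np.abs(p_new - p_x).max() < tol
        p_x = p_new
        if converged:
            break
    # Capacity = mutual information at the optimizing input distribution.
    p_y = p_x @ p_y_given_x
    mi = (p_x[:, None] * p_y_given_x
          * np.log2((p_y_given_x + 1e-300) / (p_y + 1e-300))).sum()
    return mi, p_x

# Binary symmetric channel with flip probability 0.1: capacity = 1 - H2(0.1).
eps = 0.1
bsc = np.array([[1 - eps, eps], [eps, 1 - eps]])
capacity, p_opt = blahut_arimoto(bsc)
```

For the symmetric channel the optimum is the uniform input; in the regulatory-motif setting the analogous quantity is maximized over the input variable's distribution for each draw of the quenched kinetic parameters.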
Missing pulse detector for a variable frequency source
Ingram, Charles B.; Lawhorn, John H.
1979-01-01
A missing pulse detector is provided which has the capability of monitoring a varying frequency pulse source to detect the loss of a single pulse or total loss of signal from the source. A frequency-to-current converter is used to program the output pulse width of a variable period retriggerable one-shot to maintain a pulse width slightly longer than one-half the present monitored pulse period. The retriggerable one-shot is triggered at twice the input pulse rate by employing a frequency doubler circuit connected between the one-shot input and the variable frequency source being monitored. The one-shot remains in the triggered or unstable state under normal conditions even though the source period is varying. A loss of an input pulse or single period of a fluctuating signal input will cause the one-shot to revert to its stable state, changing the output signal level to indicate a missing pulse or signal.
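A hypothetical software emulation of this scheme (the timestamps, the midpoint-insertion doubler, and the 10% window margin are all illustrative choices, not the circuit's actual parameters) might look like:

```python
def detect_missing_pulses(pulse_times, margin=1.1):
    """Illustrative software analogue of the hardware scheme: frequency-double the
    pulse stream, then flag any gap that outlasts a retriggerable window kept
    slightly longer than half the present pulse period."""
    # Frequency doubler: insert a midpoint event between consecutive received pulses.
    doubled = []
    for a, b in zip(pulse_times, pulse_times[1:]):
        doubled += [a, (a + b) / 2]
    doubled.append(pulse_times[-1])

    missing = []
    period = None
    for a, b in zip(doubled, doubled[1:]):
        gap = b - a
        # One-shot window = margin * (half the tracked period); a longer gap means
        # the one-shot reverted to its stable state before being retriggered.
        if period is not None and gap > margin * period / 2:
            missing.append(b)
        period = 2 * gap   # track the present pulse period from the doubled stream
    return missing
```

Because the window tracks the monitored period, a slowly drifting source frequency keeps the detector retriggered, while the loss of a single pulse doubles one gap in the doubled stream and trips the window.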
Input and language development in bilingually developing children.
Hoff, Erika; Core, Cynthia
2013-11-01
Language skills in young bilingual children are highly varied as a result of the variability in their language experiences, making it difficult for speech-language pathologists to differentiate language disorder from language difference in bilingual children. Understanding the sources of variability in bilingual contexts and the resulting variability in children's skills will help improve language assessment practices by speech-language pathologists. In this article, we review literature on bilingual first language development for children under 5 years of age. We describe the rate of development in single and total language growth, we describe effects of quantity of input and quality of input on growth, and we describe effects of family composition on language input and language growth in bilingual children. We provide recommendations for language assessment of young bilingual children and consider implications for optimizing children's dual language development.
NASA Astrophysics Data System (ADS)
Kondapalli, S. P.
2017-12-01
In the present work, pulsed current microplasma arc welding is carried out on AISI 321 austenitic stainless steel of 0.3 mm thickness. Peak current, base current, pulse rate, and pulse width are chosen as the input variables, whereas grain size and hardness are considered as output responses. The response surface method is adopted using a Box-Behnken design, and 27 experiments in total are performed. An empirical relation between the input variables and output responses is developed using statistical software, and its adequacy is checked with analysis of variance (ANOVA) at the 95% confidence level. The main effects and interaction effects of the input variables on the output responses are also studied.
State-Space System Realization with Input- and Output-Data Correlation
NASA Technical Reports Server (NTRS)
Juang, Jer-Nan
1997-01-01
This paper introduces a general version of the information matrix consisting of the autocorrelation and cross-correlation matrices of the shifted input and output data. Based on the concept of data correlation, a new system realization algorithm is developed to create a model directly from input and output data. The algorithm starts by computing a special type of correlation matrix derived from the information matrix. The special correlation matrix provides information on the system-observability matrix and the state-vector correlation. A system model is then developed from the observability matrix in conjunction with other algebraic manipulations. This approach leads to several different algorithms for computing system matrices for use in representing the system model. The relationship of the new algorithms with other realization algorithms in the time and frequency domains is established with matrix factorization of the information matrix. Several examples are given to illustrate the validity and usefulness of these new algorithms.
NASA Astrophysics Data System (ADS)
Galford, G. L.; Fiske, G. J.; Sedano, F.; Michelson, H.
2016-12-01
Agriculture in sub-Saharan Africa is characterized by smallholder production and low yields (<1 ton ha-1 year-1 since records began in 1961) for staple food crops such as maize (Zea mays). Many years of low-input farming have depleted much of the region's agricultural land of critical soil carbon and nitrogen, further reducing yield potentials. Malawi is a 98,000 km2 subtropical nation with a short rainy season from November to May, with most rainfall occurring between December and mid-April. This short growing season supports the cultivation of one primary crop, maize. In Malawi, many smallholder farmers face annual nutrient deficits as nutrients removed as grain harvest and residues are beyond replenishment levels. As a result, Malawi has had stagnant maize yields averaging 1.2 ton ha-1 year-1 for decades. After multiple years of drought and widespread hunger in the early 2000s, Malawi introduced an agricultural input support program (fertilizer and seed subsidy) in time for the 2006 harvest that was designed to restore soil nutrients, improve maize production, and decrease dependence on food aid. Malawi's subsidy program targets 50-67% of smallholder farmers who cultivate half a hectare or less, yet collectively supply 80% of the country's maize. The country has achieved significant increases in crop yields (now ~2 ton ha-1 year-1) and, as our analysis shows, benefited from a new resilience against drought. We utilized Landsat time series to determine cropland extent from 2000 to the present and identify areas of marginal and/or intermittent production. We found a strong latitudinal gradient of precipitation variability from north to south in CHIRPS data. We used the precipitation variability to normalize trends in a productivity proxy derived from MODIS EVI. After normalization of productivity to precipitation variability, we found significant productivity trends correlated to subsidy distribution. 
This work was conducted with Google Earth Engine, a cloud-based platform for data storage and analysis that achieves unprecedented speed and efficiency by making use of Google's computing infrastructure.
Assessment of uncertainties of the models used in thermal-hydraulic computer codes
NASA Astrophysics Data System (ADS)
Gricay, A. S.; Migrov, Yu. A.
2015-09-01
The article deals with the problem of determining the statistical characteristics of variable parameters (their variation range and distribution law) when analyzing the uncertainty and sensitivity of calculation results to uncertainty in input data. A comparative analysis of modern approaches to treating uncertainty in input data is presented. The need is shown to develop an alternative method for estimating the uncertainty of model parameters used in thermal-hydraulic computer codes, in particular in the closing correlations of the loop thermal-hydraulics block. Such a method should involve minimal subjectivity and be based on objective quantitative assessment criteria. The method includes three sequential stages: selecting experimental data satisfying the specified criteria, identifying the key closing correlation using a sensitivity analysis, and carrying out case calculations followed by statistical processing of the results. By using the method, one can estimate the uncertainty range of a variable parameter and establish its distribution law within that range, provided that the experimental information is sufficiently representative. Practical application of the method is demonstrated for the problem of estimating the uncertainty of a parameter in the model describing the transition to post-burnout heat transfer used in the thermal-hydraulic computer code KORSAR. The study revealed the need to narrow the previously established uncertainty range of this parameter and to replace the uniform distribution law over that range with a Gaussian distribution law. The proposed method can be applied to different thermal-hydraulic computer codes. In some cases, its application can achieve a smaller degree of conservatism in the expert estimates of uncertainties in the model parameters used in computer codes.
Mix or un-mix? Trace element segregation from a heterogeneous mantle, simulated.
NASA Astrophysics Data System (ADS)
Katz, R. F.; Keller, T.; Warren, J. M.; Manley, G.
2016-12-01
Incompatible trace-element concentrations vary in mid-ocean ridge lavas and melt inclusions by an order of magnitude or more, even in samples from the same location. This variability has been attributed to channelised melt flow [Spiegelman & Kelemen, 2003], which brings enriched, low-degree melts to the surface in relative isolation from depleted inter-channel melts. We re-examine this hypothesis using a new melting-column model that incorporates mantle volatiles [Keller & Katz, 2016]. Volatiles cause a deeper onset of channelisation: their corrosivity is maximal at the base of the silicate melting regime. We consider how source heterogeneity and melt transport shape trace-element concentrations in basaltic lavas. We use both equilibrium and non-equilibrium formulations [Spiegelman, 1996]. In particular, we evaluate the effect of melt transport on probability distributions of trace element concentration, comparing the inflow distribution in the mantle with the outflow distribution in the magma. Which features of melt transport preserve, erase or overprint input correlations between elements? To address this we consider various hypotheses about mantle heterogeneity, allowing for spatial structure in major components, volatiles and trace elements. Of interest are the roles of wavelength, amplitude, and correlation of heterogeneity fields. To investigate how different modes of melt transport affect input distributions, we compare melting models that produce either shallow or deep channelisation, or none at all.
References: Keller & Katz (2016), The Role of Volatiles in Reactive Melt Transport in the Asthenosphere, Journal of Petrology, http://doi.org/10.1093/petrology/egw030. Spiegelman (1996), Geochemical consequences of melt transport in 2-D: The sensitivity of trace elements to mantle dynamics, Earth and Planetary Science Letters, 139, 115-132. Spiegelman & Kelemen (2003), Extreme chemical variability as a consequence of channelized melt transport, Geochemistry Geophysics Geosystems, http://doi.org/10.1029/2002GC000336.
Technical Efficiency of Automotive Industry Cluster in Chennai
NASA Astrophysics Data System (ADS)
Bhaskaran, E.
2012-07-01
Chennai is often called the Detroit of India owing to its automotive industry, which produces over 40% of India's vehicles and components. During 2001-2002, a diagnostic study of the Automotive Component Industries (ACI) in the Ambattur Industrial Estate, Chennai, found through SWOT analysis that they faced problems in infrastructure, technology, procurement, production and marketing. In 2004-2005, under the cluster development approach (CDA) and a public-private partnership concept, they formed the Chennai auto cluster, receiving a grant from the Government of India, the Government of Tamil Nadu and Ambattur Municipality, along with bank loans and stakeholder contributions. This resulted in the development of infrastructure, technology, procurement, production and marketing interrelationships among the ACI. The objective is to determine the correlation coefficient, regression equation, technical efficiency, peer weights, slack variables and returns to scale of the cluster before and after the CDA. The methodology adopted is the collection of primary data from the ACI and analysis using the input-oriented Banker-Charnes-Cooper model of data envelopment analysis (DEA). There is a significant increase in the correlation coefficient, and the regression analysis reveals that for a one percent increase in employment and net worth, the gross output increases significantly after the CDA. The DEA solver gives the technical efficiency of the ACI, taking shift, employment and net worth as input data and quality, gross output and export ratio as output data. From the technical scores and ranking of the ACI, it is found that technical efficiency increased significantly after the CDA. The slack variables obtained clearly reveal excess employment and net worth and no shortage of gross output. To conclude, there is an increase in the technical efficiency not only of the Chennai auto cluster in general but also of the Chennai auto component industries in particular.
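The input-oriented Banker-Charnes-Cooper (BCC) efficiency score used in such DEA studies can be sketched as a linear program; the firm data below are toy numbers for illustration, not the Chennai cluster data:

```python
import numpy as np
from scipy.optimize import linprog

def bcc_efficiency(X, Y, j0):
    """Input-oriented BCC (variable returns to scale) efficiency of DMU j0.
    X: inputs, shape (m_inputs, n_dmus); Y: outputs, shape (s_outputs, n_dmus).
    Solves: min theta  s.t.  X @ lam <= theta * x0,  Y @ lam >= y0,
            sum(lam) = 1,  lam >= 0."""
    m, n = X.shape
    s = Y.shape[0]
    c = np.r_[1.0, np.zeros(n)]                  # minimize theta
    A_ub = np.zeros((m + s, n + 1))
    A_ub[:m, 0] = -X[:, j0]                      # X lam - theta x0 <= 0
    A_ub[:m, 1:] = X
    A_ub[m:, 1:] = -Y                            # -Y lam <= -y0
    b_ub = np.r_[np.zeros(m), -Y[:, j0]]
    A_eq = np.r_[0.0, np.ones(n)].reshape(1, -1)  # sum(lam) = 1 (VRS)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(None, None)] + [(0, None)] * n)
    return res.x[0]

# Toy data: 4 firms, inputs = (employment, net worth), output = gross output.
X = np.array([[2.0, 4.0, 6.0, 3.0],
              [3.0, 2.0, 5.0, 6.0]])
Y = np.array([[1.0, 1.0, 2.0, 1.0]])
scores = [bcc_efficiency(X, Y, j) for j in range(4)]
```

A score of 1 marks a firm on the efficient frontier; a score below 1 gives the proportional input contraction that a convex combination of peer firms could achieve at the same output.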
The impact of AMO and NAO in Western Iberia during the Late Holocene
NASA Astrophysics Data System (ADS)
Hernandez, A.; Leira, M.; Trigo, R.; Vázquez-Loureiro, D.; Carballeira, R.; Sáez, A.
2016-12-01
High mountain lakes in the Iberian Peninsula are particularly sensitive to the influence of North Atlantic large-scale modes of climate variability due to their geographical position and reduced anthropogenic disturbance. In this context, Serra da Estrela (Portugal), the westernmost range of the Sistema Central, constitutes a physical barrier to air masses coming from the Atlantic Ocean. However, long-term climate reconstructions have not yet been conducted. We present a climate reconstruction of this region based on facies analysis, X-ray fluorescence core scanning, elemental and isotope geochemistry on bulk organic matter and a preliminary study of diatom assemblages from the sedimentary record of Lake Peixão (1677 m a.s.l.; Serra da Estrela) for the last ca. 3500 years. A multivariate statistical analysis has been performed to recognize the main environmental factors controlling the sedimentary infill. Our results reveal that two main processes explain 70% of the total variance: changes in primary productivity, reflected in organic matter accumulation, and variations in runoff, related to external particle input, explain 53% and 17%, respectively. Additionally, evidence of changes in productivity and water-level regime, recorded as variations in diatom assemblages, correlates well with our interpretations. A comparison between the lake productivity changes and previous Atlantic Multidecadal Oscillation (AMO) reconstructions shows a good correlation, suggesting this climate mode is the main driver of lacustrine primary productivity at multi-decadal scales. In turn, changes in terrigenous inputs, linked to precipitation, seem to be more influenced by the winter North Atlantic Oscillation (NAO) variability. Hence, our results highlight that although the climate regime in this area is clearly influenced by the NAO, the AMO also plays a key role at long-term time-scales.
NASA Astrophysics Data System (ADS)
Bhaskar, Ankush; Ramesh, Durbha Sai; Vichare, Geeta; Koganti, Triven; Gurubaran, S.
2017-12-01
Identification and quantification of the possible drivers of recent global temperature variability remains a challenging task. This important issue is addressed here by adopting a non-parametric information theory technique, the transfer entropy, and its normalized variant. It distinctly quantifies the actual information exchanged along with the directional flow of information between any two variables, with no bearing on their common history or inputs, unlike correlation, mutual information, etc. Measurements of greenhouse gases (CO2, CH4 and N2O), volcanic aerosols, solar activity (UV radiation, total solar irradiance (TSI) and cosmic ray flux (CR)), the El Niño Southern Oscillation (ENSO) and the Global Mean Temperature Anomaly (GMTA) made during 1984-2005 are utilized to distinguish the driving and responding signals of global temperature variability. Estimates of their relative contributions reveal that CO2 (~24%), CH4 (~19%) and volcanic aerosols (~23%) are the primary contributors to the observed variations in GMTA. UV (~9%) and ENSO (~12%) act as secondary drivers, while the remaining factors play a marginal role in the observed recent global temperature variability. Interestingly, ENSO and GMTA mutually drive each other at varied time lags. This study assists future modelling efforts in climate science.
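A minimal sketch of (unnormalized) transfer entropy with history length 1, assuming quantile binning and synthetic series in which x drives y at a one-step lag; none of this reproduces the paper's data or its normalized variant:

```python
import numpy as np
from collections import Counter

def transfer_entropy(x, y, bins=3):
    """Transfer entropy T(X -> Y) in bits with history length 1:
    T = sum p(yn, yp, xp) * log2[ p(yn | yp, xp) / p(yn | yp) ],
    estimated from counts after quantile binning of both series."""
    def discretize(s):
        edges = np.quantile(s, np.linspace(0, 1, bins + 1)[1:-1])
        return np.digitize(s, edges)

    xd, yd = discretize(x), discretize(y)
    yn, yp, xp = yd[1:], yd[:-1], xd[:-1]   # y-next, y-past, x-past
    n = len(yn)
    c3 = Counter(zip(yn, yp, xp))
    c_ypxp = Counter(zip(yp, xp))
    c_ynyp = Counter(zip(yn, yp))
    c_yp = Counter(yp)
    te = 0.0
    for (a, b, c), k in c3.items():
        # The sample-size factors cancel in the log ratio.
        te += (k / n) * np.log2(k * c_yp[b] / (c_ypxp[(b, c)] * c_ynyp[(a, b)]))
    return te

# x drives y with a one-step lag; z is an independent control series.
rng = np.random.default_rng(2)
x = rng.standard_normal(5000)
y = np.roll(x, 1) + 0.1 * rng.standard_normal(5000)
z = rng.standard_normal(5000)
te_xy = transfer_entropy(x, y)
te_zy = transfer_entropy(z, y)
```

The asymmetry is the point: the driver-to-response direction carries substantial information, while an unrelated series transfers essentially none (up to a small finite-sample bias).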
Estimating maize production in Kenya using NDVI: Some statistical considerations
Lewis, J.E.; Rowland, James; Nadeau, A.
1998-01-01
A regression model approach using a normalized difference vegetation index (NDVI) has the potential for estimating crop production in East Africa. However, before production estimation can become a reality, the underlying model assumptions and statistical nature of the sample data (NDVI and crop production) must be examined rigorously. Annual maize production statistics from 1982-90 for 36 agricultural districts within Kenya were used as the dependent variable; median area NDVI (independent variable) values from each agricultural district and year were extracted from the annual maximum NDVI data set. The input data and the statistical association of NDVI with maize production for Kenya were tested systematically for the following items: (1) homogeneity of the data when pooling the sample, (2) gross data errors and influence points, (3) serial (time) correlation, (4) spatial autocorrelation and (5) stability of the regression coefficients. The results of using a simple regression model with NDVI as the only independent variable are encouraging (r = 0.75, p < 0.05) and illustrate that NDVI can be a responsive indicator of maize production, especially in areas of high NDVI spatial variability, which coincide with areas of production variability in Kenya.
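The simple regression step can be sketched as follows; the NDVI and production numbers are synthetic stand-ins, not the Kenyan district data:

```python
import numpy as np
from scipy import stats

# Illustrative data: 36 district-years of median NDVI vs maize production.
rng = np.random.default_rng(3)
ndvi = rng.uniform(0.2, 0.7, 36)
production = 500 + 2000 * ndvi + rng.normal(0, 100, 36)   # tons, hypothetical

# Simple linear regression with NDVI as the only independent variable;
# rvalue is the correlation coefficient, pvalue tests slope != 0.
res = stats.linregress(ndvi, production)
r, p = res.rvalue, res.pvalue
```

In practice the diagnostic checks listed in the abstract (pooling homogeneity, influence points, serial and spatial correlation, coefficient stability) would precede any use of r and p for inference.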
Ventricular repolarization variability for hypoglycemia detection.
Ling, Steve; Nguyen, H T
2011-01-01
Hypoglycemia is the most acute and common complication of Type 1 diabetes and is a limiting factor in the glycemic management of diabetes. In this paper, two main contributions are presented: firstly, ventricular repolarization variabilities are introduced for hypoglycemia detection, and secondly, a swarm-based support vector machine (SVM) algorithm with the repolarization variabilities as inputs is developed to detect hypoglycemia. By using the algorithm and including several repolarization variabilities as inputs, the best hypoglycemia detection performance is found, with a sensitivity and specificity of 82.14% and 60.19%, respectively.
A Discrete Fracture Network Model with Stress-Driven Nucleation and Growth
NASA Astrophysics Data System (ADS)
Lavoine, E.; Darcel, C.; Munier, R.; Davy, P.
2017-12-01
The realism of Discrete Fracture Network (DFN) models, beyond their bulk statistical properties, relies on the spatial organization of fractures, which is not captured by purely stochastic DFN models. Realism can be improved by injecting prior information into the DFN from better knowledge of the geological fracturing processes. We first develop a model using simple kinematic rules to mimic the growth of fractures from nucleation to arrest, in order to evaluate the consequences of the DFN structure for network connectivity and flow properties. The model generates fracture networks with power-law scaling distributions and a percentage of T-intersections that are consistent with field observations. Nevertheless, a larger complexity relying on the spatial variability of natural fracture positions cannot be explained by the random nucleation process. We propose to introduce stress-driven nucleation into the timewise process of this kinematic model to study the correlations between nucleation, growth and existing fracture patterns. The method uses the stress field generated by existing fractures and the remote stress as an input for Monte Carlo sampling of nuclei centers at each time step. Networks so generated are found to have correlations over a large range of scales, with a correlation dimension that varies with time and with the function relating nucleation probability to stress. A sensitivity analysis of the input parameters has been performed in 3D to quantify the influence of fracture and remote stress field orientations.
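The stress-weighted Monte Carlo nucleation step might be sketched as sampling grid cells with probability proportional to a power of the local stress; the 2D stress field, grid size, and exponent below are invented for illustration (the paper works in 3D with stress from existing fractures plus remote stress):

```python
import numpy as np

rng = np.random.default_rng(6)

# Hypothetical 2D "stress" field on a grid, standing in for the perturbation
# from existing fractures plus remote stress: one localized high-stress patch.
nx = ny = 64
xg, yg = np.meshgrid(np.linspace(0, 1, nx), np.linspace(0, 1, ny))
stress = np.exp(-((xg - 0.3) ** 2 + (yg - 0.7) ** 2) / 0.02)

def sample_nuclei(stress, n, beta=2.0, rng=rng):
    """Monte Carlo sampling of nuclei cells with probability proportional to stress**beta.
    beta plays the role of the function relating nucleation probability to stress."""
    w = stress.ravel() ** beta
    p = w / w.sum()
    idx = rng.choice(w.size, size=n, p=p)
    return np.unravel_index(idx, stress.shape)

rows, cols = sample_nuclei(stress, 500)
```

Raising `beta` concentrates nucleation more tightly in high-stress regions, which is one knob through which the resulting networks acquire spatial correlations.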
Evaluating the effects of variable water chemistry on bacterial transport during infiltration.
Zhang, Haibo; Nordin, Nahjan Amer; Olson, Mira S
2013-07-01
Bacterial infiltration through the subsurface has been studied experimentally under different conditions of interest and is dependent on a variety of physical, chemical and biological factors. However, most bacterial transport studies fail to adequately represent the complex processes occurring in natural systems. Bacteria are frequently detected in stormwater runoff, and may present a risk of microbial contamination during stormwater recharge into groundwater. Mixing of stormwater runoff with groundwater during infiltration results in changes in local solution chemistry, which may lead to changes in both bacterial and collector surface properties and subsequent bacterial attachment rates. This study focuses on quantifying changes in bacterial transport behavior under variable solution chemistry, and on comparing the influences of chemical variability and physical variability on bacterial attachment rates. Bacterial attachment rate at the soil-water interface was predicted analytically using a combined rate equation, which varies temporally and spatially with respect to changes in solution chemistry. Two-phase Monte Carlo analysis was conducted and an overall input-output correlation coefficient was calculated to quantitatively describe the importance of physicochemical variation for the estimates of attachment rate. Among physical variables, soil particle size has the highest correlation coefficient, followed by porosity of the soil media, bacterial size and flow velocity. Among chemical variables, ionic strength has the highest correlation coefficient. A semi-reactive microbial transport model was developed within HP1 (HYDRUS1D-PHREEQC) and applied to column transport experiments with constant and variable solution chemistries. 
Bacterial attachment rates varied from 9.10×10⁻³ min⁻¹ to 3.71×10⁻³ min⁻¹ due to mixing of synthetic stormwater (SSW) with artificial groundwater (AGW), while bacterial attachment remained constant at 9.10×10⁻³ min⁻¹ under constant solution chemistry (AGW only). The model matched observed bacterial breakthrough curves well. Although limitations exist in the application of a semi-reactive microbial transport model, this method represents one step towards a more realistic model of bacterial transport in complex microbial-water-soil systems.
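The Monte Carlo input-output correlation analysis can be sketched as follows, with a toy attachment-rate model and invented parameter ranges standing in for the study's combined rate equation (the resulting sensitivity ordering is a property of the toy model, not of the actual system):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 10000

# Hypothetical physical and chemical inputs (ranges are illustrative only).
particle_d = rng.uniform(0.2e-3, 1.0e-3, n)   # soil particle diameter, m
porosity = rng.uniform(0.30, 0.45, n)
velocity = rng.uniform(2e-5, 6e-5, n)          # pore-water velocity, m/s
ionic = rng.uniform(1e-3, 1e-1, n)             # ionic strength, M

# Toy attachment-rate model in the spirit of colloid filtration theory:
# katt proportional to (1 - porosity) / d * v * alpha, with a sticking
# efficiency alpha that saturates with ionic strength.
alpha = ionic / (ionic + 0.01)
katt = 1.5 * (1 - porosity) / particle_d * velocity * alpha

# Input-output correlation coefficients as a simple sensitivity measure.
inputs = {"particle_d": particle_d, "porosity": porosity,
          "velocity": velocity, "ionic": ionic}
sensitivity = {k: np.corrcoef(v, katt)[0, 1] for k, v in inputs.items()}
```

Inputs whose sampled range drives a larger relative spread in the output pick up a larger absolute correlation, which is the logic behind ranking physical against chemical variables in the study.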
The Role of Learner and Input Variables in Learning Inflectional Morphology
ERIC Educational Resources Information Center
Brooks, Patricia J.; Kempe, Vera; Sionov, Ariel
2006-01-01
To examine effects of input and learner characteristics on morphology acquisition, 60 adult English speakers learned to inflect masculine and feminine Russian nouns in nominative, dative, and genitive cases. By varying training vocabulary size (i.e., type variability), holding constant the number of learning trials, we tested whether learners…
Wideband low-noise variable-gain BiCMOS transimpedance amplifier
NASA Astrophysics Data System (ADS)
Meyer, Robert G.; Mack, William D.
1994-06-01
A new monolithic variable-gain transimpedance amplifier is described. The circuit is realized in BiCMOS technology and has a measured gain of 98 kΩ, a bandwidth of 128 MHz, an input noise current spectral density of 1.17 pA/√Hz, and an input signal-current handling capability of 3 mA.
Integrated controls design optimization
Lou, Xinsheng; Neuschaefer, Carl H.
2015-09-01
A control system (207) for optimizing a chemical looping process of a power plant includes an optimizer (420), an income algorithm (230), a cost algorithm (225), and chemical looping process models. The process models are used to predict the process outputs from the process input variables. Some of the process input and output variables are related to the income of the plant; others are related to the cost of plant operations. The income algorithm (230) provides an income input to the optimizer (420) based on a plurality of input parameters (215) of the power plant. The cost algorithm (225) provides a cost input to the optimizer (420) based on a plurality of output parameters (220) of the power plant. The optimizer (420) determines an optimized operating parameter solution based on at least one of the income input and the cost input, and supplies the optimized operating parameter solution to the power plant.
Delineation of marine ecosystem zones in the northern Arabian Sea during winter
NASA Astrophysics Data System (ADS)
Shalin, Saleem; Samuelsen, Annette; Korosov, Anton; Menon, Nandini; Backeberg, Björn C.; Pettersson, Lasse H.
2018-03-01
The spatial and temporal variability of marine autotrophic abundance, expressed as chlorophyll concentration, is monitored from space and used to delineate the surface signature of marine ecosystem zones with distinct optical characteristics. An objective zoning method is presented and applied to satellite-derived Chlorophyll a (Chl a) data from the northern Arabian Sea (50-75° E and 15-30° N) during the winter months (November-March). Principal component analysis (PCA) and cluster analysis (CA) were used to statistically delineate the Chl a into zones with similar surface distribution patterns and temporal variability. The PCA identifies principal components of variability and the CA splits these into zones based on similar characteristics. Based on the temporal variability of the Chl a pattern within the study area, the statistical clustering revealed six distinct ecological zones. The obtained zones are related to the Longhurst provinces to evaluate how they compare to established ecological provinces. The Chl a variability within each zone was then compared with the variability of oceanic and atmospheric properties viz. mixed-layer depth (MLD), wind speed, sea-surface temperature (SST), photosynthetically active radiation (PAR), nitrate and dust optical thickness (DOT) as an indication of atmospheric input of iron to the ocean. The analysis showed that in all zones, peak values of Chl a coincided with low SST and deep MLD. The rate of decrease in SST and the deepening of MLD are observed to trigger the algal bloom events in the first four zones. Lagged cross-correlation analysis shows that peak Chl a follows peak MLD and SST minima. The MLD time lag is shorter than the SST lag by 8 days, indicating that the cool surface conditions might have enhanced mixing, leading to increased primary production in the study area. An analysis of monthly climatological nitrate values showed increased concentrations associated with the deepening of the mixed layer. 
The input of iron seems to be important in both the open-ocean and coastal areas of the northern and north-western parts of the northern Arabian Sea, where the seasonal variability of the Chl a pattern closely follows the variability of iron deposition.
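The zoning workflow described in this abstract (PCA to extract dominant modes of temporal variability, then clustering pixels into zones) can be sketched as follows. This is a minimal illustration on synthetic data: the array shapes, the lognormal stand-in for Chl a, and the choice of three components are assumptions, not the study's actual satellite fields; only the six-zone cluster count echoes the abstract.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Hypothetical stand-in for satellite Chl a: rows are pixels,
# columns are time steps over the winter months.
n_pixels, n_times = 500, 20
chl = rng.lognormal(mean=0.0, sigma=0.5, size=(n_pixels, n_times))

# Step 1: PCA extracts the dominant modes of temporal variability.
pca = PCA(n_components=3)
scores = pca.fit_transform(chl)      # (n_pixels, 3) loadings per pixel

# Step 2: cluster the pixel loadings into zones with similar variability.
kmeans = KMeans(n_clusters=6, n_init=10, random_state=0).fit(scores)
zones = kmeans.labels_               # zone id (0..5) for every pixel

print(pca.explained_variance_ratio_.round(2))
print(np.bincount(zones))            # pixels per ecological zone
```

On real data the clusters would then be mapped back to pixel coordinates to draw the zone boundaries.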
NASA Astrophysics Data System (ADS)
Creaco, E.; Berardi, L.; Sun, Siao; Giustolisi, O.; Savic, D.
2016-04-01
The growing availability of field data, from information and communication technologies (ICTs) in "smart" urban infrastructures, allows data modeling to understand complex phenomena and to support management decisions. Among the analyzed phenomena, those related to storm water quality modeling have recently been gaining interest in the scientific literature. Nonetheless, the large amount of available data poses the problem of selecting relevant variables to describe a phenomenon and enable robust data modeling. This paper presents a procedure for the selection of relevant input variables using the multiobjective evolutionary polynomial regression (EPR-MOGA) paradigm. The procedure is based on scrutinizing the explanatory variables that appear inside the set of EPR-MOGA symbolic model expressions of increasing complexity and goodness of fit to target output. The strategy also enables the selection to be validated by engineering judgement. In such context, the multiple case study extension of EPR-MOGA, called MCS-EPR-MOGA, is adopted. The application of the proposed procedure to modeling storm water quality parameters in two French catchments shows that it was able to significantly reduce the number of explanatory variables for successive analyses. Finally, the EPR-MOGA models obtained after the input selection are compared with those obtained by using the same technique without benefitting from input selection and with those obtained in previous works where other data-modeling techniques were used on the same data. The comparison highlights the effectiveness of both EPR-MOGA and the input selection procedure.
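EPR-MOGA itself is a multiobjective evolutionary method and is not reproduced here; the following is a much-simplified analogue of the input-selection idea only: fit polynomial models of increasing complexity and retain the variables that persist in the best-fitting models. The data, the quadratic model form, and the exhaustive subset search are all illustrative assumptions.

```python
import itertools
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical data: 6 candidate explanatory variables, but the target
# depends only on x0 and x2 (plus noise).
X = rng.normal(size=(200, 6))
y = 2.0 * X[:, 0] - 1.5 * X[:, 2] ** 2 + rng.normal(scale=0.1, size=200)

def fit_r2(cols):
    """Least-squares fit of y on the chosen columns (linear + squared
    terms + intercept); returns the coefficient of determination R^2."""
    A = np.column_stack([X[:, c] for c in cols] +
                        [X[:, c] ** 2 for c in cols] + [np.ones(len(y))])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ coef
    return 1.0 - resid.var() / y.var()

# For each model complexity (number of inputs), keep the best subset and
# note which variables it uses -- variables that keep reappearing across
# complexity levels are the ones worth retaining for further modelling.
for k in (1, 2, 3):
    best = max(itertools.combinations(range(6), k), key=fit_r2)
    print(k, best, round(fit_r2(best), 3))
```

In the EPR-MOGA setting the same scrutiny is applied across the Pareto front of symbolic expressions rather than across exhaustive subsets.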
Nonlinear Transfer of Signal and Noise Correlations in Cortical Networks
Lyamzin, Dmitry R.; Barnes, Samuel J.; Donato, Roberta; Garcia-Lazaro, Jose A.; Keck, Tara
2015-01-01
Signal and noise correlations, a prominent feature of cortical activity, reflect the structure and function of networks during sensory processing. However, in addition to reflecting network properties, correlations are also shaped by intrinsic neuronal mechanisms. Here we show that spike threshold transforms correlations by creating nonlinear interactions between signal and noise inputs; even when input noise correlation is constant, spiking noise correlation varies with both the strength and correlation of signal inputs. We characterize these effects systematically in vitro in mice and demonstrate their impact on sensory processing in vivo in gerbils. We also find that the effects of nonlinear correlation transfer on cortical responses are stronger in the synchronized state than in the desynchronized state, and show that they can be reproduced and understood in a model with a simple threshold nonlinearity. Since these effects arise from an intrinsic neuronal property, they are likely to be present across sensory systems and, thus, our results are a critical step toward a general understanding of how correlated spiking relates to the structure and function of cortical networks. PMID:26019325
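The core effect reported here can be reproduced with a static-threshold toy model: even when the input noise correlation is held fixed, the correlation of the thresholded outputs changes with signal strength alone. The bivariate-Gaussian inputs, the threshold value, and all parameters below are illustrative assumptions, not the paper's fitted model.

```python
import numpy as np

rng = np.random.default_rng(2)

def spiking_noise_corr(signal_strength, noise_corr=0.3, theta=1.0,
                       n_trials=200_000):
    """Correlation of thresholded responses for two units receiving a
    common mean drive plus correlated Gaussian noise."""
    cov = np.array([[1.0, noise_corr], [noise_corr, 1.0]])
    noise = rng.multivariate_normal([0.0, 0.0], cov, size=n_trials)
    spikes = (signal_strength + noise > theta).astype(float)  # threshold
    return np.corrcoef(spikes[:, 0], spikes[:, 1])[0, 1]

# Input noise correlation is fixed at 0.3, yet the correlation of the
# thresholded (spiking) outputs varies with the signal strength alone.
for mu in (0.0, 1.0, 2.0):
    print(mu, round(spiking_noise_corr(mu), 3))
```

The output correlation peaks when the mean drive sits at the threshold (a median split of the noise) and falls off on either side, which is the nonlinear transfer the abstract describes.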
Casemix funding in rural NSW: exploring the effects of isolation and size.
Hindle, D; Frances, M; Pearse, J
1998-01-01
The New South Wales Department of Health (NSW Health) wishes to make appropriate use of casemix data as inputs to the determination of funding levels for small rural hospitals. However, other factors such as hospital size and degree of isolation might need to be taken into account. The study reported here involved correlation of actual expenditures with those predicted by use of a casemix model alone, across 105 small public hospitals in the State. We then explored the extent to which the correlation could be increased by the addition of distance and isolation variables. It was found that actual costs were highly correlated with those predicted from the casemix data alone, and that the correlation increased when both the distance and the size variables were introduced. However, contrary to expectations, reduced size was associated with reduced costs, and reduced isolation was associated with increased costs. It was concluded that, while the predicted relationships may be present, they are likely to be relatively weak and are probably being masked by other factors not present in the model. In particular, it seems likely that there are variations in severity within the acute admitted patient category which are not fully explained by the casemix instrument used in this study (the DRG classification). We suggest that other terms be introduced to control for this possibility before any further attempt is made to test whether size and distance factors can be identified which work in the expected direction.
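The two-stage analysis in this abstract (correlate actual costs with casemix-predicted costs, then ask how much adding size and distance improves the correlation) can be sketched on synthetic data. The cost model, variable ranges, and coefficients below are invented for illustration; only the 105-hospital count comes from the study.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 105  # hospitals, as in the study (the data here are synthetic)

casemix_pred = rng.lognormal(mean=8.0, sigma=0.4, size=n)  # predicted cost
size = rng.uniform(10, 60, size=n)                         # beds (hypothetical)
distance = rng.uniform(5, 400, size=n)                     # km (hypothetical)

# Synthetic "actual" costs driven mostly by the casemix prediction.
actual = (casemix_pred * (1 + 0.002 * size - 0.0001 * distance)
          + rng.normal(scale=0.05 * casemix_pred.mean(), size=n))

def r(yhat):
    """Pearson correlation of a prediction with actual costs."""
    return np.corrcoef(actual, yhat)[0, 1]

# Correlation from the casemix model alone ...
print(round(r(casemix_pred), 3))

# ... versus a regression that also includes size and distance.
A = np.column_stack([casemix_pred, size, distance, np.ones(n)])
coef, *_ = np.linalg.lstsq(A, actual, rcond=None)
print(round(r(A @ coef), 3))
```

Because the casemix prediction is included in the larger model, its fitted values can never correlate worse with actual costs; the question, as in the study, is whether the added variables help materially and with the expected signs.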
Factors affecting quality of social interaction park in Jakarta
NASA Astrophysics Data System (ADS)
Mangunsong, N. I.
2018-01-01
The social interaction parks of Jakarta are an oasis in the middle of a concrete jungle. Parks are a response to the need for open space as a place for recreation and community interaction. However, the social interaction parks built by the government often do not function as intended: they are taken over by other uses, such as street vending or waste dumping, or feel unsafe, and so are rarely visited. The purpose of this study was to analyze the factors that affect the quality of social interaction parks in Jakarta by conducting descriptive analysis and correlation analysis of the assessment variables. The results of the analysis give a picture of social interaction parks based on community needs and support proposals for the development of social interaction city parks. The objects of study are 25 social interaction parks in the 5 municipalities of Jakarta. The methods used are descriptive analysis and correlation analysis using SPSS 19, with crosstabs and chi-square tests. The variables cover 5 aspects: design and plant composition, including selection of plant types (D); beauty and harmony (Ind); maintenance and fertility (P); cleanliness and environmental health (BS); and specificity (drainage, multi-function garden, facilities, concern/mutual cooperation, location in dense settlements) (K). The results of the analysis show that beauty has the most significant correlation with the value of the park, followed by specificity, cleanliness and maintenance. Design was not the most significant variable affecting park quality. The results of this study can be used by the Department of Parks and Cemeteries as input for managing existing parks or parks to be developed, and for improving the quality of social interaction parks in Jakarta.
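The crosstab-plus-chi-square step used in this study can be illustrated with a small sketch. The categories, sample size, and response probabilities below are hypothetical, not the survey's data; the point is only the mechanics of testing association between a rated aspect and overall park quality.

```python
import numpy as np
import pandas as pd
from scipy.stats import chi2_contingency

rng = np.random.default_rng(4)

# Hypothetical ratings: each observation scores "beauty" and overall
# park quality (an association is built in for the "high" category).
n = 250
beauty = rng.choice(["low", "medium", "high"], size=n, p=[0.3, 0.4, 0.3])
quality = np.where(
    (beauty == "high") & (rng.random(n) < 0.7), "good",
    rng.choice(["poor", "good"], size=n)
)

table = pd.crosstab(beauty, quality)          # the crosstab step
chi2, p, dof, expected = chi2_contingency(table)
print(table)
print(f"chi2={chi2:.2f}, p={p:.4f}, dof={dof}")
```

A small p-value, as here, indicates the rated aspect and overall quality are associated; ranking aspects by the strength of such associations mirrors the study's finding that beauty correlated most strongly with park value.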
Neural pulse frequency modulation of an exponentially correlated Gaussian process
NASA Technical Reports Server (NTRS)
Hutchinson, C. E.; Chon, Y.-T.
1976-01-01
The effect of NPFM (Neural Pulse Frequency Modulation) on a stationary Gaussian input, namely an exponentially correlated Gaussian input, is investigated with special emphasis on the determination of the average number of pulses in unit time, known also as the average frequency of pulse occurrence. For some classes of stationary input processes where the formulation of the appropriate multidimensional Markov diffusion model of the input-plus-NPFM system is possible, the average impulse frequency may be obtained by a generalization of the approach adopted. The results are approximate and numerical, but are in close agreement with Monte Carlo computer simulation results.
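A Monte Carlo counterpart of this set-up can be sketched by driving an integral-type pulse frequency modulator with an Ornstein-Uhlenbeck process (the discrete-time form of an exponentially correlated Gaussian input) and counting pulses per unit time. The modulator form (integrate, fire on magnitude threshold, reset) and all parameter values are illustrative assumptions, not the paper's exact NPFM model.

```python
import numpy as np

rng = np.random.default_rng(5)

# Exponentially correlated Gaussian input as an Ornstein-Uhlenbeck
# process with correlation time tau, generated by exact discretization.
dt, tau, sigma, T = 0.001, 0.5, 1.0, 200.0
n = int(T / dt)
a = np.exp(-dt / tau)
b = sigma * np.sqrt(1.0 - a * a)
eps = rng.normal(size=n)
x = np.empty(n)
x[0] = 0.0
for i in range(1, n):
    x[i] = a * x[i - 1] + b * eps[i]

# Pulse frequency modulator: integrate the input and emit a pulse
# (then reset) whenever the integral magnitude crosses a threshold.
threshold, integral, pulses = 0.2, 0.0, 0
for xi in x:
    integral += xi * dt
    if abs(integral) >= threshold:
        pulses += 1
        integral = 0.0

print("average pulse frequency:", pulses / T)
```

Sweeping the correlation time tau or the input variance in such a simulation is the Monte Carlo check the paper compares its approximate analytical results against.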
Duval, Benjamin D; Ghimire, Rajan; Hartman, Melannie D; Marsalis, Mark A
2018-01-01
External inputs to agricultural systems can overcome latent soil and climate constraints on production, while contributing to greenhouse gas emissions from fertilizer and water management inefficiencies. Proper crop selection for a given region can lessen the need for irrigation, and matching the timing of N fertilizer application to crop N demand can potentially reduce N2O emissions and increase N use efficiency while reducing residual soil N and N leaching. However, increased variability in precipitation is an expectation of climate change and makes predicting biomass and gas flux responses to management more challenging. We used the DayCent model to test hypotheses about input intensity controls on sorghum (Sorghum bicolor (L.) Moench) productivity and greenhouse gas emissions in the southwestern United States under future climate. Sorghum had been previously parameterized for DayCent, but an inverse-modeling-via-parameter-estimation method significantly improved model validation against field data. Aboveground production and N2O flux were more responsive to N additions than to irrigation, but simulations with future climate produced lower values for sorghum than current climate. We found positive interactions between irrigation and increased N application for N2O and CO2 fluxes. Extremes in sorghum production under future climate were a function of biomass accumulation trajectories related to daily soil water and mineral N. Root C inputs correlated with soil organic C pools, but overall soil C declined at the decadal scale under current weather while modest gains were simulated under future weather. Scaling biomass and N2O fluxes by unit N and water input revealed that sorghum can be productive without irrigation, and the effect of irrigating crops is difficult to forecast when precipitation is variable within the growing season. 
These simulation results demonstrate the importance of understanding sorghum production and greenhouse gas emissions at daily scales when assessing annual and decadal-scale management decisions' effects on aspects of arid and semiarid agroecosystem biogeochemistry.
NASA Technical Reports Server (NTRS)
Gottschalck, Jon; Meng, Jesse; Rodell, Matt; Houser, Paul
2005-01-01
Land surface models (LSMs) are computer programs, similar to weather and climate prediction models, which simulate the stocks and fluxes of water (including soil moisture, snow, evaporation, and runoff) and energy (including the temperature of and sensible heat released from the soil) after they arrive on the land surface as precipitation and sunlight. It is not currently possible to measure all of the variables of interest everywhere on Earth with sufficient accuracy and space-time resolution. Hence LSMs have been developed to integrate the available observations with our understanding of the physical processes involved, using powerful computers, in order to map these stocks and fluxes as they change in time. The maps are used to improve weather forecasts, support water resources and agricultural applications, and study the Earth's water cycle and climate variability. NASA's Global Land Data Assimilation System (GLDAS) project facilitates testing of several different LSMs with a variety of input datasets (e.g., precipitation, plant type). Precipitation is arguably the most important input to LSMs. Many precipitation datasets have been produced using satellite and rain gauge observations and weather forecast models. In this study, seven different global precipitation datasets were evaluated over the United States, where dense rain gauge networks contribute to reliable precipitation maps. We then used the seven datasets as inputs to GLDAS simulations, so that we could diagnose their impacts on output stocks and fluxes of water. In terms of totals, the Climate Prediction Center (CPC) Merged Analysis of Precipitation (CMAP) had the closest agreement with the US rain gauge dataset for all seasons except winter. The CMAP precipitation was also the most closely correlated in time with the rain gauge data during spring, fall, and winter, while the satellite-based estimates performed best in summer. 
The GLDAS simulations revealed that modeled soil moisture is highly sensitive to precipitation, with differences in spring and summer as large as 45% depending on the choice of precipitation input.
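The evaluation step described above (seasonal totals and temporal correlation of each candidate product against gauge data) can be sketched on synthetic series; the product error levels below are invented for illustration, not the study's seven datasets.

```python
import numpy as np

rng = np.random.default_rng(6)

# Hypothetical daily gauge series for one season, plus two candidate
# precipitation products with different (invented) error structures.
days = 90
gauge = rng.gamma(shape=0.5, scale=4.0, size=days)          # mm/day
product_a = gauge + rng.normal(scale=1.0, size=days)         # low-error
product_b = 0.7 * gauge + rng.normal(scale=3.0, size=days)   # biased, noisy

for name, series in [("A", product_a), ("B", product_b)]:
    bias = series.sum() - gauge.sum()                 # seasonal total bias
    r = np.corrcoef(gauge, series)[0, 1]              # temporal correlation
    print(f"product {name}: total bias = {bias:8.1f} mm, r = {r:.2f}")
```

Repeating this by season, as the study does, separates products that get the totals right from those that also track the day-to-day timing.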
National Assessment of Geologic Carbon Dioxide Storage Resources -- Trends and Interpretations
NASA Astrophysics Data System (ADS)
Buursink, M. L.; Blondes, M. S.; Brennan, S.; Drake, R., II; Merrill, M. D.; Roberts-Ashby, T. L.; Slucher, E. R.; Warwick, P.
2013-12-01
In 2012, the U.S. Geological Survey (USGS) completed an assessment of the technically accessible storage resource (TASR) for carbon dioxide (CO2) in geologic formations underlying the onshore and State waters area of the United States. The formations assessed are at least 3,000 feet (914 meters) below the ground surface. The TASR is an estimate of the CO2 storage resource that may be available for CO2 injection and storage, based on present-day geologic and hydrologic knowledge of the subsurface and current engineering practices. Individual storage assessment units (SAUs) for 36 basins or study areas were defined on the basis of geologic and hydrologic characteristics outlined in the USGS assessment methodology. The mean national TASR is approximately 3,000 metric gigatons. To augment the release of the assessment, this study reviews input estimates and output results as part of the resource calculation. Included in this study is a collection of cross-plots and maps demonstrating our trends and interpretations. Alongside the assessment, the input estimates were examined for consistency between SAUs and cross-plotted to verify expected trends, such as decreasing storage formation porosity with increasing SAU depth, and to show a positive correlation between storage formation porosity and permeability estimates. Following the assessment, the output results were examined for correlation with selected input estimates. For example, there exists a positive correlation between CO2 density and the TASR, and between storage formation porosity and the TASR, as expected. These correlations, in part, serve to verify our estimates for the geologic variables. The USGS assessment concluded that the Coastal Plains Region of the eastern and southeastern United States contains the largest storage resource. Within the Coastal Plains Region, the storage resources from the U.S. 
Gulf Coast study area represent 59 percent of the national CO2 storage capacity. As part of this follow up study, additional maps were generated to show the geographic distribution of the input estimates and the output results across the U.S. For example, the distribution of the SAUs with fresh, saline or mixed formation water quality is shown. Also mapped is the variation in CO2 density as related to basin location and to related properties such as subsurface temperature and pressure. Furthermore, variation in the estimated SAU depth and resulting TASR are shown across the assessment study areas, and these depend on the geologic basin size and filling history. Ultimately, multiple map displays are possible with the complete data set of input estimates and range of reported results. The findings from this study show the effectiveness of the USGS methodology and the robustness of the assessment.
Urban vs. Rural CLIL: An Analysis of Input-Related Variables, Motivation and Language Attainment
ERIC Educational Resources Information Center
Alejo, Rafael; Piquer-Píriz, Ana
2016-01-01
The present article carries out an in-depth analysis of the differences in motivation, input-related variables and linguistic attainment of the students at two content and language integrated learning (CLIL) schools operating within the same institutional and educational context, the Spanish region of Extremadura, and differing only in terms of…
Variable Input and the Acquisition of Plural Morphology
ERIC Educational Resources Information Center
Miller, Karen L.; Schmitt, Cristina
2012-01-01
The present article examines the effect of variable input on the acquisition of plural morphology in two varieties of Spanish: Chilean Spanish, where the plural marker is sometimes omitted due to a phonological process of syllable final /s/ lenition, and Mexican Spanish (of Mexico City), with no such lenition process. The goal of the study is to…
Precision digital pulse phase generator
McEwan, T.E.
1996-10-08
A timing generator comprises a crystal oscillator connected to provide an output reference pulse. A resistor-capacitor combination is connected to provide a variable-delay output pulse from an input connected to the crystal oscillator. A phase monitor is connected to provide duty-cycle representations of the reference and variable-delay output pulse phase. An operational amplifier drives a control voltage to the resistor-capacitor combination according to currents integrated from the phase monitor and injected into summing junctions. A digital-to-analog converter injects a control current into the summing junctions according to an input digital control code. A servo equilibrium results that provides a phase delay of the variable-delay output pulse to the output reference pulse that linearly depends on the input digital control code. 2 figs.
Precision digital pulse phase generator
McEwan, Thomas E.
1996-01-01
A timing generator comprises a crystal oscillator connected to provide an output reference pulse. A resistor-capacitor combination is connected to provide a variable-delay output pulse from an input connected to the crystal oscillator. A phase monitor is connected to provide duty-cycle representations of the reference and variable-delay output pulse phase. An operational amplifier drives a control voltage to the resistor-capacitor combination according to currents integrated from the phase monitor and injected into summing junctions. A digital-to-analog converter injects a control current into the summing junctions according to an input digital control code. A servo equilibrium results that provides a phase delay of the variable-delay output pulse to the output reference pulse that linearly depends on the input digital control code.
Reconstruction of organochlorine compound inputs in the Tagus Prodelta.
Mil-Homens, Mário; Vicente, Maria; Grimalt, Joan O; Micaelo, Cristina; Abrantes, Fátima
2016-01-01
Twentieth-century time-resolved variability of riverine deposits of polychlorobiphenyls (PCBs), DDTs, hexachlorocyclohexanes (HCHs) and hexachlorobenzene (HCB) was studied in three (210)Pb-dated sediment cores collected in a depositional shelf area adjacent to the Tagus estuary (the Tagus Prodelta). The geographic and temporal distribution patterns were consistent with discharge of these organochlorine compounds (OCs) in the area associated with the Tagus mouth. Their concentrations were not correlated with the sedimentary total organic carbon. The PCB down-core profiles were dominated by CB138 and CB153 (hexa-CBs) congeners followed by CB180 (hepta-CBs). Principal Component Analysis of the congener distributions of these compounds did not define temporal down-core trends. The ratios of DDT metabolites (p,p'-DDE/p,p'-DDT) were consistent with recent DDT inputs into the environment and/or earlier applications and long-term residence in soils/sediments until these were eroded and remobilized. Copyright © 2015 Elsevier B.V. All rights reserved.
Flight Test of Orthogonal Square Wave Inputs for Hybrid-Wing-Body Parameter Estimation
NASA Technical Reports Server (NTRS)
Taylor, Brian R.; Ratnayake, Nalin A.
2011-01-01
As part of an effort to improve emissions, noise, and performance of next generation aircraft, it is expected that future aircraft will use distributed, multi-objective control effectors in a closed-loop flight control system. Correlation challenges associated with parameter estimation will arise with this expected aircraft configuration. The research presented in this paper focuses on addressing the correlation problem with an appropriate input design technique in order to determine individual control surface effectiveness. This technique was validated through flight-testing an 8.5-percent-scale hybrid-wing-body aircraft demonstrator at the NASA Dryden Flight Research Center (Edwards, California). An input design technique that uses mutually orthogonal square wave inputs for de-correlation of control surfaces is proposed. Flight-test results are compared with prior flight-test results for a different maneuver style.
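One standard construction of mutually orthogonal square-wave inputs is the Walsh-Hadamard family; the flight-test inputs may well differ in amplitude scaling and timing, but the de-correlation property they rely on can be checked as below. The order-2 matrix size is an arbitrary choice for illustration.

```python
import numpy as np

def walsh(order):
    """Walsh-Hadamard matrix of size 2**order: rows are mutually
    orthogonal +/-1 square-wave-like sequences."""
    H = np.array([[1]])
    for _ in range(order):
        H = np.block([[H, H], [H, -H]])
    return H

# Four mutually orthogonal inputs, one per control surface. Over a shared
# excitation window any two rows have zero cross-correlation, so the
# individual surface-effectiveness estimates de-correlate in the
# parameter-estimation regression.
W = walsh(2)            # 4 x 4
print(W)
print(W @ W.T)          # diagonal matrix => pairwise orthogonality
```

Stretching each column in time (e.g. `np.repeat(W, 8, axis=1)`) turns the rows into actual square waves while preserving the orthogonality.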
Investigations in quantum games using EPR-type set-ups
NASA Astrophysics Data System (ADS)
Iqbal, Azhar
2006-04-01
Research in quantum games has flourished during recent years. However, it seems that opinion remains divided about their true quantum character and content. For example, one argument says that quantum games are nothing but 'disguised' classical games and that to quantize a game is equivalent to replacing the original game by a different classical game. The present thesis contributes towards the ongoing debate about quantum nature of quantum games by developing two approaches addressing the related issues. Both approaches take Einstein-Podolsky-Rosen (EPR)-type experiments as the underlying physical set-ups to play two-player quantum games. In the first approach, the players' strategies are unit vectors in their respective planes, with the knowledge of coordinate axes being shared between them. Players perform measurements in an EPR-type setting and their payoffs are defined as functions of the correlations, i.e. without reference to classical or quantum mechanics. Classical bimatrix games are reproduced if the input states are classical and perfectly anti-correlated, as for a classical correlation game. However, for a quantum correlation game, with an entangled singlet state as input, qualitatively different solutions are obtained. The second approach uses the result that when the predictions of a Local Hidden Variable (LHV) model are made to violate the Bell inequalities the result is that some probability measures assume negative values. With the requirement that classical games result when the predictions of a LHV model do not violate the Bell inequalities, our analysis looks at the impact which the emergence of negative probabilities has on the solutions of two-player games which are physically implemented using the EPR-type experiments.
Solving the two-dimensional Fokker-Planck equation for strongly correlated neurons
NASA Astrophysics Data System (ADS)
Deniz, Taşkın; Rotter, Stefan
2017-01-01
Pairs of neurons in brain networks often share much of the input they receive from other neurons. Due to essential nonlinearities of the neuronal dynamics, the consequences for the correlation of the output spike trains are generally not well understood. Here we analyze the case of two leaky integrate-and-fire neurons using an approach which is nonperturbative with respect to the degree of input correlation. Our treatment covers both weakly and strongly correlated dynamics, generalizing previous results based on linear response theory.
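A brute-force simulation counterpart of this analysis (not the paper's nonperturbative Fokker-Planck method) looks at two leaky integrate-and-fire neurons whose input noise has a tunable shared fraction c, and measures the correlation of their output spike counts. All parameter values below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(7)

def lif_pair(c, n_steps=100_000, dt=0.1, tau=10.0, v_th=1.0,
             mu=0.05, sigma=0.5):
    """Two leaky integrate-and-fire neurons sharing a fraction c of their
    input noise; returns the correlation of their spike counts."""
    shared = rng.normal(size=n_steps)
    private = rng.normal(size=(n_steps, 2))
    sc, sp, sd = np.sqrt(c), np.sqrt(1.0 - c), sigma * np.sqrt(dt)
    n_win = 200
    win = n_steps // n_win
    counts = np.zeros((2, n_win))
    v = np.zeros(2)
    for t in range(n_steps):
        xi = sc * shared[t] + sp * private[t]     # correlated input noise
        v += dt * (-v / tau + mu) + sd * xi       # leaky integration
        fired = v >= v_th
        counts[fired, t // win] += 1
        v[fired] = 0.0                            # reset after a spike
    return np.corrcoef(counts[0], counts[1])[0, 1]

# Output spike-count correlation increases with input correlation c,
# but not proportionally -- the spiking nonlinearity shapes the transfer.
for c in (0.1, 0.5, 0.9):
    print(c, round(lif_pair(c), 2))
```

The paper's contribution is to compute this input-output correlation transfer analytically for arbitrarily strong c, where linear response theory breaks down.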
'spup' - an R package for uncertainty propagation in spatial environmental modelling
NASA Astrophysics Data System (ADS)
Sawicka, Kasia; Heuvelink, Gerard
2016-04-01
Computer models have become a crucial tool in engineering and environmental sciences for simulating the behaviour of complex static and dynamic systems. However, while many models are deterministic, the uncertainty in their predictions needs to be estimated before they are used for decision support. Currently, advances in uncertainty propagation and assessment have been paralleled by a growing number of software tools for uncertainty analysis, but none has gained recognition as universally applicable, including to case studies with spatial models and spatial model inputs. Due to the growing popularity and applicability of the open source R programming language we undertook a project to develop an R package that facilitates uncertainty propagation analysis in spatial environmental modelling. In particular, the 'spup' package provides functions for examining the uncertainty propagation starting from input data and model parameters, via the environmental model, onto model predictions. The functions include uncertainty model specification, stochastic simulation and propagation of uncertainty using Monte Carlo (MC) techniques, as well as several uncertainty visualization functions. Uncertain environmental variables are represented in the package as objects whose attribute values may be uncertain and described by probability distributions. Both numerical and categorical data types are handled. Spatial auto-correlation within an attribute and cross-correlation between attributes are also accommodated. For uncertainty propagation the package implements the MC approach with efficient sampling algorithms, i.e. stratified random sampling and Latin hypercube sampling. The design includes facilitation of parallel computing to speed up MC computation. The MC realizations may be used as an input to the environmental models called from R, or externally. 
Selected static and interactive visualization methods that are understandable by non-experts with limited background in statistics can be used to summarize and visualize uncertainty about the measured input, model parameters and output of the uncertainty propagation. We demonstrate that the 'spup' package is an effective and easy tool to apply and can be used in multi-disciplinary research and model-based decision support.
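Although 'spup' is an R package, its core workflow (describe input uncertainty with distributions, draw a Latin hypercube sample, run the model per realization, summarize the output) can be sketched in Python with SciPy. The toy model, variable names, and distributions below are invented for illustration and are not part of 'spup'.

```python
import numpy as np
from scipy.stats import qmc, norm

# Toy environmental model: output depends nonlinearly on two uncertain
# inputs (the names and functional form are illustrative only).
def model(rain, soil_k):
    return rain ** 1.5 / (1.0 + soil_k)

# Step 1: specify input uncertainty with probability distributions.
# Step 2: draw a Latin hypercube sample and map it to those distributions.
sampler = qmc.LatinHypercube(d=2, seed=0)
u = sampler.random(n=1000)                        # uniform [0,1)^2
rain = norm(loc=50.0, scale=5.0).ppf(u[:, 0])     # mm
soil_k = norm(loc=2.0, scale=0.2).ppf(u[:, 1])

# Step 3: propagate through the model and summarize output uncertainty.
out = model(rain, soil_k)
print(f"mean = {out.mean():.1f}, sd = {out.std():.1f}")
print("95% interval:", np.percentile(out, [2.5, 97.5]).round(1))
```

The package additionally handles spatial auto- and cross-correlation between uncertain inputs, which this independent-inputs sketch omits.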
'spup' - an R package for uncertainty propagation analysis in spatial environmental modelling
NASA Astrophysics Data System (ADS)
Sawicka, Kasia; Heuvelink, Gerard
2017-04-01
Computer models have become a crucial tool in engineering and environmental sciences for simulating the behaviour of complex static and dynamic systems. However, while many models are deterministic, the uncertainty in their predictions needs to be estimated before they are used for decision support. Currently, advances in uncertainty propagation and assessment have been paralleled by a growing number of software tools for uncertainty analysis, but none has gained recognition as universally applicable and able to deal with case studies involving spatial models and spatial model inputs. Due to the growing popularity and applicability of the open source R programming language we undertook a project to develop an R package that facilitates uncertainty propagation analysis in spatial environmental modelling. In particular, the 'spup' package provides functions for examining the uncertainty propagation starting from input data and model parameters, via the environmental model, onto model predictions. The functions include uncertainty model specification, stochastic simulation and propagation of uncertainty using Monte Carlo (MC) techniques, as well as several uncertainty visualization functions. Uncertain environmental variables are represented in the package as objects whose attribute values may be uncertain and described by probability distributions. Both numerical and categorical data types are handled. Spatial auto-correlation within an attribute and cross-correlation between attributes are also accommodated. For uncertainty propagation the package implements the MC approach with efficient sampling algorithms, i.e. stratified random sampling and Latin hypercube sampling. The design includes facilitation of parallel computing to speed up MC computation. The MC realizations may be used as an input to the environmental models called from R, or externally. 
Selected visualization methods that are understandable by non-experts with limited background in statistics can be used to summarize and visualize uncertainty about the measured input, model parameters and output of the uncertainty propagation. We demonstrate that the 'spup' package is an effective and easy tool to apply and can be used in multi-disciplinary research and model-based decision support.
Regenerative braking device with rotationally mounted energy storage means
Hoppie, Lyle O.
1982-03-16
A regenerative braking device for an automotive vehicle includes an energy storage assembly (12) having a plurality of rubber rollers (26, 28) mounted for rotation between an input shaft (30) and an output shaft (32), clutches (50, 56) and brakes (52, 58) associated with each shaft, and a continuously variable transmission (22) connectable to a vehicle drivetrain and to the input and output shafts by the respective clutches. In a second embodiment the clutches and brakes are dispensed with and the variable-ratio transmission is connected directly across the input and output shafts. In both embodiments the rubber rollers are torsionally stressed to accumulate energy from the vehicle when the input shaft rotates faster than the output shaft, and are torsionally relaxed to deliver energy to the vehicle when the output shaft rotates faster than the input shaft.
NASA Technical Reports Server (NTRS)
Carlson, C. R.
1981-01-01
User documentation of the SYSGEN model and its links with other simulations is described. SYSGEN is a production-costing and reliability model of electric utility systems. Hydroelectric, storage, and time-dependent generating units are modeled in addition to conventional generating plants. Input variables, modeling options, output variables, and report formats are explained. SYSGEN can also be run interactively using a program called FEPS (Front End Program for SYSGEN). A format for SYSGEN input variables designed for use with FEPS is presented.
Modelling of Cosmic Molecular Masers: Introduction to a Computation Cookbook
NASA Astrophysics Data System (ADS)
Sobolev, Andrej M.; Gray, Malcolm D.
2012-07-01
Numerical modeling of molecular masers is necessary in order to understand their nature and diagnostic capabilities. Model construction requires elaboration of a basic description which allows computation, that is, a definition of the parameter space and the basic physical relations. Usually, this requires additional thorough studies that can consist of the following stages/parts: relevant molecular spectroscopy and collisional rate coefficients; conditions in and around the masing region (that part of space where population inversion is realized); geometry and size of the masing region (including the question of whether maser spots are discrete clumps or line-of-sight correlations in a much bigger region); and propagation of maser radiation. Output of maser computer modeling can take the following forms: exploration of parameter space (where inversions appear in particular maser transitions and their combinations, which parameter values describe a `typical' source, and so on); modeling of individual sources (line flux ratios, spectra, images and their variability); analysis of the pumping mechanism; and predictions (new maser transitions, correlations in the variability of different maser transitions, and the like). The schemes described here (constituents and hierarchy) for the model input and output are based mainly on the experience of the authors and make no claim to be dogmatic.
12 CFR Appendix A to Subpart B of... - Risk-Based Capital Test Methodology and Specifications
Code of Federal Regulations, 2013 CFR
2013-01-01
....3.2, Mortgage Amortization Schedule Inputs 3-32, Loan Group Inputs for Mortgage Amortization... Prepayment Explanatory Variables F 3.6.3.5.2, Multifamily Default and Prepayment Inputs 3-38, Loan Group... Group inputs for Gross Loss Severity F 3.3.4, Interest Rates Outputs 3.6.3.3.4, Mortgage Amortization...
12 CFR Appendix A to Subpart B of... - Risk-Based Capital Test Methodology and Specifications
Code of Federal Regulations, 2011 CFR
2011-01-01
....3.2, Mortgage Amortization Schedule Inputs 3-32, Loan Group Inputs for Mortgage Amortization... Prepayment Explanatory Variables F 3.6.3.5.2, Multifamily Default and Prepayment Inputs 3-38, Loan Group... Group inputs for Gross Loss Severity F 3.3.4, Interest Rates Outputs 3.6.3.3.4, Mortgage Amortization...
12 CFR Appendix A to Subpart B of... - Risk-Based Capital Test Methodology and Specifications
Code of Federal Regulations, 2012 CFR
2012-01-01
....3.2, Mortgage Amortization Schedule Inputs 3-32, Loan Group Inputs for Mortgage Amortization... Prepayment Explanatory Variables F 3.6.3.5.2, Multifamily Default and Prepayment Inputs 3-38, Loan Group... Group inputs for Gross Loss Severity F 3.3.4, Interest Rates Outputs3.6.3.3.4, Mortgage Amortization...
12 CFR Appendix A to Subpart B of... - Risk-Based Capital Test Methodology and Specifications
Code of Federal Regulations, 2014 CFR
2014-01-01
....3.2, Mortgage Amortization Schedule Inputs 3-32, Loan Group Inputs for Mortgage Amortization... Prepayment Explanatory Variables F 3.6.3.5.2, Multifamily Default and Prepayment Inputs 3-38, Loan Group... Group inputs for Gross Loss Severity F 3.3.4, Interest Rates Outputs3.6.3.3.4, Mortgage Amortization...
Fitting direct covariance structures by the MSTRUCT modeling language of the CALIS procedure.
Yung, Yiu-Fai; Browne, Michael W; Zhang, Wei
2015-02-01
This paper demonstrates the usefulness and flexibility of the general structural equation modelling (SEM) approach to fitting direct covariance patterns or structures (as opposed to fitting implied covariance structures from functional relationships among variables). In particular, the MSTRUCT modelling language (or syntax) of the CALIS procedure (SAS/STAT version 9.22 or later: SAS Institute, 2010) is used to illustrate the SEM approach. The MSTRUCT modelling language supports a direct covariance pattern specification of each covariance element. It also supports the input of additional independent and dependent parameters. Model tests, fit statistics, estimates, and their standard errors are then produced under the general SEM framework. By using numerical and computational examples, the following tests of basic covariance patterns are illustrated: sphericity, compound symmetry, and multiple-group covariance patterns. Specification and testing of two complex correlation structures, the circumplex pattern and the composite direct product models with or without composite errors and scales, are also illustrated by the MSTRUCT syntax. It is concluded that the SEM approach offers a general and flexible modelling of direct covariance and correlation patterns. In conjunction with the use of SAS macros, the MSTRUCT syntax provides an easy-to-use interface for specifying and fitting complex covariance and correlation structures, even when the number of variables or parameters becomes large. © 2014 The British Psychological Society.
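As a loose illustration of the direct covariance patterns discussed above (outside of SAS), a compound-symmetry structure can be fitted by simple moment matching. This Python/NumPy sketch is an assumed stand-in for PROC CALIS's maximum-likelihood fitting, not an equivalent of it:

```python
import numpy as np

def fit_compound_symmetry(S):
    """Fit a compound-symmetry pattern (one common variance, one common
    covariance) to a sample covariance matrix by moment matching, and
    return the fitted matrix plus a Frobenius-norm lack-of-fit measure."""
    p = S.shape[0]
    var = np.diag(S).mean()                 # common variance estimate
    cov = S[~np.eye(p, dtype=bool)].mean()  # common covariance estimate
    fitted = np.full((p, p), cov)
    np.fill_diagonal(fitted, var)
    return fitted, np.linalg.norm(S - fitted)

# An exactly compound-symmetric matrix should be reproduced perfectly.
S = np.array([[2.0, 0.5, 0.5],
              [0.5, 2.0, 0.5],
              [0.5, 0.5, 2.0]])
fitted, resid = fit_compound_symmetry(S)
```

The residual norm serves as a crude lack-of-fit diagnostic; the SEM approach instead tests such patterns with formal fit statistics.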
The Effect of Visual Variability on the Learning of Academic Concepts.
Bourgoyne, Ashley; Alt, Mary
2017-06-10
The purpose of this study was to identify effects of variability of visual input on development of conceptual representations of academic concepts for college-age students with normal language (NL) and those with language-learning disabilities (LLD). Students with NL (n = 11) and LLD (n = 11) participated in a computer-based training for introductory biology course concepts. Participants were trained on half the concepts under a low-variability condition and half under a high-variability condition. Participants completed a posttest in which they were asked to identify and rate the accuracy of novel and trained visual representations of the concepts. We performed separate repeated measures analyses of variance to examine the accuracy of identification and ratings. Participants were equally accurate on trained and novel items in the high-variability condition, but were less accurate on novel items only in the low-variability condition. The LLD group showed the same pattern as the NL group; they were just less accurate. Results indicated that high-variability visual input may facilitate the acquisition of academic concepts in college students with NL and LLD. High-variability visual input may be especially beneficial for generalization to novel representations of concepts. Implicit learning methods may be harnessed by college courses to provide students with basic conceptual knowledge when they are entering courses or beginning new units.
Whiteway, Matthew R; Butts, Daniel A
2017-03-01
The activity of sensory cortical neurons is not only driven by external stimuli but also shaped by other sources of input to the cortex. Unlike external stimuli, these other sources of input are challenging to experimentally control, or even observe, and as a result contribute to variability of neural responses to sensory stimuli. However, such sources of input are likely not "noise" and may play an integral role in sensory cortex function. Here we introduce the rectified latent variable model (RLVM) in order to identify these sources of input using simultaneously recorded cortical neuron populations. The RLVM is novel in that it employs nonnegative (rectified) latent variables and is much less restrictive in the mathematical constraints on solutions because of the use of an autoencoder neural network to initialize model parameters. We show that the RLVM outperforms principal component analysis, factor analysis, and independent component analysis, using simulated data across a range of conditions. We then apply this model to two-photon imaging of hundreds of simultaneously recorded neurons in mouse primary somatosensory cortex during a tactile discrimination task. Across many experiments, the RLVM identifies latent variables related to both the tactile stimulation as well as nonstimulus aspects of the behavioral task, with a majority of activity explained by the latter. These results suggest that properly identifying such latent variables is necessary for a full understanding of sensory cortical function and demonstrate novel methods for leveraging large population recordings to this end. NEW & NOTEWORTHY The rapid development of neural recording technologies presents new opportunities for understanding patterns of activity across neural populations. Here we show how a latent variable model with appropriate nonlinear form can be used to identify sources of input to a neural population and infer their time courses. 
Furthermore, we demonstrate how these sources are related to behavioral contexts outside of direct experimental control. Copyright © 2017 the American Physiological Society.
Zhang, Mingming; Zhao, Zongya; He, Ping; Wang, Jue
2014-01-01
Gap junctions are the mechanism for striatal fast-spiking interneurons (FSIs) to interconnect with each other and play an important role in determining the physiological functioning of the FSIs. To investigate the effect of gap junctions on the firing activities and synchronization of the network for different external inputs, a simple network with least connections and a Newman-Watts small-world network were constructed. Our research shows that both properties of neural networks are related to the conductance of the gap junctions, as well as the frequency and correlation of the external inputs. The effect of gap junctions on the synchronization of network is different for inputs with different frequencies and correlations. The addition of gap junctions can promote the network synchrony in some conditions but suppress it in others, and they can inhibit the firing activities in most cases. Both the firing rate and synchronization of the network increase along with the increase of the electrical coupling strength for inputs with low frequency and high correlation. Thus, the network of coupled FSIs can act as a detector for synchronous synaptic input from cortex and thalamus.
Kanerva's sparse distributed memory with multiple hamming thresholds
NASA Technical Reports Server (NTRS)
Pohja, Seppo; Kaski, Kimmo
1992-01-01
If the stored input patterns of Kanerva's Sparse Distributed Memory (SDM) are highly correlated, utilization of the storage capacity is very low compared to the case of uniformly distributed random input patterns. We consider a variation of SDM that utilizes the storage capacity better for correlated input patterns. This approach uses a separate selection threshold for each physical storage address, or hard location. The selection of the hard locations for reading or writing can be done in parallel, which implementations of SDM can exploit.
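The per-location threshold idea can be sketched in a few lines. The following toy SDM (hypothetical sizes and threshold values, not taken from the paper) stores one pattern autoassociatively and reads it back; each hard location compares the query address against its own Hamming threshold:

```python
import numpy as np

rng = np.random.default_rng(0)
N, M = 64, 200                            # address length, hard locations
hard = rng.integers(0, 2, (M, N))         # random hard-location addresses
thresholds = np.full(M, 28)               # per-location Hamming thresholds
counters = np.zeros((M, N), dtype=int)    # SDM counter matrix

def select(addr):
    """Activate every hard location within its own Hamming threshold."""
    return np.sum(hard != addr, axis=1) <= thresholds

def write(addr, data):
    counters[select(addr)] += np.where(data == 1, 1, -1)

def read(addr):
    sums = counters[select(addr)].sum(axis=0)
    return (sums > 0).astype(int)

pattern = rng.integers(0, 2, N)
write(pattern, pattern)                   # autoassociative store
recalled = read(pattern)
```

With uniform thresholds this reduces to classical SDM; the paper's variation adapts each location's threshold so that correlated patterns activate more balanced sets of locations.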
NASA Astrophysics Data System (ADS)
Nooruddin, Hasan A.; Anifowose, Fatai; Abdulraheem, Abdulazeez
2014-03-01
Soft computing techniques have recently become very popular in the oil industry. A number of computational intelligence-based predictive methods have been widely applied in the industry with high prediction capabilities. Some of the popular methods include feed-forward neural networks, radial basis function networks, generalized regression neural networks, functional networks, support vector regression and adaptive network fuzzy inference systems. A comparative study among the most popular soft computing techniques is presented using a large dataset published in the literature describing multimodal pore systems in the Arab D formation. The inputs to the models are air porosity, grain density, and Thomeer parameters obtained from mercury injection capillary pressure profiles. Corrected air permeability is the target variable. Applying the developed permeability models in a recent reservoir characterization workflow ensures consistency between micro- and macro-scale information, represented mainly by the Thomeer parameters and absolute permeability. The dataset was divided into two parts, with 80% of the data used for training and 20% for testing. The target permeability variable was transformed to the logarithmic scale as a pre-processing step to improve its correlations with the input variables. Statistical and graphical analyses of the results, including permeability cross-plots and detailed error measures, were created. In general, the comparative study showed very close results among the developed models. The feed-forward neural network permeability model showed the lowest average relative error, average absolute relative error, standard deviation of error and root mean square error, making it the best model for such problems. The adaptive network fuzzy inference system also showed very good results.
Stylus/tablet user input device for MRI heart wall segmentation: efficiency and ease of use.
Taslakian, Bedros; Pires, Antonio; Halpern, Dan; Babb, James S; Axel, Leon
2018-05-02
To determine whether use of a stylus user input device (UID) would be superior to a mouse for CMR segmentation. Twenty-five consecutive clinical cardiac magnetic resonance (CMR) examinations were selected. Image analysis was independently performed by four observers. Manual tracing of left (LV) and right (RV) ventricular endocardial contours was performed twice in 10 randomly assigned sessions, each session using only one UID. Segmentation time and the ventricular function variables were recorded. The mean segmentation time and time reduction were calculated for each method. Intraclass correlation coefficients (ICC) and Bland-Altman plots of function variables were used to assess intra- and interobserver variability and agreement between methods. Observers completed a Likert-type questionnaire. The mean segmentation time (in seconds) was significantly less with the stylus compared to the mouse, averaging 206±108 versus 308±125 (p<0.001) and 225±140 versus 353±162 (p<0.001) for LV and RV segmentation, respectively. The intra- and interobserver agreement rates were excellent (ICC≥0.75) regardless of the UID. There was an excellent agreement between measurements derived from manual segmentation using different UIDs (ICC≥0.75), with few exceptions. Observers preferred the stylus. The study shows a significant reduction in segmentation time using the stylus, a subjective preference, and excellent agreement between the methods. • Using a stylus for MRI ventricular segmentation is faster compared to mouse • A stylus is easier to use and results in less fatigue • There is excellent agreement between stylus and mouse UIDs.
Not All Children Agree: Acquisition of Agreement when the Input Is Variable
ERIC Educational Resources Information Center
Miller, Karen
2012-01-01
In this paper we investigate the effect of variable input on the acquisition of grammar. More specifically, we examine the acquisition of the third person singular marker -s on the auxiliary "do" in comprehension and production in two groups of children who are exposed to similar varieties of English but that differ with respect to adult…
Inthachot, Montri; Boonjing, Veera; Intakosum, Sarun
2016-01-01
This study investigated the use of an Artificial Neural Network (ANN) and a Genetic Algorithm (GA) for prediction of Thailand's SET50 index trend. ANN is a widely accepted machine learning method that uses past data to predict future trends, while GA is an algorithm that can find better subsets of input variables for importing into the ANN, enabling more accurate prediction through efficient feature selection. The input data were technical indicators highly regarded by stock analysts, each represented by 4 input variables based on past time spans of 4 different lengths: 3-, 5-, 10-, and 15-day spans before the day of prediction. This generated a large set of diverse input variables with an exponentially higher number of possible subsets, which the GA culled down to a manageable number of more effective ones. SET50 index data of the past 6 years, from 2009 to 2014, were used to evaluate the prediction accuracy of this hybrid intelligence approach, and its predictions were found to be more accurate than those made by a method using only one input variable over one fixed length of past time span. PMID:27974883
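The GA-over-input-subsets idea can be sketched as follows. This is a deliberately small stand-in: the fitness function uses a least-squares classifier on synthetic data rather than an ANN on SET50 indicators, and all population sizes and rates are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in for lagged technical indicators: 8 candidate inputs,
# of which only features 0 and 5 actually drive the "trend" label.
X = rng.normal(size=(300, 8))
y = (X[:, 0] + 0.5 * X[:, 5] > 0).astype(int)

def fitness(mask):
    """Training accuracy of a least-squares classifier on the subset."""
    if mask.sum() == 0:
        return 0.0
    Xs = X[:, mask.astype(bool)]
    w = np.linalg.lstsq(Xs, 2 * y - 1, rcond=None)[0]
    return float(np.mean((Xs @ w > 0) == y))

pop = rng.integers(0, 2, (20, 8))                 # population of bitmasks
for _ in range(30):
    scores = np.array([fitness(m) for m in pop])
    parents = pop[np.argsort(scores)[::-1][:10]]  # elitist selection
    kids = parents.copy()
    for i in range(0, 10, 2):                     # one-point crossover
        c = int(rng.integers(1, 8))
        kids[i, c:], kids[i + 1, c:] = parents[i + 1, c:].copy(), parents[i, c:].copy()
    flip = rng.random(kids.shape) < 0.05          # bit-flip mutation
    kids = np.where(flip, 1 - kids, kids)
    pop = np.vstack([parents, kids])

best = pop[np.argmax([fitness(m) for m in pop])]
```

Because the top half of each generation is carried over unchanged, the best fitness is monotone non-decreasing, mirroring how the GA narrows the exponentially large space of indicator subsets.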
Two SPSS programs for interpreting multiple regression results.
Lorenzo-Seva, Urbano; Ferrando, Pere J; Chico, Eliseo
2010-02-01
When multiple regression is used in explanation-oriented designs, it is very important to determine both the usefulness of the predictor variables and their relative importance. Standardized regression coefficients are routinely provided by commercial programs. However, they generally function rather poorly as indicators of relative importance, especially in the presence of substantially correlated predictors. We provide two user-friendly SPSS programs that implement currently recommended techniques and recent developments for assessing the relevance of the predictors. The programs also allow the user to take into account the effects of measurement error. The first program, MIMR-Corr.sps, uses a correlation matrix as input, whereas the second program, MIMR-Raw.sps, uses the raw data and computes bootstrap confidence intervals of different statistics. The SPSS syntax, a short manual, and data files related to this article are available as supplemental materials from http://brm.psychonomic-journals.org/content/supplemental.
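The point about standardized coefficients misbehaving under correlated predictors is easy to reproduce outside SPSS. In this sketch (synthetic data, not the programs' actual output), a predictor with a substantial zero-order correlation receives a near-zero standardized beta because a collinear predictor absorbs the shared variance:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
x1 = rng.normal(size=n)
x2 = 0.95 * x1 + np.sqrt(1 - 0.95**2) * rng.normal(size=n)  # collinear with x1
y = x1 + rng.normal(size=n)                                 # only x1 is causal

Z = np.column_stack([x1, x2])
Z = (Z - Z.mean(axis=0)) / Z.std(axis=0)
ys = (y - y.mean()) / y.std()

beta = np.linalg.lstsq(Z, ys, rcond=None)[0]   # standardized coefficients
r = np.array([np.corrcoef(Z[:, j], ys)[0, 1] for j in range(2)])
# x2 looks important by its zero-order correlation r[1], yet its
# standardized beta is near zero because x1 absorbs the shared variance.
```

This divergence between zero-order correlations and betas is exactly why relative-importance techniques beyond standardized coefficients are recommended.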
Estuary-ocean connectivity: fast physics, slow biology.
Raimonet, Mélanie; Cloern, James E
2017-06-01
Estuaries are connected to both land and ocean so their physical, chemical, and biological dynamics are influenced by climate patterns over watersheds and ocean basins. We explored climate-driven oceanic variability as a source of estuarine variability by comparing monthly time series of temperature and chlorophyll-a inside San Francisco Bay with those in adjacent shelf waters of the California Current System (CCS) that are strongly responsive to wind-driven upwelling. Monthly temperature fluctuations inside and outside the Bay were synchronous, but their correlations weakened with distance from the ocean. These results illustrate how variability of coastal water temperature (and associated properties such as nitrate and oxygen) propagates into estuaries through fast water exchanges that dissipate along the estuary. Unexpectedly, there was no correlation between monthly chlorophyll-a variability inside and outside the Bay. However, at the annual scale Bay chlorophyll-a was significantly correlated with the Spring Transition Index (STI) that sets biological production supporting fish recruitment in the CCS. Wind forcing of the CCS shifted in the late 1990s when the STI advanced 40 days. This shift was followed, with lags of 1-3 years, by 3- to 19-fold increased abundances of five ocean-produced demersal fish and crustaceans and 2.5-fold increase of summer chlorophyll-a in the Bay. These changes reflect a slow biological process of estuary-ocean connectivity operating through the immigration of fish and crustaceans that prey on bivalves, reduce their grazing pressure, and allow phytoplankton biomass to build. We identified clear signals of climate-mediated oceanic variability in this estuary and discovered that the response patterns vary with the process of connectivity and the timescale of ocean variability. 
This result has important implications for managing nutrient inputs to estuaries connected to upwelling systems, and for assessing their responses to changing patterns of upwelling timing and intensity as the planet continues to warm. © 2016 Published by John Wiley & Sons Ltd This article has been contributed to by US Government employees and their work is in the public domain in the USA.
The human motor neuron pools receive a dominant slow‐varying common synaptic input
Negro, Francesco; Yavuz, Utku Şükrü
2016-01-01
Key points: Motor neurons in a pool receive both common and independent synaptic inputs, although the proportion and role of their common synaptic input is debated. Classic correlation techniques between motor unit spike trains do not measure the absolute proportion of common input and have limitations as a result of the non‐linearity of motor neurons. We propose a method that for the first time allows an accurate quantification of the absolute proportion of low frequency common synaptic input (<5 Hz) to motor neurons in humans. We applied the proposed method to three human muscles and determined experimentally that they receive a similar large amount (>60%) of common input, irrespective of their different functional and control properties. These results increase our knowledge about the role of common and independent input to motor neurons in force control. Abstract: Motor neurons receive both common and independent synaptic inputs. This observation is classically based on the presence of a significant correlation between pairs of motor unit spike trains. The functional significance of different relative proportions of common input across muscles, individuals and conditions is still debated. One of the limitations in our understanding of correlated input to motor neurons is that it has not been possible so far to quantify the absolute proportion of common input with respect to the total synaptic input received by the motor neurons. Indeed, correlation measures of pairs of output spike trains only allow for relative comparisons. In the present study, we report for the first time an approach for measuring the proportion of common input in the low frequency bandwidth (<5 Hz) to a motor neuron pool in humans. This estimate is based on a phenomenological model and the theoretical fitting of the experimental values of coherence between the permutations of groups of motor unit spike trains. We demonstrate the validity of this theoretical estimate with several simulations. 
Moreover, we applied this method to three human muscles: the abductor digiti minimi, tibialis anterior and vastus medialis. Despite these muscles having different functional roles and control properties, as confirmed by the results of the present study, we estimate that their motor pools receive a similar and large (>60%) proportion of common low frequency oscillations with respect to their total synaptic input. These results suggest that the central nervous system provides a large amount of common input to motor neuron pools, in a similar way to that for muscles with different functional and control properties. PMID:27151459
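The notion of low-frequency common input producing coherence between composite signals can be illustrated generically. This is not the authors' phenomenological model or permutation procedure; it simply shows two signals sharing a common drive exhibiting high coherence, with assumed noise levels:

```python
import numpy as np
from scipy.signal import coherence

rng = np.random.default_rng(6)
fs, n = 100.0, 20_000
common = rng.normal(size=n)                 # shared drive to both groups
g1 = common + 0.5 * rng.normal(size=n)      # group 1: common + independent
g2 = common + 0.5 * rng.normal(size=n)      # group 2: common + independent
f, coh = coherence(g1, g2, fs=fs, nperseg=1024)
low_coh = float(coh[f < 5].mean())          # mean coherence below 5 Hz
```

With common-input variance 1 and independent-input variance 0.25, the theoretical coherence is (1/1.25)² = 0.64, and the Welch estimate lands close to it; the paper's contribution is turning such coherence values into an absolute proportion of common input.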
Multiwavelength variability properties of Fermi blazar S5 0716+714
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liao, N. H.; Bai, J. M.; Liu, H. T.
S5 0716+714 is a typical BL Lacertae object. In this paper we present the analysis and results of long-term simultaneous observations in the radio, near-infrared, optical, X-ray, and γ-ray bands, together with our own photometric observations for this source. The light curves show that the variability amplitudes in γ-ray and optical bands are larger than those in the hard X-ray and radio bands and that the spectral energy distribution (SED) peaks move to shorter wavelengths when the source becomes brighter, which is similar to other blazars, i.e., more variable at wavelengths shorter than the SED peak frequencies. Analysis shows that the characteristic variability timescales in the 14.5 GHz, the optical, the X-ray, and the γ-ray bands are comparable to each other. The variations of the hard X-ray and 14.5 GHz emissions are correlated with zero lag, and so are the V band and γ-ray variations, which are consistent with the leptonic models. Coincidences of γ-ray and optical flares with a dramatic change of the optical polarization are detected. Hadronic models do not have the same natural explanation for these observations as the leptonic models. A strong optical flare correlating a γ-ray flare whose peak flux is lower than the average flux is detected. The leptonic model can explain this variability phenomenon through simultaneous SED modeling. Different leptonic models are distinguished by average SED modeling. The synchrotron plus synchrotron self-Compton (SSC) model is ruled out because of the extreme input parameters. Scattering of external seed photons, such as the hot-dust or broad-line region emission, and the SSC process are probably both needed to explain the γ-ray emission of S5 0716+714.
Groundwater Variability in a Sandstone Catchment and Linkages with Large-scale Climatic Circulation
NASA Astrophysics Data System (ADS)
Hannah, D. M.; Lavers, D. A.; Bradley, C.
2015-12-01
Groundwater is a crucial water resource that sustains river ecosystems and provides public water supply. Furthermore, during periods of prolonged high rainfall, groundwater-dominated catchments can be subject to protracted flooding. Climate change and associated projected increases in the frequency and intensity of hydrological extremes have implications for groundwater levels. This study builds on previous research undertaken on a Chalk catchment by investigating groundwater variability in a UK sandstone catchment: the Tern in Shropshire. In contrast to the Chalk, sandstone is characterised by a more lagged response to precipitation inputs; as such, it is important to determine the groundwater behaviour and its links with the large-scale climatic circulation to improve process understanding of recharge, groundwater level and river flow responses to hydroclimatological drivers. Precipitation, river discharge and groundwater levels for borehole sites in the Tern basin over 1974-2010 are analysed as the target variables; and we use monthly gridded reanalysis data from the Twentieth Century Reanalysis Project (20CR). First, groundwater variability is evaluated and associations with precipitation and discharge are explored using monthly concurrent and lagged correlation analyses. Second, gridded 20CR reanalysis data are used in composite and correlation analyses to identify the regions of strongest climate-groundwater association. Results show that reasonably strong climate-groundwater connections exist in the Tern basin, with a lag of several months. These lags are associated primarily with the time taken for recharge waters to percolate through to the groundwater table. The patterns uncovered improve knowledge of large-scale climate forcing of groundwater variability and may provide a basis to inform seasonal prediction of groundwater levels, which would be useful for strategic water resource planning.
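The lagged correlation analysis described above can be sketched generically. The data here are synthetic (a driver series and a response delayed by three steps), not the Tern basin records:

```python
import numpy as np

def lagged_corr(x, y, max_lag):
    """Pearson correlation of y against x for lags 0..max_lag steps."""
    out = [np.corrcoef(x, y)[0, 1]]
    for lag in range(1, max_lag + 1):
        out.append(np.corrcoef(x[:-lag], y[lag:])[0, 1])
    return np.array(out)

# Synthetic driver/response pair: the response follows the driver by 3 steps.
rng = np.random.default_rng(2)
rain = rng.normal(size=240)                       # 20 years of monthly data
gw = np.roll(rain, 3) + 0.1 * rng.normal(size=240)
corrs = lagged_corr(rain, gw, 6)
best_lag = int(np.argmax(corrs))
```

The lag at which the correlation peaks is the simplest estimate of the percolation delay between precipitation input and groundwater response.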
Novel approach for streamflow forecasting using a hybrid ANFIS-FFA model
NASA Astrophysics Data System (ADS)
Yaseen, Zaher Mundher; Ebtehaj, Isa; Bonakdari, Hossein; Deo, Ravinesh C.; Danandeh Mehr, Ali; Mohtar, Wan Hanna Melini Wan; Diop, Lamine; El-shafie, Ahmed; Singh, Vijay P.
2017-11-01
The present study proposes a new hybrid evolutionary Adaptive Neuro-Fuzzy Inference System (ANFIS) approach for monthly streamflow forecasting. The proposed method is a novel combination of the ANFIS model with the firefly algorithm as an optimizer tool to construct a hybrid ANFIS-FFA model. The results of the ANFIS-FFA model are compared with those of the classical ANFIS model, which utilizes fuzzy c-means (FCM) clustering in the Fuzzy Inference System (FIS) generation. The historical monthly streamflow data for the Pahang River, a major river system in Malaysia characterized by highly stochastic hydrological patterns, are used in the study. Sixteen different input combinations with one to five time-lagged input variables are incorporated into the ANFIS-FFA and ANFIS models to consider the antecedent seasonal variations in historical streamflow data. The mean absolute error (MAE), root mean square error (RMSE) and correlation coefficient (r) are used to evaluate the forecasting performance of the ANFIS-FFA model. In conjunction with these metrics, the refined Willmott's Index (Drefined), Nash-Sutcliffe coefficient (ENS) and Legates and McCabe's Index (ELM) are also utilized as the normalized goodness-of-fit metrics. Comparison of the results reveals that the FFA is able to improve the forecasting accuracy of the hybrid ANFIS-FFA model (r = 1; RMSE = 0.984; MAE = 0.364; ENS = 1; ELM = 0.988; Drefined = 0.994) applied for the monthly streamflow forecasting in comparison with the traditional ANFIS model (r = 0.998; RMSE = 3.276; MAE = 1.553; ENS = 0.995; ELM = 0.950; Drefined = 0.975). The results also show that the ANFIS-FFA model is not only superior to the ANFIS model but also exhibits a parsimonious modelling framework for streamflow forecasting, incorporating a smaller number of input variables while yielding comparatively better performance. 
It is construed that the FFA optimizer can thus surpass the accuracy of the traditional ANFIS model in general, and is able to remove the inaccurately forecasted values produced by the ANFIS model for extremely low flows. The present results have wider implications not only for streamflow forecasting, but also for other hydro-meteorological forecasting variables requiring only historical input data, attaining a greater level of predictive accuracy through the incorporation of the FFA algorithm as an optimization tool in an ANFIS model.
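The goodness-of-fit metrics used in the comparison (RMSE, MAE, r, ENS, ELM) follow standard definitions and can be computed directly; the sample values below are made up for illustration:

```python
import numpy as np

def forecast_metrics(obs, sim):
    """RMSE, MAE, r, Nash-Sutcliffe (ENS) and Legates-McCabe (ELM),
    following their standard definitions."""
    err = sim - obs
    dev = obs - obs.mean()
    return {
        "rmse": float(np.sqrt(np.mean(err**2))),
        "mae": float(np.mean(np.abs(err))),
        "r": float(np.corrcoef(obs, sim)[0, 1]),
        "ens": float(1 - np.sum(err**2) / np.sum(dev**2)),
        "elm": float(1 - np.sum(np.abs(err)) / np.sum(np.abs(dev))),
    }

obs = np.array([3.0, 5.0, 4.0, 6.0, 7.0])
perfect = forecast_metrics(obs, obs.copy())     # ideal model: ENS = ELM = 1
biased = forecast_metrics(obs, obs + 1.0)       # constant +1 bias
```

Note that r is insensitive to a constant bias (it stays at 1 for the biased forecast), which is why the normalized metrics ENS and ELM are reported alongside it.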
Achromatical Optical Correlator
NASA Technical Reports Server (NTRS)
Chao, Tien-Hsin; Liu, Hua-Kuang
1989-01-01
Signal-to-noise ratio exceeds that of monochromatic correlator. Achromatical optical correlator uses multiple-pinhole diffraction of dispersed white light to form superposed multiple correlations of input and reference images in output plane. Set of matched spatial filters made by multiple-exposure holographic process, each exposure using suitably-scaled input image and suitable angle of reference beam. Recording-aperture mask translated to appropriate horizontal position for each exposure. Noncoherent illumination suitable for applications involving recognition of color and determination of scale. When fully developed, achromatical correlators will be useful for recognition of patterns; for example, in industrial inspection and search for selected features in aerial photographs.
Parsons, Jessica E; Cain, Charles A; Fowlkes, J Brian
2007-03-01
Spatial variability in acoustic backscatter is investigated as a potential feedback metric for assessment of lesion morphology during cavitation-mediated mechanical tissue disruption ("histotripsy"). A 750-kHz annular array was aligned confocally with a 4.5 MHz passive backscatter receiver during ex vivo insonation of porcine myocardium. Various exposure conditions were used to elicit a range of damage morphologies and backscatter characteristics [pulse duration = 14 μs, pulse repetition frequency (PRF) = 0.07-3.1 kHz, average I_SPPA = 22-44 kW/cm²]. Variability in backscatter spatial localization was quantified by tracking the lag required to achieve peak correlation between sequential RF A-lines received. Mean spatial variability was observed to be significantly higher when damage morphology consisted of mechanically disrupted tissue homogenate versus mechanically intact coagulation necrosis (2.35 ± 1.59 mm versus 0.067 ± 0.054 mm, p < 0.025). Statistics from these variability distributions were used as the basis for selecting a threshold variability level to identify the onset of homogenate formation via an abrupt, sustained increase in spatially dynamic backscatter activity. Specific indices indicative of the state of the homogenization process were quantified as a function of acoustic input conditions. The prevalence of backscatter spatial variability was observed to scale with the amount of homogenate produced for various PRFs and acoustic intensities.
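The lag-tracking feedback metric can be sketched as a normalized cross-correlation between sequential A-lines; the signals below are synthetic stand-ins for RF data:

```python
import numpy as np

def peak_lag(a, b):
    """Sample lag at which the normalized cross-correlation of two
    A-lines peaks; jumps in this lag between sequential lines flag
    spatially dynamic backscatter."""
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    xc = np.correlate(b, a, mode="full")
    return int(np.argmax(xc)) - (len(a) - 1)

rng = np.random.default_rng(3)
line1 = rng.normal(size=256)
line2 = np.roll(line1, 5)        # same speckle pattern, shifted 5 samples
lag = peak_lag(line1, line2)
```

Stable scatterers yield near-zero lags between sequential lines, while mechanically disrupted homogenate produces large, erratic lags, which is the basis of the thresholding described in the abstract.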
NASA Technical Reports Server (NTRS)
Tumer, Kagan; Oza, Nikunj C.; Clancy, Daniel (Technical Monitor)
2001-01-01
Using an ensemble of classifiers instead of a single classifier has been shown to improve generalization performance in many pattern recognition problems. However, the extent of such improvement depends greatly on the amount of correlation among the errors of the base classifiers. Therefore, reducing those correlations while keeping the classifiers' performance levels high is an important area of research. In this article, we explore input decimation (ID), a method which selects feature subsets for their ability to discriminate among the classes and uses them to decouple the base classifiers. We provide a summary of the theoretical benefits of correlation reduction, along with results of our method on two underwater sonar data sets, three benchmarks from the Proben1/UCI repositories, and two synthetic data sets. The results indicate that input decimated ensembles (IDEs) outperform ensembles whose base classifiers use all the input features; randomly selected subsets of features; and features created using principal components analysis, on a wide range of domains.
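The feature-selection step of input decimation can be sketched as ranking features by their association with the class label. This toy version (synthetic data, with correlation magnitude as an assumed discriminability measure) recovers the two informative features:

```python
import numpy as np

rng = np.random.default_rng(4)
X = rng.normal(size=(500, 10))
y = (X[:, 0] + X[:, 1] > 0).astype(int)   # only features 0 and 1 matter

def decimate(X, y, k):
    """Keep the k features most correlated (in magnitude) with the label."""
    corr = np.abs([np.corrcoef(X[:, j], y)[0, 1] for j in range(X.shape[1])])
    return np.argsort(corr)[::-1][:k]

keep = decimate(X, y, 2)
```

In the full method, different base classifiers are trained on different class-specific subsets, which decorrelates their errors while keeping each classifier accurate.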
INFANT HEALTH PRODUCTION FUNCTIONS: WHAT A DIFFERENCE THE DATA MAKE
Reichman, Nancy E.; Corman, Hope; Noonan, Kelly; Dave, Dhaval
2008-01-01
SUMMARY: We examine the extent to which infant health production functions are sensitive to model specification and measurement error. We focus on the importance of typically unobserved but theoretically important variables (typically unobserved variables, TUVs), other non-standard covariates (NSCs), input reporting, and characterization of infant health. The TUVs represent wantedness, taste for risky behavior, and maternal health endowment. The NSCs include father characteristics. We estimate the effects of prenatal drug use, prenatal cigarette smoking, and first-trimester prenatal care on birth weight, low birth weight, and a measure of abnormal infant health conditions. We compare estimates using self-reported inputs versus input measures that combine information from medical records and self-reports. We find that TUVs and NSCs are significantly associated with both inputs and outcomes, but that excluding them from infant health production functions does not appreciably affect the input estimates. However, using self-reported inputs leads to overestimated effects of inputs, particularly prenatal care, on outcomes, and using a direct measure of infant health does not always yield input estimates similar to those when using birth weight outcomes. The findings have implications for research, data collection, and public health policy. PMID:18792077
Synchronization properties of coupled chaotic neurons: The role of random shared input
NASA Astrophysics Data System (ADS)
Kumar, Rupesh; Bilal, Shakir; Ramaswamy, Ram
2016-06-01
Spike-time correlations of neighbouring neurons depend on their intrinsic firing properties as well as on the inputs they share. Studies have shown that periodically firing neurons, when subjected to random shared input, exhibit asynchronicity. Here, we study the effect of random shared input on the synchronization of weakly coupled chaotic neurons. The cases of so-called electrical and chemical coupling are both considered, and we observe a wide range of synchronization behaviour. When subjected to identical shared random input, there is a decrease in the threshold coupling strength needed for chaotic neurons to synchronize in-phase. The system also supports lag-synchronous states, and for these, we find that shared input can cause desynchronization. We carry out a master stability function analysis for a network of such neurons and show agreement with the numerical simulations. The contrasting role of shared random input for complete and lag synchronized neurons is useful in understanding spike-time correlations observed in many areas of the brain.
Rausch, Annika; Zhang, Wei; Haak, Koen V; Mennes, Maarten; Hermans, Erno J; van Oort, Erik; van Wingen, Guido; Beckmann, Christian F; Buitelaar, Jan K; Groen, Wouter B
2016-01-01
Amygdala dysfunction is hypothesized to underlie the social deficits observed in autism spectrum disorders (ASD). However, the neurobiological basis of this hypothesis is underspecified because it is unknown whether ASD relates to abnormalities of the amygdaloid input or output nuclei. Here, we investigated the functional connectivity of the amygdaloid social-perceptual input nuclei and emotion-regulation output nuclei in ASD versus controls. We collected resting state functional magnetic resonance imaging (fMRI) data, tailored to provide optimal sensitivity in the amygdala as well as the neocortex, in 20 adolescents and young adults with ASD and 25 matched controls. We performed a regular correlation analysis between the entire amygdala (EA) and the whole brain and used a partial correlation analysis to investigate whole-brain functional connectivity uniquely related to each of the amygdaloid subregions. Between-group comparison of regular EA correlations showed significantly reduced connectivity in visuospatial and superior parietal areas in ASD compared to controls. Partial correlation analysis revealed that this effect was driven by the left superficial and right laterobasal input subregions, but not the centromedial output nuclei. These results indicate reduced connectivity of specifically the amygdaloid sensory input channels in ASD, suggesting that abnormal amygdalo-cortical connectivity can be traced down to the socio-perceptual pathways.
Thomas, R.E.
1959-01-20
An electronic circuit is presented for automatically computing the product of two selected variables by multiplying the voltage pulses proportional to the variables. The multiplier circuit has a plurality of parallel resistors of predetermined values connected through separate gate circuits between a first input and the output terminal. One voltage pulse is applied to the first input while the second voltage pulse is applied to control circuitry for the respective gate circuits. The magnitude of the second voltage pulse selects the resistors upon which the first voltage pulse is impressed, whereby the resultant output voltage is proportional to the product of the input voltage pulses.
NASA Astrophysics Data System (ADS)
Torréton, Jean-Pascal; Rochelle-Newall, Emma; Jouon, Aymeric; Faure, Vincent; Jacquet, Séverine; Douillet, Pascal
2007-09-01
Hydrodynamic modeling can be used to spatially characterize water renewal rates in coastal ecosystems. Using a hydrodynamic model implemented over the semi-enclosed Southwest coral lagoon of New Caledonia, a recent study computed the flushing lag as the minimum time required for a particle coming from outside the lagoon (open ocean) to reach a specific station [Jouon, A., Douillet, P., Ouillon, S., Fraunié, P., 2006. Calculations of hydrodynamic time parameters in a semi-opened coastal zone using a 3D hydrodynamic model. Continental Shelf Research 26, 1395-1415]. Local e-flushing time was calculated as the time required for the concentration in a local grid mesh to fall to 1/e of its value at the preceding step. Here we present an attempt to connect physical forcing to biogeochemical functioning of this coastal ecosystem. An array of stations, located in the lagoonal channel as well as in several bays under anthropogenic influence, was sampled during three cruises. We then tested the statistical relationships between the distribution of flushing indices and those of biological and chemical variables. Among the variables tested, silicate, chlorophyll a and bacterial biomass production present the highest correlations with flushing indices. Correlations are higher with local e-flushing times than with flushing lags or the sum of these two indices. In the bays, these variables often deviate from the relationships determined in the main lagoon channel. In the three bays receiving significant riverine inputs, silicate is well above the regression line, whereas data from the bay receiving almost insignificant freshwater inputs generally fit the lagoon channel regressions. Moreover, in the three bays receiving substantial urban and industrial effluents, chlorophyll a and bacterial production of biomass generally display values exceeding the lagoon channel regression trends, whereas in the bay under moderate anthropogenic influence values follow the regressions obtained in the lagoon channel.
The South West lagoon of New Caledonia can hence be viewed as a coastal mesotrophic ecosystem that is flushed by oligotrophic oceanic waters which subsequently replace the lagoonal waters with water considerably impoverished in resources for microbial growth. This flushing was high enough during the periods of study to influence the distribution of phytoplankton biomass, bacterial production of biomass and silicate concentrations in the lagoon channel as well as in some of the bay areas.
The anatomy of clinical decision-making in multidisciplinary cancer meetings
Soukup, Tayana; Petrides, Konstantinos V.; Lamb, Benjamin W.; Sarkar, Somita; Arora, Sonal; Shah, Sujay; Darzi, Ara; Green, James S. A.; Sevdalis, Nick
2016-01-01
Abstract In the UK, treatment recommendations for patients with cancer are routinely made by multidisciplinary teams in weekly meetings. However, their performance is variable. The aim of this study was to explore the underlying structure of multidisciplinary decision-making process, and examine how it relates to team ability to reach a decision. This is a cross-sectional observational study consisting of 1045 patient reviews across 4 multidisciplinary cancer teams from teaching and community hospitals in London, UK, from 2010 to 2014. Meetings were chaired by surgeons. We used a validated observational instrument (Metric for the Observation of Decision-making in Cancer Multidisciplinary Meetings) consisting of 13 items to assess the decision-making process of each patient discussion. Rated on a 5-point scale, the items measured quality of presented patient information, and contributions to review by individual disciplines. A dichotomous outcome (yes/no) measured team ability to reach a decision. Ratings were submitted to Exploratory Factor Analysis and regression analysis. The exploratory factor analysis produced 4 factors, labeled “Holistic and Clinical inputs” (patient views, psychosocial aspects, patient history, comorbidities, oncologists’, nurses’, and surgeons’ inputs), “Radiology” (radiology results, radiologists’ inputs), “Pathology” (pathology results, pathologists’ inputs), and “Meeting Management” (meeting chairs’ and coordinators’ inputs). A negative cross-loading was observed from surgeons’ input on the fourth factor with a follow-up analysis showing negative correlation (r = −0.19, P < 0.001). In logistic regression, all 4 factors predicted team ability to reach a decision (P < 0.001). Hawthorne effect is the main limitation of the study. The decision-making process in cancer meetings is driven by 4 underlying factors representing the complete patient profile and contributions to case review by all core disciplines. 
Evidence of dual-task interference was observed in relation to the meeting chairs’ input and their corresponding surgical input into case reviews. PMID:27310981
Rispoli, Matthew; Holt, Janet K.
2017-01-01
Purpose This follow-up study examined whether a parent intervention that increased the diversity of lexical noun phrase subjects in parent input and accelerated children's sentence diversity (Hadley et al., 2017) had indirect benefits on tense/agreement (T/A) morphemes in parent input and children's spontaneous speech. Method Differences in input variables related to T/A marking were compared for parents who received toy talk instruction and a quasi-control group: input informativeness and full "is" declaratives. Language growth on tense agreement productivity (TAP) was modeled for 38 children from language samples obtained at 21, 24, 27, and 30 months. Parent input properties following instruction and children's growth in lexical diversity and sentence diversity were examined as predictors of TAP growth. Results Instruction increased parent use of full "is" declaratives (ηp² ≥ .25) but not input informativeness. Children's sentence diversity was also a significant time-varying predictor of TAP growth. Two input variables, lexical noun phrase subject diversity and full "is" declaratives, were also significant predictors, even after controlling for children's sentence diversity. Conclusions These findings establish a link between children's sentence diversity and the development of T/A morphemes and provide evidence about characteristics of input that facilitate growth in this grammatical system. PMID:28892819
NASA Astrophysics Data System (ADS)
Wang, Huiqin; Wang, Xue; Cao, Minghua
2017-02-01
The spatial correlation extensively exists in the multiple-input multiple-output (MIMO) free space optical (FSO) communication systems due to the channel fading and the antenna space limitation. Wilkinson's method was utilized to investigate the impact of spatial correlation on the MIMO FSO communication system employing multipulse pulse-position modulation. Simulation results show that the existence of spatial correlation reduces the ergodic channel capacity, and the reception diversity is more competent to resist this kind of performance degradation.
Wesolowski, Edwin A.
1996-01-01
Two separate studies to simulate the effects of discharging treated wastewater to the Red River of the North at Fargo, North Dakota, and Moorhead, Minnesota, have been completed. In the first study, the Red River at Fargo Water-Quality Model was calibrated and verified for ice-free conditions. In the second study, the Red River at Fargo Ice-Cover Water-Quality Model was verified for ice-cover conditions. To better understand and apply the Red River at Fargo Water-Quality Model and the Red River at Fargo Ice-Cover Water-Quality Model, the uncertainty associated with simulated constituent concentrations and property values was analyzed and quantified using the Enhanced Stream Water Quality Model-Uncertainty Analysis. The Monte Carlo simulation and first-order error analysis methods were used to analyze the uncertainty in simulated values for six constituents and properties at sites 5, 10, and 14 (upstream to downstream order). The constituents and properties analyzed for uncertainty are specific conductance, total organic nitrogen (reported as nitrogen), total ammonia (reported as nitrogen), total nitrite plus nitrate (reported as nitrogen), 5-day carbonaceous biochemical oxygen demand for ice-cover conditions and ultimate carbonaceous biochemical oxygen demand for ice-free conditions, and dissolved oxygen. Results are given in detail for both the ice-cover and ice-free conditions for specific conductance, total ammonia, and dissolved oxygen. The sensitivity and uncertainty of the simulated constituent concentrations and property values to input variables differ substantially between ice-cover and ice-free conditions. During ice-cover conditions, simulated specific-conductance values are most sensitive to the headwater-source specific-conductance values upstream of site 10 and the point-source specific-conductance values downstream of site 10. These headwater-source and point-source specific-conductance values also are the key sources of uncertainty.
Simulated total ammonia concentrations are most sensitive to the point-source total ammonia concentrations at all three sites. Other input variables that contribute substantially to the variability of simulated total ammonia concentrations are the headwater-source total ammonia and the instream reaction coefficient for biological decay of total ammonia to total nitrite. Simulated dissolved-oxygen concentrations at all three sites are most sensitive to headwater-source dissolved-oxygen concentration. This input variable is the key source of variability for simulated dissolved-oxygen concentrations at sites 5 and 10. Headwater-source and point-source dissolved-oxygen concentrations are the key sources of variability for simulated dissolved-oxygen concentrations at site 14. During ice-free conditions, simulated specific-conductance values at all three sites are most sensitive to the headwater-source specific-conductance values. Headwater-source specific-conductance values also are the key source of uncertainty. The input variables to which total ammonia and dissolved oxygen are most sensitive vary from site to site and may or may not correspond to the input variables that contribute the most to the variability. The input variables that contribute the most to the variability of simulated total ammonia concentrations are point-source total ammonia, instream reaction coefficient for biological decay of total ammonia to total nitrite, and Manning's roughness coefficient. The input variables that contribute the most to the variability of simulated dissolved-oxygen concentrations are reaeration rate, sediment oxygen demand rate, and headwater-source algae as chlorophyll a.
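The pairing of Monte Carlo simulation with first-order variance apportionment described above can be illustrated with a toy dissolved-oxygen balance. The expression, the input distributions, and every parameter value below are invented for illustration; they are not the calibrated Red River model.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 20000

# Hypothetical steady-state DO balance (illustration only):
# DO_out = DO_sat - (DO_sat - DO_in) * exp(-ka*t) - SOD * t
def do_out(do_in, ka, sod, do_sat=9.0, t=2.0):
    return do_sat - (do_sat - do_in) * np.exp(-ka * t) - sod * t

# Assumed input distributions (means and SDs are made up).
do_in = rng.normal(7.0, 0.8, N)   # headwater-source DO, mg/L
ka    = rng.normal(0.4, 0.05, N)  # reaeration rate, 1/day
sod   = rng.normal(0.5, 0.1, N)   # sediment oxygen demand, mg/L/day

total_var = do_out(do_in, ka, sod).var()

# First-order contribution of each input: vary it alone, freeze the
# others at their means, and measure the resulting output variance.
contrib = {
    "headwater DO": do_out(do_in, 0.4, 0.5).var(),
    "reaeration":   do_out(7.0, ka, 0.5).var(),
    "SOD":          do_out(7.0, 0.4, sod).var(),
}
for name, v in contrib.items():
    print(f"{name}: {100 * v / total_var:.1f}% of output variance")
```

Because the toy model is nearly linear, the three first-order contributions sum to approximately the total output variance, mirroring how the report attributes simulated-value variability to individual input variables.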
Kernel-PCA data integration with enhanced interpretability
2014-01-01
Background Nowadays, combining the different sources of information to improve the biological knowledge available is a challenge in bioinformatics. One of the most powerful methods for integrating heterogeneous data types is the family of kernel-based methods. Kernel-based data integration approaches consist of two basic steps: firstly the right kernel is chosen for each data set; secondly the kernels from the different data sources are combined to give a complete representation of the available data for a given statistical task. Results We analyze the integration of data from several sources of information using kernel PCA, from the point of view of reducing dimensionality. Moreover, we improve the interpretability of kernel PCA by adding to the plot a representation of the input variables from each dataset. In particular, for each input variable or linear combination of input variables, we can represent the direction of maximum growth locally, which allows us to identify those samples with higher/lower values of the variables analyzed. Conclusions The integration of different datasets and the simultaneous representation of samples and variables together give us a better understanding of biological knowledge. PMID:25032747
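A minimal kernel-PCA projection of the kind described can be written directly in NumPy: build a kernel on one data source, double-center it, and project samples onto the leading kernel principal components. The data size and the RBF bandwidth are assumptions for the sketch.

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(80, 5))   # one data source; others would be combined via their kernels

def rbf_kernel(X, gamma=0.2):
    """Gaussian (RBF) kernel matrix, an assumed choice of kernel."""
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

K = rbf_kernel(X)
n = K.shape[0]
H = np.eye(n) - np.ones((n, n)) / n   # centering matrix
Kc = H @ K @ H                        # kernel centered in feature space

vals, vecs = np.linalg.eigh(Kc)
order = np.argsort(vals)[::-1]        # eigh returns ascending order
vals, vecs = vals[order], vecs[:, order]

# Sample coordinates on the first two kernel principal components.
proj = vecs[:, :2] * np.sqrt(np.maximum(vals[:2], 0))
print(proj.shape)  # (80, 2)
```

Integration of several sources would replace `K` by a combination (e.g. a sum) of per-dataset kernels before centering; overlaying input-variable growth directions on the `proj` plot is the interpretability step the abstract adds.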
NASA Astrophysics Data System (ADS)
Forsythe, N.; Blenkinsop, S.; Fowler, H. J.
2015-05-01
A three-step climate classification was applied to a spatial domain covering the Himalayan arc and adjacent plains regions using input data from four global meteorological reanalyses. Input variables were selected based on an understanding of the climatic drivers of regional water resource variability and crop yields. Principal component analysis (PCA) of those variables and k-means clustering on the PCA outputs revealed a reanalysis ensemble consensus for eight macro-climate zones. Spatial statistics of input variables for each zone revealed consistent, distinct climatologies. This climate classification approach has potential for enhancing assessment of climatic influences on water resources and food security as well as for characterising the skill and bias of gridded data sets, both meteorological reanalyses and climate models, for reproducing subregional climatologies. Through their spatial descriptors (area, geographic centroid, elevation mean range), climate classifications also provide metrics, beyond simple changes in individual variables, with which to assess the magnitude of projected climate change. Such sophisticated metrics are of particular interest for regions, including mountainous areas, where natural and anthropogenic systems are expected to be sensitive to incremental climate shifts.
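The three-step classification (standardize the input variables, PCA, then k-means on the PC scores) can be sketched on synthetic data. The number of zones, the blob means, and the tiny k-means routine are assumptions for the sketch, not the study's configuration.

```python
import numpy as np

rng = np.random.default_rng(3)
# Synthetic "grid cells x climate variables" matrix; real inputs would be
# variables drawn from the meteorological reanalyses described above.
Xraw = np.vstack([rng.normal(m, 1.0, size=(100, 4)) for m in (0.0, 4.0, 8.0)])

# Step 1: standardize each input variable.
X = (Xraw - Xraw.mean(0)) / Xraw.std(0)

# Step 2: PCA via SVD; keep the leading component scores.
U, S, Vt = np.linalg.svd(X, full_matrices=False)
scores = U[:, :2] * S[:2]

# Step 3: k-means clustering on the PCA outputs.
def kmeans(P, k, iters=50, seed=0):
    r = np.random.default_rng(seed)
    C = P[r.choice(len(P), k, replace=False)]
    for _ in range(iters):
        lab = ((P[:, None] - C[None]) ** 2).sum(-1).argmin(1)
        C = np.array([P[lab == j].mean(0) if np.any(lab == j) else C[j]
                      for j in range(k)])
    return lab

labels = kmeans(scores, k=3)
print(np.unique(labels))
```

Per-zone spatial statistics of the original input variables, computed by grouping rows of `Xraw` by `labels`, would then reveal whether the zones have distinct climatologies.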
Statistical properties of superimposed stationary spike trains.
Deger, Moritz; Helias, Moritz; Boucsein, Clemens; Rotter, Stefan
2012-06-01
The Poisson process is an often employed model for the activity of neuronal populations. It is known, though, that superpositions of realistic, non-Poisson spike trains are not in general Poisson processes, not even for large numbers of superimposed processes. Here we construct superimposed spike trains from intracellular in vivo recordings from rat neocortex neurons and compare their statistics to specific point process models. The constructed superimposed spike trains reveal strong deviations from the Poisson model. We find that superpositions of model spike trains that take the effective refractoriness of the neurons into account yield a much better description. A minimal model of this kind is the Poisson process with dead-time (PPD). For this process, and for superpositions thereof, we obtain analytical expressions for some second-order statistical quantities, such as the count variability, inter-spike interval (ISI) variability, and ISI correlations, and demonstrate the match with the in vivo data. We conclude that effective refractoriness is the key property that shapes the statistical properties of the superposition spike trains. We present new, efficient algorithms to generate superpositions of PPDs and of gamma processes that can be used to provide more realistic background input in simulations of networks of spiking neurons. Using these generators, we show in simulations that neurons which receive superimposed spike trains as input are highly sensitive to the statistical effects induced by neuronal refractoriness.
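A PPD generator of the sort described can be sketched by drawing each inter-spike interval as a fixed dead-time plus an exponential variate; the rate, dead-time, observation window, and number of superimposed trains below are illustrative, not taken from the recordings.

```python
import numpy as np

rng = np.random.default_rng(4)

def ppd_train(rate, dead, T, rng):
    """Poisson process with dead-time: ISI = dead + Exp(1/lam_eff),
    where lam_eff is chosen so the equilibrium firing rate equals `rate`."""
    lam_eff = rate / (1.0 - rate * dead)   # requires rate * dead < 1
    isis = dead + rng.exponential(1.0 / lam_eff, size=int(2 * rate * T) + 100)
    spikes = np.cumsum(isis)
    return spikes[spikes < T]

# Superimpose 5 PPD trains (10 Hz each, 20 ms dead-time, 200 s window)
# and measure the ISI variability of the pooled train.
pool = np.sort(np.concatenate([ppd_train(10.0, 0.02, 200.0, rng)
                               for _ in range(5)]))
isi = np.diff(pool)
cv = isi.std() / isi.mean()
print(round(cv, 3))
```

Each component train has an ISI coefficient of variation below 1 because of the dead-time; the pooled train's CV moves toward, but does not reach, the Poisson value of 1, which is the deviation from Poisson statistics the abstract emphasizes.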
Modeling road-cycling performance.
Olds, T S; Norton, K I; Lowe, E L; Olive, S; Reay, F; Ly, S
1995-04-01
This paper presents a complete set of equations for a "first principles" mathematical model of road-cycling performance, including corrections for the effect of winds, tire pressure and wheel radius, altitude, relative humidity, rotational kinetic energy, drafting, and changed drag. The relevant physiological, biophysical, and environmental variables were measured in 41 experienced cyclists completing a 26-km road time trial. The correlation between actual and predicted times was 0.89 (P < or = 0.0001), with a mean difference of 0.74 min (1.73% of mean performance time) and a mean absolute difference of 1.65 min (3.87%). Multiple simulations were performed where model inputs were randomly varied using a normal distribution about the measured values with a SD equivalent to the estimated day-to-day variability or technical error of measurement in each of the inputs. This analysis yielded 95% confidence limits for the predicted times. The model suggests that the main physiological factors contributing to road-cycling performance are maximal O2 consumption, fractional utilization of maximal O2 consumption, mechanical efficiency, and projected frontal area. The model is then applied to some practical problems in road cycling: the effect of drafting, the advantage of using smaller front wheels, the effects of added mass, the importance of rotational kinetic energy, the effect of changes in drag due to changes in bicycle configuration, the normalization of performances under different conditions, and the limits of human performance.
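The Monte Carlo confidence-limit procedure can be sketched with a drastically simplified power-balance model (flat road, no wind). The parameter values and day-to-day SDs are assumed for illustration; the full model includes many more corrections than this sketch.

```python
import numpy as np

rho, g = 1.2, 9.81   # air density (kg/m^3), gravity (m/s^2)

def speed(power, m=75.0, CdA=0.35, Crr=0.004, grade=0.0):
    """Solve P = 0.5*rho*CdA*v^3 + (Crr + grade)*m*g*v for v by bisection."""
    lo, hi = 0.1, 30.0
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        demand = 0.5 * rho * CdA * mid**3 + (Crr + grade) * m * g * mid
        lo, hi = (mid, hi) if demand < power else (lo, mid)
    return 0.5 * (lo + hi)

rng = np.random.default_rng(5)
N = 5000
# Randomly vary two inputs about assumed measured values (normal, assumed SDs).
P   = rng.normal(250, 15, N)     # sustainable power, W
CdA = rng.normal(0.35, 0.02, N)  # drag area, m^2

t = np.array([26000.0 / speed(p, CdA=c) for p, c in zip(P, CdA)]) / 60.0
lo, hi = np.percentile(t, [2.5, 97.5])
print(f"predicted 26-km time: {t.mean():.1f} min (95% CI {lo:.1f}-{hi:.1f})")
```

The spread of the simulated times gives the 95% confidence limits on the predicted time, which is the role the multiple simulations play in the paper.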
Ultralow-Power Digital Correlator for Microwave Polarimetry
NASA Technical Reports Server (NTRS)
Piepmeier, Jeffrey R.; Hass, K. Joseph
2004-01-01
A recently developed high-speed digital correlator is especially well suited for processing readings of a passive microwave polarimeter. This circuit computes the autocorrelations of, and the cross-correlations among, data in four digital input streams representing samples of in-phase (I) and quadrature (Q) components of two intermediate-frequency (IF) signals, denoted A and B, that are generated in heterodyne reception of two microwave signals. The IF signals arriving at the correlator input terminals have been digitized to three levels (-1, 0, 1) at a sampling rate up to 500 MHz. Two bits (representing sign and magnitude) are needed to represent the instantaneous datum in each input channel; hence, eight bits are needed to represent the four input signals during any given cycle of the sampling clock. The accumulation (integration) time for the correlation is programmable in increments of 2^8 cycles of the sampling clock, up to a maximum of 2^24 cycles. The basic functionality of the correlator is embodied in 16 correlation slices, each of which contains identical logic circuits and counters (see figure). The first stage of each correlation slice is a logic gate that computes one of the desired correlations (for example, the autocorrelation of the I component of A or the negative of the cross-correlation of the I component of A and the Q component of B). The sampling of the logic gate's output is controlled by the sampling-clock signal, and an 8-bit counter increments in every clock cycle in which the logic gate generates output. The most significant bit of the 8-bit counter is sampled by a 16-bit counter clocked at 1/2^8 of the sampling-clock frequency; the 16-bit counter is thus incremented every time the 8-bit counter rolls over.
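A software analogue of one correlation slice can be sketched as follows; the signal statistics, the correlation between the two channels, and the three-level quantization threshold are assumptions chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 1 << 16   # number of sampling-clock cycles in the integration period

# Two correlated IF-like signals, digitized to three levels (-1, 0, +1).
x = rng.normal(size=n)
y = 0.6 * x + 0.8 * rng.normal(size=n)
thr = 0.6
qx = np.where(x > thr, 1, np.where(x < -thr, -1, 0))
qy = np.where(y > thr, 1, np.where(y < -thr, -1, 0))

# One slice: a logic gate asserts when the product of the two quantized
# samples is +1 (and a companion slice counts the -1 products); counters
# accumulate these events over the integration period.
plus  = int(np.sum(qx * qy == 1))
minus = int(np.sum(qx * qy == -1))
count = plus - minus   # net correlation counts over n clock cycles
print(count)
```

For truly correlated inputs the net count grows linearly with the integration length, which is why the hardware only needs counters (an 8-bit prescaler feeding a 16-bit accumulator) rather than multipliers.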
Ouyang, Huei-Tau
2017-08-01
Accurate inundation level forecasting during typhoon invasion is crucial for organizing response actions such as the evacuation of people from areas that could potentially flood. This paper explores the ability of nonlinear autoregressive neural networks with exogenous inputs (NARX) to predict inundation levels induced by typhoons. Two types of NARX architecture were employed: series-parallel (NARX-S) and parallel (NARX-P). Based on cross-correlation analysis of rainfall and water-level data from historical typhoon records, 10 NARX models (five of each architecture type) were constructed. The forecasting ability of each model was assessed by considering coefficient of efficiency (CE), relative time shift error (RTS), and peak water-level error (PE). The results revealed that high CE performance could be achieved by employing more model input variables. Comparisons of the two types of model demonstrated that the NARX-S models outperformed the NARX-P models in terms of CE and RTS, whereas both performed exceptionally in terms of PE and without significant difference. The NARX-S and NARX-P models with the highest overall performance were identified and their predictions were compared with those of traditional ARX-based models. The NARX-S model outperformed the ARX-based models in all three indexes, whereas the NARX-P model exhibited comparable CE performance and superior RTS and PE performance.
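A linear ARX stand-in for the series-parallel (NARX-S) setup shows how the regressors are built from measured past water levels and exogenous rainfall, and how the coefficient of efficiency (CE) is computed. The synthetic watershed response, lag order, and noise level are assumptions; the study used neural networks, not this linear fit.

```python
import numpy as np

rng = np.random.default_rng(7)
T = 500
rain = np.clip(rng.normal(0, 1, T), 0, None)   # synthetic rainfall series
level = np.zeros(T)
for t in range(1, T):                          # synthetic water-level response
    level[t] = 0.8 * level[t - 1] + 0.5 * rain[t - 1] + 0.02 * rng.normal()

# Series-parallel (NARX-S) one-step-ahead setup: regressors use the
# *measured* past levels, not the model's own past predictions.
lag = 2
rows = [np.r_[level[t - lag:t], rain[t - lag:t]] for t in range(lag, T)]
X = np.c_[np.array(rows), np.ones(T - lag)]
ytgt = level[lag:]

w, *_ = np.linalg.lstsq(X, ytgt, rcond=None)
pred = X @ w

# Coefficient of efficiency (Nash-Sutcliffe form).
ce = 1 - np.sum((ytgt - pred) ** 2) / np.sum((ytgt - ytgt.mean()) ** 2)
print(round(ce, 3))
```

The parallel (NARX-P) variant would instead feed `pred` back into the regressor vector at each step, which is why its multi-step errors can accumulate and its CE and timing (RTS) scores tend to differ from the NARX-S ones.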
Roden, John S; Ehleringer, James R
2007-04-01
The carbon and oxygen isotopic composition of tree-ring cellulose was examined in ponderosa pine (Pinus ponderosa Dougl.) trees in the western USA to study seasonal patterns of precipitation inputs. Two sites (California and Oregon) had minimal summer rainfall inputs, whereas a third site (Arizona) received as much as 70% of its annual precipitation during the summer months (North American monsoon). For the Arizona site, both the δ18O and δ13C values of latewood cellulose increased as the fraction of annual precipitation occurring in the summer (July through September) increased. There were no trends in latewood cellulose δ18O with the absolute amount of summer rain at any site. The δ13C composition of latewood cellulose declined with increasing total water-year precipitation for all sites. Years with below-average total precipitation tended to have a higher proportion of their annual water inputs during the summer months. Relative humidity was negatively correlated with latewood cellulose δ13C at all sites. Trees at the Arizona site produced latewood cellulose that was significantly more enriched in 18O than trees at the Oregon or California site, implying a greater reliance on an 18O-enriched water source. Thus, tree-ring records of cellulose δ18O and δ13C may provide useful proxy information about seasonal precipitation inputs and the variability and intensity of the North American monsoon.
Ladstätter, Felix; Garrosa, Eva; Moreno-Jiménez, Bernardo; Ponsoda, Vicente; Reales Aviles, José Manuel; Dai, Junming
2016-01-01
Artificial neural networks are sophisticated modelling and prediction tools capable of extracting complex, non-linear relationships between predictor (input) and predicted (output) variables. This study explores this capacity by modelling non-linearities in the hardiness-modulated burnout process with a neural network. Specifically, two multi-layer feed-forward artificial neural networks are concatenated in an attempt to model the composite non-linear burnout process. Sensitivity analysis, a Monte Carlo-based global simulation technique, is then utilised to examine the first-order effects of the predictor variables on the burnout sub-dimensions and consequences. Results show that (1) this concatenated artificial neural network approach is feasible to model the burnout process, (2) sensitivity analysis is a productive method for studying the relative importance of predictor variables and (3) the relationships among variables involved in the development of burnout and its consequences are to different degrees non-linear. Many relationships among variables (e.g., stressors and strains) are not linear, yet researchers use linear methods such as Pearson correlation or linear regression to analyse these relationships. Artificial neural network analysis is an innovative method for analysing non-linear relationships and, in combination with sensitivity analysis, is superior to linear methods.
NASA Astrophysics Data System (ADS)
Pfeil, Thomas; Jordan, Jakob; Tetzlaff, Tom; Grübl, Andreas; Schemmel, Johannes; Diesmann, Markus; Meier, Karlheinz
2016-04-01
High-level brain function, such as memory, classification, or reasoning, can be realized by means of recurrent networks of simplified model neurons. Analog neuromorphic hardware constitutes a fast and energy-efficient substrate for the implementation of such neural computing architectures in technical applications and neuroscientific research. The functional performance of neural networks is often critically dependent on the level of correlations in the neural activity. In finite networks, correlations are typically inevitable due to shared presynaptic input. Recent theoretical studies have shown that inhibitory feedback, abundant in biological neural networks, can actively suppress these shared-input correlations and thereby enable neurons to fire nearly independently. For networks of spiking neurons, the decorrelating effect of inhibitory feedback has so far been explicitly demonstrated only for homogeneous networks of neurons with linear subthreshold dynamics. Theory, however, suggests that the effect is a general phenomenon, present in any system with sufficient inhibitory feedback, irrespective of the details of the network structure or the neuronal and synaptic properties. Here, we investigate the effect of network heterogeneity on correlations in sparse, random networks of inhibitory neurons with nonlinear, conductance-based synapses. Emulations of these networks on the analog neuromorphic-hardware system Spikey allow us to test the efficiency of decorrelation by inhibitory feedback in the presence of hardware-specific heterogeneities. The configurability of the hardware substrate enables us to modulate the extent of heterogeneity in a systematic manner. We selectively study the effects of shared input and recurrent connections on correlations in membrane potentials and spike trains. 
Our results confirm that shared-input correlations are actively suppressed by inhibitory feedback also in highly heterogeneous networks exhibiting broad, heavy-tailed firing-rate distributions. In line with former studies, cell heterogeneities reduce shared-input correlations. Overall, however, correlations in the recurrent system can increase with the level of heterogeneity as a consequence of diminished effective negative feedback.
Blade loss transient dynamics analysis. Volume 3: User's manual for TETRA program
NASA Technical Reports Server (NTRS)
Black, G. R.; Gallardo, V. C.; Storace, A. S.; Sagendorph, F.
1981-01-01
The users manual for TETRA contains program logic, flow charts, error messages, input sheets, modeling instructions, option descriptions, input variable descriptions, and demonstration problems. The process of obtaining a NASTRAN 17.5 generated modal input file for TETRA is also described with a worked sample.
The appropriate spatial scale for a distributed energy balance model was investigated by: (a) determining the scale of variability associated with the remotely sensed and GIS-generated model input data; and (b) examining the effects of input data spatial aggregation on model resp...
Raut, Sangeeta; Raut, Smita; Sharma, Manisha; Srivastav, Chaitanya; Adhikari, Basudam; Sen, Sudip Kumar
2015-09-01
In the present study, artificial neural network (ANN) modelling coupled with a particle swarm optimization (PSO) algorithm was used to optimize the process variables for enhanced low density polyethylene (LDPE) degradation by Curvularia lunata SG1. In the non-linear ANN model, temperature, pH, contact time and agitation were used as input variables and polyethylene bio-degradation as the output variable. Further, on application of PSO to the ANN model, the optimum values of the process parameters were as follows: pH = 7.6, temperature = 37.97 °C, agitation rate = 190.48 rpm and incubation time = 261.95 days. A comparison between the model results and experimental data gave a high correlation coefficient ([Formula: see text]). Significant enhancement of LDPE bio-degradation using C. lunata SG1 by about 48% was achieved under optimum conditions. Thus, the novelty of the work lies in the application of the combined ANN-PSO approach as an optimization strategy to enhance the bio-degradation of LDPE.
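A generic PSO loop of the kind used here can be sketched against a smooth stand-in objective peaked near the reported optimum. The surrogate function, swarm size, bounds, and coefficients are all assumptions; the study optimized a trained ANN, not this formula, and only two of the four process variables are shown.

```python
import numpy as np

rng = np.random.default_rng(8)

# Stand-in objective (to maximize): a smooth surrogate for predicted
# degradation, peaked near pH 7.6 and 38 degC, in place of a trained ANN.
def degradation(p):
    ph, temp = p[..., 0], p[..., 1]
    return np.exp(-((ph - 7.6) / 1.5) ** 2 - ((temp - 38.0) / 6.0) ** 2)

n, iters = 30, 200
lo, hi = np.array([4.0, 20.0]), np.array([10.0, 50.0])  # assumed bounds
pos = rng.uniform(lo, hi, size=(n, 2))
vel = np.zeros_like(pos)
pbest, pbest_f = pos.copy(), degradation(pos)
gbest = pbest[pbest_f.argmax()].copy()

for _ in range(iters):
    r1, r2 = rng.random((2, n, 1))
    # Inertia plus cognitive and social pulls (standard PSO update).
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, lo, hi)
    f = degradation(pos)
    better = f > pbest_f
    pbest[better], pbest_f[better] = pos[better], f[better]
    gbest = pbest[pbest_f.argmax()].copy()

print(np.round(gbest, 1))
```

With the real ANN as the objective, the same loop would search all four process variables (pH, temperature, agitation, incubation time) and return the optimum reported in the abstract.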
Multiplexer and time duration measuring circuit
Gray, Jr., James
1980-01-01
A multiplexer device is provided for multiplexing data in the form of randomly developed, variable width pulses from a plurality of pulse sources to a master storage. The device includes a first multiplexer unit which includes a plurality of input circuits each coupled to one of the pulse sources, with all input circuits being disabled when one input circuit receives an input pulse so that only one input pulse is multiplexed by the multiplexer unit at any one time.
Estimating Unsaturated Zone N Fluxes and Travel Times to Groundwater at Watershed Scales
NASA Astrophysics Data System (ADS)
Liao, L.; Green, C. T.; Harter, T.; Nolan, B. T.; Juckem, P. F.; Shope, C. L.
2016-12-01
Nitrate concentrations in groundwater vary across spatial and temporal scales. Local variability depends on soil properties, unsaturated zone properties, hydrology, reactivity, and other factors. For example, the travel time in the unsaturated zone can cause contaminant responses in aquifers to lag behind changes in N inputs at the land surface, and variable leaching fractions of applied N fertilizer to groundwater can elevate (or reduce) concentrations in groundwater. In this study, we apply the vertical flux model (VFM) (Liao et al., 2012) to address the importance of the travel time of N in the unsaturated zone and its fraction leached from the unsaturated zone to groundwater. The Fox-Wolf-Peshtigo basins, including 34 out of 72 counties in Wisconsin, were selected as the study area. Simulated concentrations of NO3-, N2 from denitrification, O2, and environmental tracers of groundwater age were matched to observations by adjusting parameters for recharge rate, unsaturated zone travel time, fractions of N inputs leached to groundwater, O2 reduction rate, O2 threshold for denitrification, denitrification rate, and dispersivity. Correlations between calibrated parameters and GIS parameters (land use, drainage class, soil properties, etc.) were evaluated. Model results revealed a median recharge rate of 0.11 m/yr, which is comparable with three independent estimates of recharge rates in the study area. The unsaturated zone travel times ranged from 0.2 yr to 25 yr with a median of 6.8 yr. The correlation analysis revealed that relationships between VFM parameters and landscape characteristics (GIS parameters) were consistent with expected relationships. The fraction of N leached was lower in the vicinity of wetlands and greater in the vicinity of croplands. Faster unsaturated zone transport in forested areas was consistent with studies showing rapid vertical transport in forested soils.
Reaction rate coefficients correlated with chemical indicators such as Fe and P concentrations. Overall, the results demonstrate applicability of the VFM at a regional scale, as well as potential to generate N transport estimates continuously across regions based on statistical relationships between VFM model parameters and GIS parameters.
NASA Astrophysics Data System (ADS)
Pervez, M. S.; McNally, A.; Arsenault, K. R.
2017-12-01
Convergence of evidence from different agro-hydrologic sources is particularly important for drought monitoring in data-sparse regions. In Africa, a combination of remote sensing and land surface modeling experiments is used to evaluate past, present and future drought conditions. The Famine Early Warning Systems Network (FEWS NET) Land Data Assimilation System (FLDAS) routinely simulates daily soil moisture, evapotranspiration (ET) and other variables over Africa using multiple models and inputs. We found that Noah 3.3, Variable Infiltration Capacity (VIC) 4.1.2, and Catchment Land Surface Model based FLDAS simulations of monthly soil moisture percentile maps captured concurrent drought and water surplus episodes effectively over East Africa. However, the results are sensitive to the selection of land surface model and hydrometeorological forcings. We seek to identify sources of uncertainty (input, model, parameter) to eventually improve the accuracy of FLDAS outputs. In the absence of in situ data, previous work used European Space Agency Climate Change Initiative Soil Moisture (CCI-SM) data derived from merged active-passive microwave remote sensing to evaluate FLDAS soil moisture, and found that during the high-rainfall months of April-May and November-December, Noah-based soil moisture correlates well with CCI-SM over the Greater Horn of Africa region. We have found good correlations (r>0.6) between FLDAS Noah 3.3 ET anomalies and Operational Simplified Surface Energy Balance (SSEBop) ET over East Africa. Recently, SSEBop ET estimates (version 4) were improved by implementing a land surface temperature correction factor. We re-evaluate the correlations between FLDAS ET and version 4 SSEBop ET. To further investigate the reasons for differences between models, we evaluate FLDAS soil moisture against Advanced Scatterometer and SMAP soil moisture, and FLDAS outputs against MODIS and AVHRR normalized difference vegetation index.
By exploring longer historical time series and near-real-time products, we aim to strengthen the convergence of evidence for understanding historical drought, improving monitoring and forecasting, and characterizing the uncertainties of water availability estimates over Africa.
The impact of 14-nm photomask uncertainties on computational lithography solutions
NASA Astrophysics Data System (ADS)
Sturtevant, John; Tejnil, Edita; Lin, Tim; Schultze, Steffen; Buck, Peter; Kalk, Franklin; Nakagawa, Kent; Ning, Guoxiang; Ackmann, Paul; Gans, Fritz; Buergel, Christian
2013-04-01
Computational lithography solutions rely upon accurate process models to faithfully represent the imaging system output for a defined set of process and design inputs. These models, which must balance accuracy demands with simulation runtime boundary conditions, rely upon the accurate representation of multiple parameters associated with the scanner and the photomask. While certain system input variables, such as scanner numerical aperture, can be empirically tuned to wafer CD data over a small range around the presumed set point, it can be dangerous to do so since CD errors can alias across multiple input variables. Therefore, many input variables for simulation are based upon designed or recipe-requested values or independent measurements. It is known, however, that certain measurement methodologies, while precise, can have significant inaccuracies. Additionally, there are known errors associated with the representation of certain system parameters. With shrinking total CD control budgets, appropriate accounting for all sources of error becomes more important, and the cumulative consequence of input errors to the computational lithography model can become significant. In this work, we examine, with a simulation sensitivity study, the impact of errors in the representation of photomask properties including CD bias, corner rounding, refractive index, thickness, and sidewall angle. The factors that are most critical to represent accurately in the model are cataloged. CD bias values are based on state-of-the-art mask manufacturing data, while changes in the other variables are estimated, highlighting the need for improved metrology and awareness.
Prediction of problematic wine fermentations using artificial neural networks.
Román, R César; Hernández, O Gonzalo; Urtubia, U Alejandra
2011-11-01
Artificial neural networks (ANNs) have been used for the recognition of non-linear patterns, a characteristic of bioprocesses like wine production. In this work, ANNs were tested to predict problems of wine fermentation. A database of about 20,000 data points from industrial fermentations of Cabernet Sauvignon and 33 variables was used. Two different ways of inputting data into the model were studied: by points and by fermentation. Additionally, different sub-cases were studied by varying the predictor variables (total sugar, alcohol, glycerol, density, organic acids and nitrogen compounds) and the time of fermentation (72, 96 and 256 h). The input of data by fermentations gave better results than the input of data by points. In fact, it was possible to predict 100% of normal and problematic fermentations using three predictor variables: sugars, density and alcohol at 72 h (3 days). Overall, ANNs achieved 80% prediction accuracy using only one predictor variable at 72 h; however, it is recommended that more fermentations be added to confirm this promising result.
Metamodeling and mapping of nitrate flux in the unsaturated zone and groundwater, Wisconsin, USA
NASA Astrophysics Data System (ADS)
Nolan, Bernard T.; Green, Christopher T.; Juckem, Paul F.; Liao, Lixia; Reddy, James E.
2018-04-01
Nitrate contamination of groundwater in agricultural areas poses a major challenge to the sustainability of water resources. Aquifer vulnerability models are useful tools that can help resource managers identify areas of concern, but quantifying nitrogen (N) inputs in such models is challenging, especially at large spatial scales. We sought to improve regional nitrate (NO3-) input functions by characterizing unsaturated zone NO3- transport to groundwater through use of surrogate, machine-learning metamodels of a process-based N flux model. The metamodels used boosted regression trees (BRTs) to relate mappable landscape variables to parameters and outputs of a previous "vertical flux method" (VFM) applied at sampled wells in the Fox, Wolf, and Peshtigo (FWP) river basins in northeastern Wisconsin. In this context, the metamodels upscaled the VFM results throughout the region, and the VFM parameters and outputs are the metamodel response variables. The study area encompassed the domain of a detailed numerical model that provided additional predictor variables, including groundwater recharge, to the metamodels. We used a statistical learning framework to test a range of model complexities to identify suitable hyperparameters of the six BRT metamodels corresponding to each response variable of interest: NO3- source concentration factor (which determines the local NO3- input concentration); unsaturated zone travel time; NO3- concentration at the water table in 1980, 2000, and 2020 (three separate metamodels); and NO3- "extinction depth", the eventual steady-state depth of the NO3- front. The final metamodels were trained on 129 wells within the active numerical flow model area and considered 58 mappable predictor variables compiled in a geographic information system (GIS). These metamodels had training and cross-validation testing R2 values of 0.52-0.86 and 0.22-0.38, respectively, and predictions were compiled as maps of the above response variables.
Testing performance was reasonable, considering that we limited the metamodel predictor variables to mappable factors as opposed to using all available VFM input variables. Relationships between metamodel predictor variables and mapped outputs were generally consistent with expectations, e.g. with greater source concentrations and NO3- at the groundwater table in areas of intensive crop use and well drained soils. Shorter unsaturated zone travel times in poorly drained areas likely indicated preferential flow through clay soils, and a tendency for fine grained deposits to collocate with areas of shallower water table. Numerical estimates of groundwater recharge were important in the metamodels and may have been a proxy for N input and redox conditions in the northern FWP, which had shallow predicted NO3- extinction depth. The metamodel results provide proof-of-concept for regional characterization of unsaturated zone NO3- transport processes in a statistical framework based on readily mappable GIS input variables.
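A boosted regression tree metamodel of this kind can be sketched with scikit-learn's gradient boosting. The predictors and response below are synthetic stand-ins for the mappable GIS variables and VFM outputs, and the hyperparameters are illustrative, not those of the study.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)
n = 300
# Invented stand-ins for mappable predictors (e.g. crop fraction, drainage
# class, recharge estimate); the response mimics a VFM output such as the
# NO3- source concentration factor.
X = rng.random((n, 5))
y = 3.0 * X[:, 0] + 2.0 * X[:, 1] * X[:, 2] + 0.1 * rng.standard_normal(n)

brt = GradientBoostingRegressor(
    n_estimators=300, learning_rate=0.05, max_depth=3, subsample=0.8,
    random_state=0)

# Cross-validation testing R2, analogous to the study's reported CV scores.
cv_r2 = cross_val_score(brt, X, y, cv=5, scoring="r2")
brt.fit(X, y)
print("train R2:", brt.score(X, y))
print("CV R2:", cv_r2.mean())
print("feature importances:", brt.feature_importances_)
```

The feature importances play the role of identifying which mappable variables drive each response, which is how such metamodels support regional mapping.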
Two-Stage Variable Sample-Rate Conversion System
NASA Technical Reports Server (NTRS)
Tkacenko, Andre
2009-01-01
A two-stage variable sample-rate conversion (SRC) system has been proposed as part of a digital signal-processing system in a digital communication radio receiver that utilizes a variety of data rates. The proposed system would be used as an interface between (1) an analog-to-digital converter used in the front end of the receiver to sample an intermediate-frequency signal at a fixed input rate and (2) digitally implemented tracking loops in subsequent stages that operate at various sample rates that are generally lower than the input sample rate. This two-stage system would be capable of converting from an input sample rate to a desired lower output sample rate that could be variable and not necessarily a rational fraction of the input rate.
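The two-stage idea (an efficient rational-factor first stage followed by an arbitrary-ratio second stage) can be sketched with SciPy. The rates, the test signal, and the simple linear-interpolation second stage are assumptions for the example, not the proposed receiver design, which would more likely use a polynomial (Farrow-style) interpolator.

```python
import numpy as np
from scipy.signal import resample_poly

fs_in = 100_000.0   # fixed input sample rate (arbitrary for the sketch)
fs_out = 23_700.0   # desired output rate, not a nice fraction of fs_in

t = np.arange(4096) / fs_in
x = np.sin(2 * np.pi * 1000.0 * t)       # 1 kHz test tone

# Stage 1: efficient polyphase conversion by a rational factor that gets
# close to (but not below) the target rate; here decimate by 4 to 25 kHz.
stage1 = resample_poly(x, up=1, down=4)
fs_mid = fs_in / 4

# Stage 2: arbitrary-ratio conversion by interpolating onto the exact
# output time grid.
t_mid = np.arange(len(stage1)) / fs_mid
t_out = np.arange(0.0, t_mid[-1], 1.0 / fs_out)
stage2 = np.interp(t_out, t_mid, stage1)
print(len(x), len(stage1), len(stage2))
```

Doing the bulk of the rate change in the cheap polyphase stage keeps the expensive arbitrary-ratio stage operating near a ratio of one, which is the point of the two-stage split.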
Simulation of speckle patterns with pre-defined correlation distributions.
Song, Lipei; Zhou, Zhen; Wang, Xueyan; Zhao, Xing; Elson, Daniel S
2016-03-01
We put forward a method to easily generate a single or a sequence of fully developed speckle patterns with pre-defined correlation distribution by utilizing the principle of coherent imaging. The few-to-one mapping between the input correlation matrix and the correlation distribution between simulated speckle patterns is realized and there is a simple square relationship between the values of these two correlation coefficient sets. This method is demonstrated both theoretically and experimentally. The square relationship enables easy conversion from any desired correlation distribution. Since the input correlation distribution can be defined by a digital matrix or a gray-scale image acquired experimentally, this method provides a convenient way to simulate real speckle-related experiments and to evaluate data processing techniques.
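A minimal version of correlated, fully developed speckle via coherent imaging can be sketched as follows. The circular aperture and the particular mixing of shared and independent source fields are assumptions for the example, but they illustrate a square relationship of the same flavor as the one the abstract describes: a field correlation of rho yields an intensity correlation near rho squared.

```python
import numpy as np

def speckle_pair(n=256, rho=0.8, seed=1):
    """Two fully developed speckle patterns with controlled correlation."""
    rng = np.random.default_rng(seed)
    # Shared and independent circular-Gaussian source fields.
    z0 = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    z1 = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    src_a = z0
    src_b = rho * z0 + np.sqrt(1 - rho**2) * z1   # field correlation = rho
    # Coherent imaging: limit each source by a circular aperture; the far
    # field (FFT) then gives a fully developed speckle pattern.
    grid = np.mgrid[:n, :n] - n / 2
    aperture = (grid[0] ** 2 + grid[1] ** 2) < (n / 8) ** 2
    Ia = np.abs(np.fft.fft2(src_a * aperture)) ** 2
    Ib = np.abs(np.fft.fft2(src_b * aperture)) ** 2
    return Ia, Ib

Ia, Ib = speckle_pair(rho=0.8)
c = np.corrcoef(Ia.ravel(), Ib.ravel())[0, 1]
print(c)  # intensity correlation, close to 0.8**2 = 0.64
```

For circular-Gaussian fields the intensity correlation equals the squared magnitude of the field correlation, which is the easy conversion the authors exploit.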
Optoelectronic associative memory
NASA Technical Reports Server (NTRS)
Chao, Tien-Hsin (Inventor)
1993-01-01
An associative optical memory including an input spatial light modulator (SLM) in the form of an edge enhanced liquid crystal light valve (LCLV) and a pair of memory SLM's in the form of liquid crystal televisions (LCTV's) forms a matrix array of an input image which is cross correlated with a matrix array of stored images. The correlation product is detected and nonlinearly amplified to illuminate a replica of the stored image array to select the stored image correlating with the input image. The LCLV is edge enhanced by reducing the bias frequency and voltage and rotating its orientation. The edge enhancement and nonlinearity of the photodetection improves the orthogonality of the stored image. The illumination of the replicate stored image provides a clean stored image, uncontaminated by the image comparison process.
Background-free balanced optical cross correlator
Nejadmalayeri, Amir Hossein; Kaertner, Franz X
2014-12-23
A balanced optical cross correlator includes an optical waveguide, a first photodiode including a first n-type semiconductor and a first p-type semiconductor positioned about the optical waveguide on a first side of the optical waveguide's point of symmetry, and a second photodiode including a second n-type semiconductor and a second p-type semiconductor positioned about the optical waveguide on a second side of the optical waveguide's point of symmetry. A balanced receiver including first and second inputs is configured to produce an output current or voltage that reflects a difference in currents or voltages, originating from the first and the second photodiodes of the balanced cross correlator and fed to the first input and to the second input of the balanced receiver.
A resampling procedure for generating conditioned daily weather sequences
Clark, Martyn P.; Gangopadhyay, Subhrendu; Brandon, David; Werner, Kevin; Hay, Lauren E.; Rajagopalan, Balaji; Yates, David
2004-01-01
A method is introduced to generate conditioned daily precipitation and temperature time series at multiple stations. The method resamples data from the historical record “nens” times for the period of interest (nens = number of ensemble members) and reorders the ensemble members to reconstruct the observed spatial (intersite) and temporal correlation statistics. The weather generator model is applied to 2307 stations in the contiguous United States and is shown to reproduce the observed spatial correlation between neighboring stations, the observed correlation between variables (e.g., between precipitation and temperature), and the observed temporal correlation between subsequent days in the generated weather sequence. The weather generator model is extended to produce sequences of weather that are conditioned on climate indices (in this case the Niño 3.4 index). Example illustrations of conditioned weather sequences are provided for a station in Arizona (Petrified Forest, 34.8°N, 109.9°W), where El Niño and La Niña conditions have a strong effect on winter precipitation. The conditioned weather sequences generated using the methods described in this paper are appropriate for use as input to hydrologic models to produce multiseason forecasts of streamflow.
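The reordering step can be sketched as a rank-matching shuffle: resample each variable independently, then reorder every ensemble member so it inherits the rank structure of the observed series. The synthetic precipitation-temperature record below is invented for the example.

```python
import numpy as np

rng = np.random.default_rng(7)
ndays, nens = 60, 100

# Synthetic "observed" record with correlated precipitation and temperature.
obs_p = rng.gamma(2.0, 2.0, ndays)
obs_t = 10.0 + 0.8 * obs_p + rng.standard_normal(ndays)

# Step 1: resample each variable independently from the historical record;
# the intervariable correlation is destroyed by the independent draws.
ens_p = rng.choice(obs_p, size=(nens, ndays))
ens_t = rng.choice(obs_t, size=(nens, ndays))

def shuffle_to_template(sample, template):
    # Place the sorted sample values at the rank positions of the template,
    # so the sample inherits the template's ordering structure.
    ranks = np.argsort(np.argsort(template))
    return np.sort(sample)[ranks]

# Step 2: reorder every ensemble member against the observed series, which
# reconstructs the observed rank correlation between variables (the same
# trick also restores intersite and day-to-day correlation).
for i in range(nens):
    ens_p[i] = shuffle_to_template(ens_p[i], obs_p)
    ens_t[i] = shuffle_to_template(ens_t[i], obs_t)

corr = np.mean([np.corrcoef(ens_p[i], ens_t[i])[0, 1] for i in range(nens)])
print(corr)  # close to the observed P-T correlation
```

The marginal distributions come from the resampled values; only the ordering is borrowed from the observations.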
Kocovsky, P.M.; Ross, R.M.; Dropkin, D.S.; Campbell, J.M.
2008-01-01
Dams within the Susquehanna River drainage, Pennsylvania, are potential barriers to migration of diadromous fishes, and many are under consideration for removal to facilitate fish passage. To provide useful input for prioritizing dam removal, we examined relations between landscape-scale factors and habitat suitability indices (HSIs) for native diadromous species of the Susquehanna River. We used two different methods (U.S. Fish and Wildlife Service method: Stier and Crance [1985], Ross et al. [1993a, 1993b, 1997], and Pardue [1983]; Pennsylvania State University method: Carline et al. [1994]) to calculate HSIs for several life stages of American shad Alosa sapidissima, alewives Alosa pseudoharengus, and blueback herring Alosa aestivalis and a single HSI for American eels Anguilla rostrata based on habitat variables measured at transects spaced every 5 km on six major Susquehanna River tributaries. Using geographical information systems, we calculated land use and geologic variables upstream from each transect and associated those data with HSIs calculated at each transect. We then performed canonical correlation analysis to determine how HSIs were linked to geologic and land use factors. Canonical correlation analysis identified the proportion of watershed underlain by carbonate rock as a positive correlate of HSIs for all species and life stages except American eels and juvenile blueback herring. We hypothesize that potential mechanisms linking carbonate rock to habitat suitability include increased productivity and buffering capacity. No other consistent patterns of positive or negative correlation between landscape-scale factors and HSIs were evident. This analysis will be useful for prioritizing removal of dams in the Susquehanna River drainage, because it provides a broad perspective on relationships between habitat suitability for diadromous fishes and easily measured landscape factors. 
This approach can be applied elsewhere to elucidate relationships between fine- and coarse-scale variables and suitability of habitat for fishes. © Copyright by the American Fisheries Society 2008.
Innovations in Basic Flight Training for the Indonesian Air Force
1990-12-01
microeconomic theory that could approximate the optimum mix of training hours between an aircraft and simulator, and therefore improve cost effectiveness... The microeconomic theory being used is normally employed when showing production with two variable inputs. An example of variable inputs would be labor... NAS Corpus Christi, Texas, Aerodynamics of the T-34C, 1989. 26. Naval Air Training Command, NAS Corpus Christi, Texas, Meteorological Theory Workbook
A Bayesian approach to model structural error and input variability in groundwater modeling
NASA Astrophysics Data System (ADS)
Xu, T.; Valocchi, A. J.; Lin, Y. F. F.; Liang, F.
2015-12-01
Effective water resource management typically relies on numerical models to analyze groundwater flow and solute transport processes. Model structural error (due to simplification and/or misrepresentation of the "true" environmental system) and input forcing variability (which commonly arises since some inputs are uncontrolled or estimated with high uncertainty) are ubiquitous in groundwater models. Calibration that overlooks errors in model structure and input data can lead to biased parameter estimates and compromised predictions. We present a fully Bayesian approach for a complete assessment of uncertainty in spatially distributed groundwater models. The approach explicitly recognizes stochastic input and uses data-driven error models based on nonparametric kernel methods to account for model structural error. We employ exploratory data analysis to assist in specifying informative priors for the error models to improve identifiability. The inference is facilitated by an efficient sampling algorithm based on DREAM-ZS and a parameter subspace multiple-try strategy to reduce the required number of forward simulations of the groundwater model. We demonstrate the Bayesian approach through a synthetic case study of surface water-groundwater interaction under changing pumping conditions. It is found that explicit treatment of errors in model structure and input data (groundwater pumping rate) has a substantial impact on the posterior distribution of groundwater model parameters. Using error models reduces predictive bias caused by parameter compensation. In addition, input variability increases parametric and predictive uncertainty. The Bayesian approach allows for a comparison among the contributions from various error sources, which could inform future model improvement and data collection efforts on how best to direct resources towards reducing predictive uncertainty.
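Not the paper's DREAM-ZS sampler: as a conceptual stand-in, the sketch below calibrates a toy linear "groundwater" model with a plain random-walk Metropolis sampler, folding an assumed pumping-rate (input) error into the likelihood variance. All numbers are synthetic.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic truth: response h = a * q + noise, where the recorded pumping
# rate q_obs is a noisy version of the true forcing q_true (input error).
n = 50
a_true = 2.0
q_true = rng.uniform(1.0, 5.0, n)
q_obs = q_true + 0.2 * rng.standard_normal(n)    # uncertain input record
h_obs = a_true * q_true + 0.1 * rng.standard_normal(n)

def log_post(a, sigma):
    if sigma <= 0:
        return -np.inf
    # Approximate marginal likelihood: the (assumed known) input error of
    # sd 0.2 enters the residual variance scaled by the sensitivity a.
    var = sigma**2 + (a * 0.2) ** 2
    r = h_obs - a * q_obs
    return -0.5 * np.sum(r**2 / var + np.log(2 * np.pi * var))

# Random-walk Metropolis over (a, sigma) with flat priors.
theta = np.array([1.0, 1.0])
lp = log_post(*theta)
samples = []
for _ in range(20_000):
    prop = theta + rng.standard_normal(2) * np.array([0.05, 0.05])
    lp_prop = log_post(*prop)
    if np.log(rng.random()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    samples.append(theta)
samples = np.array(samples[5_000:])
print("posterior mean a:", samples[:, 0].mean())
```

Dropping the `(a * 0.2)**2` term reproduces the overconfident calibration the abstract warns about: the posterior then absorbs input error into the parameters.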
NASA Astrophysics Data System (ADS)
Zounemat-Kermani, Mohammad
2012-08-01
In this study, the ability of two models, multiple linear regression (MLR) and a Levenberg-Marquardt (LM) feed-forward neural network, to estimate the hourly dew point temperature was examined. Dew point temperature is the temperature at which water vapor in the air condenses into liquid. This temperature is useful in estimating meteorological variables such as fog, rain, snow, dew, and evapotranspiration and in investigating agronomical issues such as stomatal closure in plants. The availability of hourly records of climatic data (air temperature, relative humidity and pressure) that could be used to predict dew point temperature motivated the modelling exercise. Additionally, the wind vector (wind speed magnitude and direction) and a conceptual input of weather condition were employed as other input variables. Three quantitative standard statistical performance evaluation measures, i.e. the root mean squared error, mean absolute error, and absolute logarithmic Nash-Sutcliffe efficiency coefficient (|log(NS)|), were employed to evaluate the performance of the developed models. The results showed that applying the wind vector and weather condition as input vectors along with meteorological variables could slightly increase the ANN and MLR predictive accuracy. The results also revealed that the LM-NN was superior to the MLR model and that the best performance was obtained by considering all potential input variables, in terms of the different evaluation criteria.
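The three performance measures can be written down directly. Whether the study uses log10 or the natural logarithm for the Nash-Sutcliffe term is not stated, so log10 is assumed here, and the observation/simulation vectors are invented.

```python
import numpy as np

def rmse(obs, sim):
    return float(np.sqrt(np.mean((obs - sim) ** 2)))

def mae(obs, sim):
    return float(np.mean(np.abs(obs - sim)))

def abs_log_ns(obs, sim):
    # Nash-Sutcliffe efficiency; reporting |log10(NS)| means a perfect
    # model (NS = 1) scores 0, and worse models score higher.
    ns = 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)
    return abs(np.log10(ns))

obs = np.array([10.0, 12.0, 11.5, 13.0, 12.5])   # invented dew points
sim = np.array([10.2, 11.8, 11.9, 12.7, 12.6])   # invented model output
print(rmse(obs, sim), mae(obs, sim), abs_log_ns(obs, sim))
# ≈ 0.261 0.240 0.029
```

Note that |log10(NS)| is only defined for NS > 0, i.e. for models that beat the observed mean as a predictor.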
NASA Technical Reports Server (NTRS)
Meyn, Larry A.
2018-01-01
One of the goals of NASA's Revolutionary Vertical Lift Technology Project (RVLT) is to provide validated tools for multidisciplinary design, analysis and optimization (MDAO) of vertical lift vehicles. As part of this effort, the software package RotorCraft Optimization Tools (RCOTOOLS) is being developed to facilitate incorporating key rotorcraft conceptual design codes into optimizations using the OpenMDAO multi-disciplinary optimization framework written in Python. RCOTOOLS, also written in Python, currently supports the incorporation of the NASA Design and Analysis of RotorCraft (NDARC) vehicle sizing tool and the Comprehensive Analytical Model of Rotorcraft Aerodynamics and Dynamics II (CAMRAD II) analysis tool into OpenMDAO-driven optimizations. Both of these tools use detailed, file-based inputs and outputs, so RCOTOOLS provides software wrappers to update input files with new design variable values, execute these codes and then extract specific response variable values from the file outputs. These wrappers are designed to be flexible and easy to use. RCOTOOLS also provides several utilities to aid in optimization model development, including Graphical User Interface (GUI) tools for browsing input and output files in order to identify text strings that are used to identify specific variables as optimization input and response variables. This paper provides an overview of RCOTOOLS and its use.
Hasani Sangani, Mohammad; Jabbarian Amiri, Bahman; Alizadeh Shabani, Afshin; Sakieh, Yousef; Ashrafi, Sohrab
2015-04-01
Increasing land utilization through diverse forms of human activities, such as agriculture, forestry, urban growth, and industrial development, has led to negative impacts on the water quality of rivers. To find out how catchment attributes, such as land use, hydrologic soil groups, and lithology, can affect water quality variables (Ca(2+), Mg(2+), Na(+), Cl(-), HCO3(-), pH, TDS, EC, SAR), a spatio-statistical approach was applied to 23 catchments in southern basins of the Caspian Sea. All input data layers (digital maps of land use, soil, and lithology) were prepared using a geographic information system (GIS) and spatial analysis. Relationships between water quality variables and catchment attributes were then examined by Spearman rank correlation tests and multiple linear regression. Stepwise multiple linear regressions were developed to examine the relationship between catchment attributes and water quality variables. The areas (%) of marl, tuff, or diorite, as well as those of good-quality rangeland and bare land, had negative effects on all water quality variables, while basalt and forest land cover were found to contribute to improved river water quality. Moreover, lithological variables showed the greatest potential for predicting the mean concentration values of water quality variables, and EC and TDS were inversely associated with the area (%) of urban land use.
Vestibular blueprint in early vertebrates.
Straka, Hans; Baker, Robert
2013-11-19
Central vestibular neurons form identifiable subgroups within the boundaries of classically outlined octavolateral nuclei in primitive vertebrates that are distinct from those processing lateral line, electrosensory, and auditory signals. Each vestibular subgroup exhibits a particular morpho-physiological property that receives origin-specific sensory inputs from semicircular canal and otolith organs. Behaviorally characterized phenotypes send discrete axonal projections to extraocular, spinal, and cerebellar targets including other ipsi- and contralateral vestibular nuclei. The anatomical locations of vestibuloocular and vestibulospinal neurons correlate with genetically defined hindbrain compartments that are well conserved throughout vertebrate evolution though some variability exists in fossil and extant vertebrate species. The different vestibular subgroups exhibit a robust sensorimotor signal processing complemented with a high degree of vestibular and visual adaptive plasticity.
Soft Mixer Assignment in a Hierarchical Generative Model of Natural Scene Statistics
Schwartz, Odelia; Sejnowski, Terrence J.; Dayan, Peter
2010-01-01
Gaussian scale mixture models offer a top-down description of signal generation that captures key bottom-up statistical characteristics of filter responses to images. However, the pattern of dependence among the filters for this class of models is prespecified. We propose a novel extension to the gaussian scale mixture model that learns the pattern of dependence from observed inputs and thereby induces a hierarchical representation of these inputs. Specifically, we propose that inputs are generated by gaussian variables (modeling local filter structure), multiplied by a mixer variable that is assigned probabilistically to each input from a set of possible mixers. We demonstrate inference of both components of the generative model, for synthesized data and for different classes of natural images, such as a generic ensemble and faces. For natural images, the mixer variable assignments show invariances resembling those of complex cells in visual cortex; the statistics of the gaussian components of the model are in accord with the outputs of divisive normalization models. We also show how our model helps interrelate a wide range of models of image statistics and cortical processing.
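Sampling from the generative side of such a model is straightforward. The mixer scales, the chi-square mixer magnitude, and the per-sample assignment scheme below are invented for the sketch, but they show the key signature of a gaussian scale mixture: raw filter outputs are uncorrelated while their energies are not, because of the shared mixer.

```python
import numpy as np

rng = np.random.default_rng(0)
n_filters, n_samples, n_mixers = 4, 10_000, 3

# Each candidate mixer has its own scale; every sample is probabilistically
# assigned one mixer (uniform assignment, for simplicity).
mixer_scales = np.array([0.5, 1.0, 3.0])
assign = rng.integers(0, n_mixers, n_samples)
v = mixer_scales[assign] * np.sqrt(rng.chisquare(1, n_samples))  # mixer variable

# Local gaussian variables (one per filter), multiplied by the shared mixer.
g = rng.standard_normal((n_filters, n_samples))
x = g * v                                                        # observed inputs

corr_raw = np.corrcoef(x[0], x[1])[0, 1]
corr_energy = np.corrcoef(x[0] ** 2, x[1] ** 2)[0, 1]
print(corr_raw, corr_energy)  # near zero; clearly positive
```

Dividing each `x` by an estimate of `v` (divisive normalization) would recover roughly independent gaussians, which is the connection to cortical models the abstract mentions.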
Rowe, Meredith L; Levine, Susan C; Fisher, Joan A; Goldin-Meadow, Susan
2009-01-01
Children with unilateral pre- or perinatal brain injury (BI) show remarkable plasticity for language learning. Previous work highlights the important role that lesion characteristics play in explaining individual variation in plasticity in the language development of children with BI. The current study examines whether the linguistic input that children with BI receive from their caregivers also contributes to this early plasticity, and whether linguistic input plays a similar role in children with BI as it does in typically developing (TD) children. Growth in vocabulary and syntactic production is modeled for 80 children (53 TD, 27 BI) between 14 and 46 months. Findings indicate that caregiver input is an equally potent predictor of vocabulary growth in children with BI and in TD children. In contrast, input is a more potent predictor of syntactic growth for children with BI than for TD children. Controlling for input, lesion characteristics (lesion size, type, seizure history) also affect the language trajectories of children with BI. Thus, findings illustrate how both variability in the environment (linguistic input) and variability in the organism (lesion characteristics) work together to contribute to plasticity in language learning.
Grayscale Optical Correlator Workbench
NASA Technical Reports Server (NTRS)
Hanan, Jay; Zhou, Hanying; Chao, Tien-Hsin
2006-01-01
Grayscale Optical Correlator Workbench (GOCWB) is a computer program for use in automatic target recognition (ATR). GOCWB performs ATR with an accurate simulation of a hardware grayscale optical correlator (GOC). This simulation is performed to test filters that are created in GOCWB. Thus, GOCWB can be used as a stand-alone ATR software tool or in combination with GOC hardware for building (target training), testing, and optimization of filters. The software is divided into three main parts, denoted filter, testing, and training. The training part is used for assembling training images as input to a filter. The filter part is used for combining training images into a filter and optimizing that filter. The testing part is used for testing new filters and for general simulation of GOC output. The current version of GOCWB relies on MATLAB binaries for matrix operations and fast Fourier transforms. Optimization of filters is based on an algorithm, known as OT-MACH, in which variables specified by the user are parameterized and the best filter is selected on the basis of an average result for correct identification of targets in multiple test images.
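A frequency-domain OT-MACH filter of the general kind GOCWB optimizes can be sketched in a few lines. The alpha/beta/gamma weights, the white-noise simplification, and the Gaussian-blob training images are assumptions for the example, not GOCWB's actual parameterization.

```python
import numpy as np

def ot_mach(train_images, alpha=0.1, beta=0.9, gamma=0.1):
    # OT-MACH filter in the frequency domain, simplified to a white-noise
    # model (C = 1); alpha/beta/gamma trade off noise tolerance, peak
    # sharpness (average power spectrum D) and distortion tolerance (S).
    X = np.stack([np.fft.fft2(im) for im in train_images])
    m = X.mean(axis=0)                        # mean training spectrum
    D = (np.abs(X) ** 2).mean(axis=0)         # average power spectral density
    S = (np.abs(X - m) ** 2).mean(axis=0)     # similarity (spectral variance)
    C = np.ones_like(D)
    return m / (alpha * C + beta * D + gamma * S)

def correlate(im, H):
    # Correlation plane; the peak index encodes the target's circular shift
    # relative to the training position.
    return np.real(np.fft.ifft2(np.fft.fft2(im) * np.conj(H)))

n = 64
yy, xx = np.mgrid[:n, :n]

def blob(cy, cx):
    return np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / 18.0)

# Train on slightly jittered views of a target centred near (32, 32),
# then locate the same target placed at (20, 40).
H = ot_mach([blob(32, 32), blob(31, 33), blob(33, 31)])
plane = correlate(blob(20, 40), H)
peak = np.unravel_index(np.argmax(plane), plane.shape)
print(peak)
```

Sweeping alpha, beta, and gamma and scoring recognition over many test images is the parameter search the abstract describes.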
Astray, G; Soto, B; Lopez, D; Iglesias, M A; Mejuto, J C
2016-01-01
Transit data analysis and artificial neural networks (ANNs) have proven to be useful tools for characterizing and modelling non-linear hydrological processes. In this paper, these methods have been used to characterize and to predict the discharge of the Lor River (North Western Spain) 1, 2 and 3 days ahead. Transit data analyses show a coefficient of correlation of 0.53 for a lag of 1 day between precipitation and discharge. On the other hand, temperature and discharge have a negative coefficient of correlation (-0.43) for a delay of 19 days. The ANNs developed provide good results for the validation period, with R² between 0.92 and 0.80. Furthermore, these prediction models have been tested with discharge data from a period 16 years later. Results of this testing period also show a good correlation, with R² between 0.91 and 0.64. Overall, the results indicate that ANNs are a good tool for predicting river discharge from a small number of input variables.
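The lag analysis reported above (e.g., the 1-day precipitation-discharge lag) amounts to computing a Pearson correlation between one series and a time-shifted copy of the other. A generic sketch, not the authors' code:

```python
import numpy as np

def lagged_correlation(x, y, max_lag):
    """Pearson correlation of y[t] against x[t-k] for each lag k.

    A positive lag k correlates the driver k steps earlier (e.g. precipitation
    k days ago) with today's response (e.g. discharge).
    """
    corrs = {}
    for k in range(max_lag + 1):
        xs = x[: len(x) - k] if k else x
        ys = y[k:]
        corrs[k] = np.corrcoef(xs, ys)[0, 1]
    return corrs
```

The lag with the largest correlation suggests which delayed inputs to feed an ANN.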
NASA Astrophysics Data System (ADS)
Herrera, J. L.; Rosón, G.; Varela, R. A.; Piedracoba, S.
2008-07-01
The key features of the western Galician shelf hydrography and dynamics are analyzed on a solid statistical and experimental basis. The results allowed us to gather together information dispersed among previous oceanographic works on the region. Empirical orthogonal function (EOF) analysis and a canonical correlation analysis were applied to a high-resolution dataset collected from 47 surveys done at a weekly frequency from May 2001 to May 2002. The main results of these analyses are summarized below. Salinity, temperature and the meridional component of the residual current are correlated with the relevant local forcings (the meridional coastal wind component and the continental run-off) and with a remote forcing (the meridional temperature gradient at latitude 37°N). About 80% of the salinity and temperature total variability over the shelf, and 37% of the residual meridional current total variability, are explained by two EOFs for each variable. Up to 22% of the temperature total variability and 14% of the residual meridional current total variability are attributed to the set-up of cross-shore gradients of the thermohaline properties caused by the wind-induced Ekman transport. Up to 11% and 10%, respectively, are related to the variability of the meridional temperature gradient at the Western Iberian Winter Front. About 30% of the temperature total variability can be explained by the development and erosion of the seasonal thermocline and by the seasonal variability of the thermohaline properties of the central waters. This thermocline presented unexpectedly low salinity values due to the trapping during spring and summer of the high continental inputs from the River Miño recorded in 2001. The low-salinity plumes can be traced on the Galician shelf during almost the entire annual cycle; they tend to extend throughout the entire water column under downwelling conditions and concentrate in the surface layer when upwelling-favourable winds blow.
Our evidence points to the meridional temperature gradient as an important factor controlling the thermohaline properties of the central waters and the development and decay of the Iberian Poleward Current.
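EOF analysis of the kind applied here is commonly implemented as a singular value decomposition of the time-anomaly matrix; the squared singular values give the fraction of variance explained by each mode. A minimal generic sketch, not the authors' implementation:

```python
import numpy as np

def eof_analysis(field, n_modes=2):
    """EOF decomposition of a (time x space) data matrix via SVD.

    Returns the leading spatial patterns (EOFs), their principal-component
    time series, and the fraction of total variance explained by each mode.
    """
    anomalies = field - field.mean(axis=0)           # remove the time mean
    U, s, Vt = np.linalg.svd(anomalies, full_matrices=False)
    variance_frac = s**2 / np.sum(s**2)              # explained variance
    pcs = U[:, :n_modes] * s[:n_modes]               # PC time series
    eofs = Vt[:n_modes]                              # spatial patterns
    return eofs, pcs, variance_frac[:n_modes]
```

Statements such as "two EOFs explain about 80% of the variability" correspond to summing the first entries of `variance_frac`.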
Variable input observer for state estimation of high-rate dynamics
NASA Astrophysics Data System (ADS)
Hong, Jonathan; Cao, Liang; Laflamme, Simon; Dodson, Jacob
2017-04-01
High-rate systems operating in the 10 μs to 10 ms timescale are likely to experience damaging effects due to rapid environmental changes (e.g., turbulence, ballistic impact). Some of these systems could benefit from real-time state estimation to reach their full potential. Examples of such systems include blast mitigation strategies, automotive airbag technologies, and hypersonic vehicles. Particular challenges in high-rate state estimation include: 1) complex time-varying nonlinearities of the system (e.g., noise, uncertainty, and disturbance); 2) rapid environmental changes; and 3) the requirement of a high convergence rate. Here, we propose using a Variable Input Observer (VIO) concept to vary the input space as the event unfolds. When systems experience high-rate dynamics, rapid changes in the system occur. To investigate the VIO's potential, a VIO-based neuro-observer is constructed and studied using experimental data collected from a laboratory impact test. Results demonstrate that the input space is unique to different impact conditions, and that adjusting the input space throughout the dynamic event produces better estimates than a traditional fixed-input-space strategy.
Neural Correlates of Sensory Substitution in Vestibular Pathways Following Complete Vestibular Loss
Sadeghi, Soroush G.; Minor, Lloyd B.; Cullen, Kathleen E.
2012-01-01
Sensory substitution is the term typically used in reference to sensory prosthetic devices designed to replace input from one defective modality with input from another modality. Such devices allow an alternative encoding of sensory information that is no longer directly provided by the defective modality in a purposeful and goal-directed manner. The behavioral recovery that follows complete vestibular loss is impressive and has long been thought to take advantage of a natural form of sensory substitution in which head motion information is no longer provided by vestibular inputs, but instead by extra-vestibular inputs such as proprioceptive and motor efference copy signals. Here we examined the neuronal correlates of this behavioral recovery after complete vestibular loss in alert behaving monkeys (Macaca mulatta). We show for the first time that extra-vestibular inputs substitute for the vestibular inputs to stabilize gaze at the level of single neurons in the VOR premotor circuitry. The summed weighting of neck proprioceptive and efference copy information was sufficient to explain simultaneously observed behavioral improvements in gaze stability. Furthermore, by altering the correspondence between intended and actual head movement, we revealed a four-fold increase in the weight of neck motor efference copy signals, consistent with the enhanced behavioral recovery observed when head movements are voluntary rather than unexpected. Thus, taken together, our results provide direct evidence that the substitution by extra-vestibular inputs in vestibular pathways provides a neural correlate for the improvements in gaze stability that are observed following the total loss of vestibular inputs. PMID:23077054
Passive IFF: Autonomous Nonintrusive Rapid Identification of Friendly Assets
NASA Technical Reports Server (NTRS)
Moynihan, Philip; Steenburg, Robert Van; Chao, Tien-Hsin
2004-01-01
A proposed optoelectronic instrument would identify targets rapidly, without the need to radiate an interrogating signal, apply identifying marks to the targets, or equip the targets with transponders. The instrument was conceived as an identification, friend or foe (IFF) system in a battlefield setting, where it would be part of a targeting system for weapons, providing rapid identification for aimed weapons to help in deciding whether and when to trigger them. The instrument could also be adapted to law-enforcement and industrial applications in which it is necessary to rapidly identify objects in view. The instrument would comprise mainly an optical correlator and a neural processor (see figure). The inherent parallel-processing speed and capability of the optical correlator would be exploited to obtain rapid identification of a set of probable targets within a scene of interest and to define regions within the scene for the neural processor to analyze. The neural processor would then concentrate on each region selected by the optical correlator in an effort to identify the target. Depending on whether or not a target was recognized by comparison of its image data with data in an internal database on which the neural processor was trained, the processor would generate an identifying signal (typically, "friend" or "foe"). The time taken for this identification process would be less than the time needed by a human or robotic gunner to acquire a view of, and aim at, a target. An optical correlator that has been under development for several years and that has been demonstrated to be capable of tracking a cruise missile might be considered a prototype of the optical correlator in the proposed IFF instrument. This optical correlator features a 512-by-512-pixel input image frame and operates at an input frame rate of 60 Hz.
It includes a spatial light modulator (SLM) for video-to-optical image conversion, a pair of precise lenses to effect Fourier transforms, a filter SLM for digital-to-optical correlation-filter data conversion, and a charge-coupled device (CCD) for detection of correlation peaks. In operation, the input scene grabbed by a video sensor is streamed into the input SLM. Precomputed correlation-filter data files representative of known targets are then downloaded and sequenced into the filter SLM at a rate of 1,000 Hz. When a match occurs between the input target data and one of the known-target data files, the CCD detects a correlation peak at the location of the target. Distortion-invariant correlation filters from a bank of such filters are then sequenced through the optical correlator for each input frame. The net result is the rapid preliminary recognition of one or a few targets.
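The sequencing of a filter bank against each input frame can be emulated numerically. The sketch below is an assumption-laden digital stand-in: simple energy-normalized matched filters take the place of the distortion-invariant filters, and digital FFTs replace the optical Fourier lenses; the filter producing the strongest correlation peak identifies the best-matching known target.

```python
import numpy as np

def matched_filter(target):
    """Energy-normalized matched filter for one known target (a simple
    stand-in for the distortion-invariant filters described in the text)."""
    X = np.fft.fft2(target)
    return np.conj(X) / np.linalg.norm(X)

def best_match(scene, filter_bank):
    """Emulate sequencing a filter bank against one input frame: return the
    index of the filter producing the strongest correlation peak, plus all
    peak heights (a CCD-side detector would threshold these)."""
    S = np.fft.fft2(scene)
    peaks = [np.abs(np.fft.ifft2(S * H)).max() for H in filter_bank]
    return int(np.argmax(peaks)), peaks
```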
Regional assessment of NLEAP NO3-N leaching indices
Wylie, B.K.; Shaffer, M.J.; Hall, M.D.
1995-01-01
Nonpoint source ground water contamination by nitrate nitrogen (NO3-N) leached from agricultural lands can be substantial and increase health risks to humans and animals. Accurate and rapid methods are needed to identify and map localities that have a high potential for contamination of shallow aquifers with NO3-N leached from agriculture. Evaluation of Nitrate Leaching and Economic Analysis Package (NLEAP) indices and input variables across an irrigated agricultural area on an alluvial aquifer in Colorado indicated that all leaching indices tested were more strongly correlated with aquifer NO3-N concentration than with aquifer N mass. Of the indices and variables tested, the NO3-N Leached (NL) index was the NLEAP index most strongly associated with ground water NO3-N concentration (r2 values from 0.37 to 0.39). NO3-N concentration of the leachate was less well correlated with ground water NO3-N concentration (r2 values from 0.21 to 0.22). Stepwise regression analysis indicated that, although inorganic and organic/inorganic fertilizer scenarios had similar r2 values, the Feedlot Indicator (proximity) variable was significant over and above the NO3-N Leached index for the inorganic scenario. The analysis also showed that combining either the Movement Risk Index (MIRI) or the NO3-N concentration of the leachate with the NO3-N Leached index leads to an improved regression, which provides insight into area-wide associations between agricultural activities and ground water NO3-N concentration.
150 years of ecosystem evolution in the North Sea - from pristine conditions to acidification
NASA Astrophysics Data System (ADS)
Pätsch, Johannes; Lorkowski, Ina; Kühn, Wilfried; Moll, Andreas; Serna, Alexandra
2010-05-01
The 3-D coupled physical-biogeochemical model ECOHAM was applied to the Northwest European Shelf (47°41'-63°53' N, 15°5' W-13°55' E) for the years 1860, 1960 and continuously for the time interval 1970-2006. From stable nitrogen isotope analysis in sediment cores of the German Bight in the southeastern part of the North Sea (inner shelf) we found the period before 1860 unaffected by anthropogenic river inputs of nitrogen. After this period the δ15N ratios significantly increased from ~6 per mil to more than 8 per mil in recent sediments, indicating eutrophication by anthropogenic nitrate mainly from intensive agricultural fertilization. We deduced from the successful simulation of δ15N patterns in recent sediments that during pristine conditions nitrogen loads of the main continental rivers were about 10% of the modern input, while the deposition of inorganic atmospheric nitrogen was 28% of the recent atmospheric flux. The 1960 sediment exhibited similar δ15N values to the recent sediment, which allows the conclusion that eutrophication in the German Bight predates the 1960s period of rapidly increasing river loads. By comparing model results with observational data in the North Sea we analyzed the variability of simulated carbon fluxes (1970-2006) constituting the so-called "shelf pump", which transports atmospheric CO2 via biological fixation, vertical export and advection into the adjacent North Atlantic. Even though the highly variable North Atlantic water inflow, which correlated with the North Atlantic Oscillation Index (NAOI), supplied the northern North Sea with strongly varying nutrient inputs, the interannual variability of the strength of the shelf pump was mainly governed by the variability of the southern basin's biological productivity. The net ecosystem production (NEP) in the southern North Sea varies around zero, inducing CO2 exchange with the atmosphere that is near equilibrium.
In the northern North Sea, the strongly positive near-surface NEP supports CO2 uptake. This sustained uptake decreased the pH in this area by 0.09 units over the simulation period (1970-2006), while in the southern area pH was variable, showing no significant trend. Besides these biologically induced carbon fluxes, physically and chemically driven fluxes were studied. While the former corresponded mainly to SST variations, the latter responded to shifts in the carbonate system. These shifts arise, among other causes, from alkalinity variations induced by the production and dissolution of biogenic calcite on the shelf. We intensively investigated several model approaches for these processes in order to identify those applicable to shelf models.
Hu, Qinglei
2007-10-01
This paper presents a dual-stage control system design method for flexible spacecraft attitude maneuvering control using on-off thrusters, together with active vibration control by input shaping. In this approach, the attitude control system and the vibration suppression were designed separately using a lower-order model. As a stepping stone, an integral variable structure controller, assuming known upper bounds on the mismatched lumped perturbation, was designed to ensure exponential convergence of the attitude angle and angular velocity in the presence of bounded uncertainty/disturbances. To reconstruct estimates of the system states for use in a full-information variable structure control law, an asymptotic variable structure observer is also employed. In addition, the thruster output is modulated in pulse-width pulse-frequency form so that the output profile resembles continuous control histories. To actively suppress the induced vibration, the input shaping technique is used to modify the existing command so that less vibration is caused by the command itself; this requires only information about the vibration frequency and damping of the closed-loop system. The rationale behind this hybrid control scheme is that the integral variable structure controller can achieve good pointing precision even in the presence of uncertainties/disturbances, whereas the shaped input attenuator actively suppresses the undesirable vibrations excited by rapid maneuvers. Simulation results for the spacecraft model show precise attitude control and vibration suppression.
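Input shaping of the kind described, needing only the vibration frequency and damping, is often realized as a two-impulse Zero-Vibration (ZV) shaper: the command is convolved with two impulses whose amplitudes and spacing are set by the modal parameters. The sketch below is a generic ZV construction, not the paper's specific shaper design.

```python
import numpy as np

def zv_shaper(wn, zeta, dt):
    """Two-impulse Zero-Vibration (ZV) input shaper for a flexible mode with
    natural frequency wn [rad/s] and damping ratio zeta; returns the impulse
    amplitudes and the discrete impulse sequence sampled at dt."""
    wd = wn * np.sqrt(1.0 - zeta**2)           # damped natural frequency
    K = np.exp(-zeta * np.pi / np.sqrt(1.0 - zeta**2))
    A = np.array([1.0, K]) / (1.0 + K)         # amplitudes (sum to 1)
    t2 = np.pi / wd                            # second impulse: half damped period
    n = int(round(t2 / dt))
    seq = np.zeros(n + 1)
    seq[0], seq[n] = A[0], A[1]
    return A, seq

def shape_command(command, seq):
    """Convolve a raw command with the shaper impulse sequence."""
    return np.convolve(command, seq)
```

Because the amplitudes sum to one, the shaped command reaches the same steady-state value as the raw command, but with the residual vibration of the targeted mode cancelled.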
A waste characterisation procedure for ADM1 implementation based on degradation kinetics.
Girault, R; Bridoux, G; Nauleau, F; Poullain, C; Buffet, J; Steyer, J-P; Sadowski, A G; Béline, F
2012-09-01
In this study, a procedure accounting for degradation kinetics was developed to split the total COD of a substrate into each input state variable required for Anaerobic Digestion Model No. 1 (ADM1). The procedure is based on the combination of batch experimental degradation tests ("anaerobic respirometry") and numerical interpretation of the results obtained (optimisation of the ADM1 input state variable set). The effects of the main operating parameters, such as the substrate-to-inoculum ratio in batch experiments and the origin of the inoculum, were investigated. Combined with biochemical fractionation of the total COD of substrates, this method enabled determination of an ADM1-consistent input state variable set for each substrate with affordable identifiability. The substrate-to-inoculum ratio in the batch experiments and the origin of the inoculum influenced input state variables. However, based on results modelled for a CSTR fed with the substrate concerned, these effects were not significant. Indeed, if the optimal ranges of these operational parameters are respected, uncertainty in COD fractionation is mainly limited to temporal variability of the properties of the substrates. As the method is based on kinetics and is easy to implement for a wide range of substrates, it is a very promising way to numerically predict the effect of design parameters on the efficiency of an anaerobic CSTR. This method thus promotes the use of modelling for the design and optimisation of anaerobic processes. Copyright © 2012 Elsevier Ltd. All rights reserved.
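The numerical-interpretation step can be illustrated with a toy version of the kinetic fit: assuming a single first-order degradable COD fraction, b(t) = B0 * (1 - exp(-k t)), the fraction size B0 and rate k can be estimated from a batch respirometry curve. This is a minimal sketch under that single-fraction assumption; ADM1 itself distinguishes several COD fractions, each with its own kinetics.

```python
import numpy as np

def fit_first_order(t, b, k_grid=None):
    """Fit cumulative degradation b(t) = B0 * (1 - exp(-k t)) to batch data
    by scanning candidate rates k and solving B0 in closed form (least
    squares) at each candidate."""
    if k_grid is None:
        k_grid = np.linspace(0.01, 2.0, 400)   # illustrative search range [1/d]
    best = (np.inf, None, None)
    for k in k_grid:
        f = 1.0 - np.exp(-k * t)
        B0 = np.dot(f, b) / np.dot(f, f)       # closed-form amplitude
        resid = np.sum((b - B0 * f) ** 2)
        if resid < best[0]:
            best = (resid, B0, k)
    return best[1], best[2]                     # (B0, k)
```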
Dale, Philip S; Tosto, Maria Grazia; Hayiou-Thomas, Marianna E; Plomin, Robert
2015-01-01
There are well-established correlations between parental input style and child language development, which have typically been interpreted as evidence that the input style causes, or influences the rate of, changes in child language. We present evidence from a large twin study (TEDS; 8395 pairs for this report) that there are also likely to be both child-to-parent effects and shared genetic effects on parent and child. Self-reported parental language style at child age 3 and age 4 was aggregated into an 'informal language stimulation' factor and a 'corrective feedback' factor at each age; the former was positively correlated with child language concurrently and longitudinally at 3, 4, and 4.5 years, whereas the latter was weakly and negatively correlated. Both parental input factors were moderately heritable, as was child language. Longitudinal bivariate analysis showed that the correlation between the language stimulation factor and child language was significantly and moderately due to shared genes. There is some suggestive evidence from longitudinal phenotypic analysis that the prediction from parental language stimulation to child language includes both evocative and passive gene-environment correlation, with the latter playing a larger role. The reader will understand why correlations between parental language and rate of child language are by themselves ambiguous, and how twin studies can clarify the relationship. The reader will also understand that, based on the present study, at least two aspects of parental language style - informal language stimulation and corrective feedback - have substantial genetic influence, and that for informal language stimulation, a substantial portion of the prediction to child language represents the effect of shared genes on both parent and child. It will also be appreciated that these basic research findings do not imply that parental language input style is unimportant or that interventions cannot be effective. 
Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.
Variable Delay Element For Jitter Control In High Speed Data Links
Livolsi, Robert R.
2002-06-11
A circuit and method for decreasing the amount of jitter present at the receiver input of high-speed data links. A driver circuit for a high-speed data link comprises a logic circuit having: a first section (1) which provides data latches; a second section (2) which provides a circuit that generates a pre-distorted output to compensate for level-dependent jitter, having an OR function element and a NOR function element, each coupled to two inputs and to a variable delay element that provides a bi-modal delay for pulse-width pre-distortion; a third section (3) which provides a muxing circuit; and a fourth section (4) for clock distribution in the driver circuit. A fifth section is used for logic testing of the driver circuit.
High-resolution, regional-scale crop yield simulations for the Southwestern United States
NASA Astrophysics Data System (ADS)
Stack, D. H.; Kafatos, M.; Medvigy, D.; El-Askary, H. M.; Hatzopoulos, N.; Kim, J.; Kim, S.; Prasad, A. K.; Tremback, C.; Walko, R. L.; Asrar, G. R.
2012-12-01
Over the past few decades, many process-based crop models have been developed with the goal of better understanding the impacts of climate, soils, and management decisions on crop yields. These models simulate the growth and development of crops in response to environmental drivers. Traditionally, process-based crop models have been run at the individual farm level for yield optimization and management scenario testing. Few previous studies have used these models over broader geographic regions, largely due to the lack of the gridded high-resolution meteorological and soil datasets required as inputs for these data-intensive process-based models. In particular, assessment of regional-scale yield variability due to climate change requires high-resolution, regional-scale climate projections, and such projections have been unavailable until recently. The goal of this study was to create a framework for extending the Agricultural Production Systems sIMulator (APSIM) crop model for use at regional scales and to analyze spatial and temporal yield changes in the Southwestern United States (CA, AZ, and NV). Using the scripting language Python, an automated pipeline was developed to link Regional Climate Model (RCM) output with the APSIM crop model, thus creating a one-way nested modeling framework. This framework was used to combine climate, soil, land use, and agricultural management datasets in order to better understand the relationship between climate variability and crop yield at the regional scale. Three different RCMs were used to drive APSIM: OLAM, RAMS, and WRF. Preliminary results suggest that, depending on the model inputs, there is some discrepancy between simulated RCM-driven maize yields and historical yields obtained from the United States Department of Agriculture (USDA). Furthermore, these simulations showed strong non-linear correlations between yield and meteorological drivers, with critical threshold values for some of the inputs (e.g. minimum and maximum temperature) beyond which yields were negatively affected. These results are now being used for further regional-scale yield analysis, as the aforementioned framework is adaptable to multiple geographic regions and crop types.
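A pipeline step of the kind described, translating daily RCM output into APSIM weather input, might look like the following sketch. The column layout is a simplified, assumed version of APSIM's .met weather format (not a full specification), and the site name and path in the usage are hypothetical.

```python
from pathlib import Path

def rcm_to_met(records, site, latitude, out_path):
    """Write daily RCM output records as a simplified APSIM-style .met file.

    records: iterable of dicts with keys year, day, radn (MJ/m^2),
    maxt/mint (deg C), rain (mm). The header and column set here are a
    sketch of the format, not the complete APSIM specification.
    """
    lines = ["[weather.met.weather]",
             f"site = {site}",
             f"latitude = {latitude} (DECIMAL DEGREES)",
             "year day radn maxt mint rain",
             "()   ()  (MJ/m^2) (oC) (oC) (mm)"]
    for r in records:
        lines.append(f"{r['year']} {r['day']} {r['radn']:.1f} "
                     f"{r['maxt']:.1f} {r['mint']:.1f} {r['rain']:.1f}")
    Path(out_path).write_text("\n".join(lines) + "\n")
```

In a full pipeline, one such file would be generated per grid cell and per RCM, then referenced from the corresponding APSIM simulation file.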
Jamali, Mohsen; Mitchell, Diana E; Dale, Alexis; Carriot, Jerome; Sadeghi, Soroush G; Cullen, Kathleen E
2014-04-01
The vestibular system is responsible for processing self-motion, allowing normal subjects to discriminate the direction of rotational movements as slow as 1-2 deg s⁻¹. After unilateral vestibular injury, patients' direction-discrimination thresholds worsen to ∼20 deg s⁻¹, and despite some improvement, thresholds remain substantially elevated following compensation. To date, however, the underlying neural mechanisms of this recovery have not been addressed. Here, we recorded from first-order central neurons in the macaque monkey that provide vestibular information to higher brain areas for self-motion perception. Immediately following unilateral labyrinthectomy, neuronal detection thresholds increased more than two-fold (from 14 to 30 deg s⁻¹). While thresholds showed slight improvement by week 3 (25 deg s⁻¹), they never recovered to control values - a trend mirroring the time course of perceptual thresholds in patients. We further discovered that changes in neuronal response variability paralleled changes in sensitivity to vestibular stimulation during compensation, thereby causing detection thresholds to remain elevated over time. However, we found that in a subset of neurons, the emergence of neck proprioceptive responses combined with residual vestibular modulation during head-on-body motion led to better neuronal detection thresholds. Taken together, our results emphasize that increases in response variability to vestibular inputs ultimately constrain neural thresholds, and provide evidence that sensory substitution with extravestibular (i.e. proprioceptive) inputs at the first central stage of vestibular processing is a neural substrate for improvements in self-motion perception following vestibular loss. Thus, our results provide a neural correlate for the patient benefits provided by rehabilitative strategies that take advantage of the convergence of these multisensory cues.
A liquid lens switching-based motionless variable fiber-optic delay line
NASA Astrophysics Data System (ADS)
Khwaja, Tariq Shamim; Reza, Syed Azer; Sheikh, Mumtaz
2018-05-01
We present a Variable Fiber-Optic Delay Line (VFODL) module capable of imparting long variable delays by switching an input optical/RF signal between Single Mode Fiber (SMF) patch cords of different lengths through a pair of Electronically Controlled Tunable Lenses (ECTLs), resulting in polarization-independent operation. Depending on the intended application, the lengths of the SMFs can be chosen to achieve the desired VFODL operating dynamic range. If so desired, the state of the input signal polarization can be preserved with the use of commercially available polarization-independent ECTLs along with polarization-maintaining SMFs (PM-SMFs), resulting in an output polarization that is identical to the input. An ECTL-based design also improves power consumption and repeatability. The delay switching mechanism is electronically controlled, involves no bulk moving parts, and can be fully automated. The VFODL module is compact due to the use of small optical components and SMFs that can be packaged compactly.
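The delay offered by each SMF patch cord follows directly from its length and group index, delay = n_g * L / c; switching between cords therefore changes the delay in discrete steps. A small sketch (the group-index value is a typical figure for standard SMF near 1550 nm, assumed rather than taken from the paper):

```python
def fiber_delay_ns(length_m, group_index=1.468):
    """Propagation delay of an SMF patch cord in nanoseconds.

    delay = n_g * L / c; group_index ~ 1.468 is a typical value for
    standard single-mode fiber near 1550 nm (an assumed figure)."""
    c = 299_792_458.0            # speed of light in vacuum, m/s
    return group_index * length_m / c * 1e9

# Switching between, say, 1 m and 11 m patch cords changes the delay by the
# propagation time of the extra 10 m of fiber (roughly 49 ns):
step = fiber_delay_ns(11.0) - fiber_delay_ns(1.0)
```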
Sensitivity analysis and nonlinearity assessment of steam cracking furnace process
NASA Astrophysics Data System (ADS)
Rosli, M. N.; Sudibyo, Aziz, N.
2017-11-01
In this paper, sensitivity analysis and nonlinearity assessment of a steam cracking furnace process are presented. For the sensitivity analysis, the fractional factorial design method is employed to analyze the effect of the input parameters, which consist of four manipulated variables and two disturbance variables, on the output variables, and to identify the interactions between parameters. The result of the factorial design is used for screening to reduce the number of parameters and, subsequently, the complexity of the model. It shows that, of the six input parameters, four are significant. After the screening is completed, step tests are performed on the significant input parameters to assess the degree of nonlinearity of the system. The result shows that the system is highly nonlinear with respect to changes in the air-to-fuel ratio (AFR) and the feed composition.
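A two-level fractional factorial design for six factors, as used here for screening, can be built from a 2^4 full factorial plus two generator columns. The sketch below is a generic 2^(6-2) construction with generators E = ABC and F = BCD (the paper's actual generators are not stated), together with the usual main-effect estimate.

```python
import itertools
import numpy as np

def fractional_factorial_2_6_2():
    """16-run two-level fractional factorial design for six factors
    (2^(6-2), generators E = ABC, F = BCD), coded as +/-1."""
    base = np.array(list(itertools.product([-1, 1], repeat=4)))  # A, B, C, D
    A, B, C, D = base.T
    E = A * B * C                     # generator columns
    F = B * C * D
    return np.column_stack([A, B, C, D, E, F])

def main_effects(design, y):
    """Main effect of each factor: mean response at +1 minus mean at -1."""
    return np.array([y[design[:, j] == 1].mean() - y[design[:, j] == -1].mean()
                     for j in range(design.shape[1])])
```

Factors with effects close to zero are screened out, shrinking the set of parameters carried into subsequent modeling.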
New Microwave-Based Missions Applications for Rainfed Crops Characterization
NASA Astrophysics Data System (ADS)
Sánchez, N.; Lopez-Sanchez, J. M.; Arias-Pérez, B.; Valcarce-Diñeiro, R.; Martínez-Fernández, J.; Calvo-Heras, J. M.; Camps, A.; González-Zamora, A.; Vicente-Guijalba, F.
2016-06-01
A multi-temporal/multi-sensor field experiment was conducted within the Soil Moisture Measurement Stations Network of the University of Salamanca (REMEDHUS) in Spain, in order to retrieve useful information from satellite Synthetic Aperture Radar (SAR) and upcoming Global Navigation Satellite System Reflectometry (GNSS-R) missions. The objective of the experiment was first to identify which radar observables are most sensitive to the development of crops, and then to define which crop parameters most affect the radar signal. A wide set of radar variables (backscattering coefficients and polarimetric indicators) acquired by Radarsat-2 were analyzed and then exploited to determine variables characterizing the crops. Field measurements were taken fortnightly at seven cereal plots between February and July 2015. This work also tried to optimize the crop characterization through Landsat-8 estimations, testing and validating parameters such as the leaf area index, the fraction of vegetation cover and the vegetation water content, among others. Some of these parameters showed significant and relevant correlations with the Landsat-derived Normalized Difference Vegetation Index (R>0.60). Regarding the radar observables, the parameters best characterized were biomass and height, which may be explored for inversion using SAR data as input. Moreover, the differences in the correlations found for the different crop types under study suggest a way toward a feasible classification of crops.
Freshwater control of ice-rafted debris in the last glacial period at Mono Lake, California, USA
NASA Astrophysics Data System (ADS)
Zimmerman, Susan R. H.; Pearl, Crystal; Hemming, Sidney R.; Tamulonis, Kathryn; Hemming, N. Gary; Searle, Stephanie Y.
2011-09-01
The type section silts of the late Pleistocene Wilson Creek Formation at Mono Lake contain outsized clasts, dominantly well-rounded pebbles and cobbles of Sierran lithologies. Lithic grains > 425 μm show a pattern of variability similar to that of the > 10 mm clasts visible in the type section, with decreasing absolute abundance in southern and eastern outcrops. The largest concentrations of ice-rafted debris (IRD) occur at 67-57 ka and 46-32 ka, with strong millennial-scale variability, while little IRD is found during the last glacial maximum and deglaciation. Stratigraphic evidence for high lake level during high IRD intervals, and a lack of geomorphic evidence for coincidence of lake and glaciers, strongly suggest that rafting was by shore ice rather than icebergs. Correspondence of carbonate flux and IRD implies that both were mainly controlled by freshwater input, rather than disparate non-climatic controls. Conversely, the lack of IRD during the last glacial maximum and deglacial highstands may relate to secondary controls such as perennial ice cover or sediment supply. High IRD at Mono Lake corresponds to low glacial flour flux in Owens Lake, both correlative to high warm-season insolation. High-resolution, extra-basinal correlation of the millennial peaks awaits greatly improved age models for both records.
Spatial patterns of mixing in the Solomon Sea
NASA Astrophysics Data System (ADS)
Alberty, M. S.; Sprintall, J.; MacKinnon, J.; Ganachaud, A.; Cravatte, S.; Eldin, G.; Germineaud, C.; Melet, A.
2017-05-01
The Solomon Sea is a marginal sea in the southwest Pacific that connects subtropical and equatorial circulation, constricting transport of South Pacific Subtropical Mode Water and Antarctic Intermediate Water through its deep, narrow channels. Marginal sea topography inhibits internal waves from propagating out and into the open ocean, making these regions hot spots for energy dissipation and mixing. Data from two hydrographic cruises and from Argo profiles are employed to indirectly infer mixing from observations for the first time in the Solomon Sea. Thorpe and finescale methods indirectly estimate the rate of dissipation of kinetic energy (ɛ) and indicate that it is maximum in the surface and thermocline layers and decreases by 2-3 orders of magnitude by 2000 m depth. Estimates of diapycnal diffusivity from the observations and a simple diffusive model agree in magnitude but have different depth structures, likely reflecting the combined influence of both diapycnal mixing and isopycnal stirring. Spatial variability of ɛ is large, spanning at least 2 orders of magnitude within isopycnal layers. Seasonal variability of ɛ reflects regional monsoonal changes in large-scale oceanic and atmospheric conditions with ɛ increased in July and decreased in March. Finally, tide power input and topographic roughness are well correlated with mean spatial patterns of mixing within intermediate and deep isopycnals but are not clearly correlated with thermocline mixing patterns.
A radial map of multi-whisker correlation selectivity in the rat barrel cortex
Estebanez, Luc; Bertherat, Julien; Shulz, Daniel E.; Bourdieu, Laurent; Léger, Jean-François
2016-01-01
In the barrel cortex, several features of single-whisker stimuli are organized in functional maps. The barrel cortex also encodes spatio-temporal correlation patterns of multi-whisker inputs, but so far the cortical mapping of neurons tuned to such input statistics is unknown. Here we report that layer 2/3 of the rat barrel cortex contains an additional functional map based on neuronal tuning to correlated versus uncorrelated multi-whisker stimuli: neuron responses to uncorrelated multi-whisker stimulation are strongest above barrel centres, whereas neuron responses to correlated and anti-correlated multi-whisker stimulation peak above the barrel–septal borders, forming rings of multi-whisker synchrony-preferring cells. PMID:27869114
Entangling the Whole by Beam Splitting a Part.
Croal, Callum; Peuntinger, Christian; Chille, Vanessa; Marquardt, Christoph; Leuchs, Gerd; Korolkova, Natalia; Mišta, Ladislav
2015-11-06
A beam splitter is a basic linear optical element appearing in many optics experiments and is frequently used as a continuous-variable entangler transforming a pair of input modes from a separable Gaussian state into an entangled state. However, a beam splitter is a passive operation that can create entanglement from Gaussian states only under certain conditions. One such condition is that the input light is suitably squeezed. We demonstrate, experimentally, that a beam splitter can create entanglement even from modes which do not possess such a squeezing provided that they are correlated to, but not entangled with, a third mode. Specifically, we show that a beam splitter can create three-mode entanglement by acting on two modes of a three-mode fully separable Gaussian state without entangling the two modes themselves. This beam splitter property is a key mechanism behind the performance of the protocol for entanglement distribution by separable states. Moreover, the property also finds application in collaborative quantum dense coding in which decoding of transmitted information is assisted by interference with a mode of the collaborating party.
NASA Astrophysics Data System (ADS)
Palenčár, Rudolf; Sopkuliak, Peter; Palenčár, Jakub; Ďuriš, Stanislav; Suroviak, Emil; Halaj, Martin
2017-06-01
Evaluation of the uncertainties of temperature measurement by a standard platinum resistance thermometer calibrated at the defining fixed points according to ITS-90 is a problem that can be solved in different ways. The paper presents a procedure based on the propagation of distributions using the Monte Carlo method. The procedure employs generation of pseudo-random numbers for the input variables of resistances at the defining fixed points, supposing a multivariate Gaussian distribution for the input quantities. This makes it possible to take into account the correlations among resistances at the defining fixed points. The assumption of a Gaussian probability density function is acceptable, given the several sources of uncertainty in the resistances. In the case of uncorrelated resistances at the defining fixed points, the method is applicable to any probability density function. Validation of the law of propagation of uncertainty by the Monte Carlo method is presented using specific data for a 25 Ω standard platinum resistance thermometer in the temperature range from 0 to 660 °C. Using this example, we demonstrate the suitability of the method by validating its results.
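A minimal sketch of the propagation-of-distributions procedure, assuming illustrative (not the paper's) fixed-point resistances and correlations: correlated multivariate-Gaussian draws are pushed through a toy resistance-ratio model, and the Monte Carlo spread is checked against the law of propagation of uncertainty (LPU) with the covariance term included.

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative values only: resistances (ohms) at three defining fixed points
# with correlated calibration uncertainties (not the paper's actual data).
mean = np.array([25.0, 27.5, 31.2])
std = np.array([1.0e-4, 1.2e-4, 1.5e-4])
corr = np.array([[1.0, 0.8, 0.6],
                 [0.8, 1.0, 0.8],
                 [0.6, 0.8, 1.0]])
cov = np.outer(std, std) * corr

# Propagation of distributions: draw correlated multivariate-Gaussian inputs
# and push each draw through the measurement model (here a toy ratio W = R1/R0).
samples = rng.multivariate_normal(mean, cov, size=200_000)
W = samples[:, 1] / samples[:, 0]
w_mean, w_std = W.mean(), W.std(ddof=1)

# Law of propagation of uncertainty for W = R1/R0, including the covariance term:
lpu_var = (cov[1, 1] / mean[0]**2
           + cov[0, 0] * mean[1]**2 / mean[0]**4
           - 2.0 * cov[0, 1] * mean[1] / mean[0]**3)
```

The negative covariance term shows why ignoring input correlations would overstate the uncertainty of the ratio in this example.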
Are Subject-Specific Musculoskeletal Models Robust to the Uncertainties in Parameter Identification?
Valente, Giordano; Pitto, Lorenzo; Testi, Debora; Seth, Ajay; Delp, Scott L.; Stagni, Rita; Viceconti, Marco; Taddei, Fulvia
2014-01-01
Subject-specific musculoskeletal modeling can be applied to study musculoskeletal disorders, allowing inclusion of personalized anatomy and properties. Independent of the tools used for model creation, there are unavoidable uncertainties associated with parameter identification, whose effect on model predictions is still not fully understood. The aim of the present study was to analyze the sensitivity of subject-specific model predictions (i.e., joint angles, joint moments, muscle and joint contact forces) during walking to the uncertainties in the identification of body landmark positions, maximum muscle tension and musculotendon geometry. To this aim, we created an MRI-based musculoskeletal model of the lower limbs, defined as a 7-segment, 10-degree-of-freedom articulated linkage, actuated by 84 musculotendon units. We then performed a Monte-Carlo probabilistic analysis perturbing model parameters according to their uncertainty, and solving a typical inverse dynamics and static optimization problem using 500 models that included the different sets of perturbed variable values. Model creation and gait simulations were performed by using freely available software that we developed to standardize the process of model creation, integrate with OpenSim and create probabilistic simulations of movement. The uncertainties in input variables had a moderate effect on model predictions, as muscle and joint contact forces showed maximum standard deviation of 0.3 times body-weight and maximum range of 2.1 times body-weight. In addition, the output variables significantly correlated with few input variables (up to 7 out of 312) across the gait cycle, including the geometry definition of larger muscles and the maximum muscle tension in limited gait portions. Although we found subject-specific models not markedly sensitive to parameter identification, researchers should be aware of the model precision in relation to the intended application. 
In fact, force predictions could be affected by an uncertainty of the same order of magnitude as the predicted value, although this condition has a low probability of occurring. PMID:25390896
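The perturb-and-correlate step can be sketched as follows, with a hypothetical five-parameter toy model standing in for the musculoskeletal simulation (the parameter names and coefficients are my assumptions): 500 perturbed models are evaluated and each input is scored by its Pearson correlation with the output.

```python
import numpy as np

rng = np.random.default_rng(1)
n_models = 500

# Hypothetical perturbed inputs; only two actually drive the toy
# "contact force" output (names are illustrative, not the study's).
names = ["landmark_z", "max_tension", "via_point_x", "landmark_x", "tendon_len"]
X = rng.normal(0.0, 1.0, size=(n_models, 5))

def contact_force(x):
    # Toy model: the output depends mainly on max_tension and tendon_len.
    return 2.0 * x[1] - 1.2 * x[4] + 0.1 * rng.normal()

y = np.array([contact_force(x) for x in X])

# Pearson correlation of each perturbed input with the output across all models.
r = np.array([np.corrcoef(X[:, i], y)[0, 1] for i in range(5)])
influential = [n for n, ri in zip(names, np.abs(r)) if ri > 0.3]
```

Only the inputs that genuinely drive the output clear the correlation threshold; the rest stay near the sampling-noise floor.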
On the significance of δ13C correlations in ancient sediments
NASA Astrophysics Data System (ADS)
Derry, Louis A.
2010-08-01
A graphical analysis of the correlations between δc and ɛTOC was introduced by Rothman et al. (2003) to obtain estimates of the carbon isotopic composition of inputs to the oceans and the organic carbon burial fraction. Applied to Cenozoic data, the method agrees with independent estimates, but with Neoproterozoic data the method yields results that cannot be accommodated with standard models of sedimentary carbon isotope mass balance. We explore the sensitivity of the graphical correlation method and find that the variance ratio between δc and δo is an important control on the correlation of δc and ɛ. If the variance ratio σc/σo ≥ 1, highly correlated arrays very similar to those obtained from the data are produced from independent random variables. The Neoproterozoic data show such variance patterns, and the regression parameters for the Neoproterozoic data are statistically indistinguishable from the randomized model at the 95% confidence interval. Under these circumstances, the projection of the data into δc-ɛ space cannot distinguish between signal and noise, such as post-depositional alteration. There appears to be no need to invoke unusual carbon cycle dynamics to explain the Neoproterozoic δc-ɛ array. The Cenozoic data have σc/σo < 1 and the δc vs. ɛ correlation is probably geologically significant, but the analyzed sample size is too small to yield statistically significant results.
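The variance-ratio effect can be illustrated with independent random records: taking ɛ ≈ δc − δo as a simplified stand-in for the fractionation relation, a δc record with larger variance than δo correlates strongly with ɛ even when the two records are pure noise. The standard deviations below are illustrative, not the paper's data.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 500

def corr_dc_eps(sigma_c, sigma_o):
    # Purely random, statistically independent carbonate and organic records.
    dc = rng.normal(0.0, sigma_c, n)
    do = rng.normal(0.0, sigma_o, n)
    eps = dc - do                    # simplified stand-in for epsilon_TOC
    return np.corrcoef(dc, eps)[0, 1]

r_noise_high = corr_dc_eps(sigma_c=4.0, sigma_o=2.0)  # variance ratio > 1
r_noise_low = corr_dc_eps(sigma_c=0.5, sigma_o=2.0)   # variance ratio < 1
```

For independent records the expected correlation is σc/√(σc² + σo²), so a variance ratio ≥ 1 alone yields r ≳ 0.7 with no geological signal at all.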
Šiljić Tomić, Aleksandra; Antanasijević, Davor; Ristić, Mirjana; Perić-Grujić, Aleksandra; Pocajt, Viktor
2018-04-01
This paper presents an application of experimental design for the optimization of an artificial neural network (ANN) for the prediction of dissolved oxygen (DO) content in the Danube River. The aim of this research was to obtain a more reliable ANN model that uses fewer monitoring records, by simultaneous optimization of the following model parameters: number of monitoring sites, number of historical monitoring data (expressed in years), and number of input water quality parameters used. A Box-Behnken three-factor, three-level experimental design was applied for simultaneous spatial, temporal, and input variable optimization of the ANN model. The prediction of DO was performed using a feed-forward back-propagation neural network (BPNN), while the selection of the most important inputs was done off-model using a multi-filter approach that combines a chi-square ranking in the first step with a correlation-based elimination in the second step. The contour plots of absolute and relative error response surfaces were utilized to determine the optimal values of the design factors. From the contour plots, two BPNN models that cover the entire Danube flow through Serbia are proposed: an upstream model (BPNN-UP) that covers 8 monitoring sites prior to Belgrade and uses 12 inputs measured in a 7-year period, and a downstream model (BPNN-DOWN) which covers 9 monitoring sites and uses 11 input parameters measured in a 6-year period. The main difference between the two models is that BPNN-UP utilizes inputs such as BOD, P, and PO₄³⁻, which is in accordance with the fact that this model covers the northern part of Serbia (Vojvodina Autonomous Province), well-known for agricultural production and extensive use of fertilizers. Both models have shown very good agreement between measured and predicted DO (with R² ≥ 0.86) and demonstrated that they can effectively forecast DO content in the Danube River.
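A sketch of the Box-Behnken construction used here: for k factors, every pair of factors takes the four ±1 corner combinations while the remaining factors sit at their mid-level (0), plus replicated center runs. For three factors this gives the familiar 12-run edge set; the number of center runs below is an assumption.

```python
from itertools import combinations

def box_behnken(k, center_runs=3):
    """Three-level design: +/-1 on each factor pair, 0 elsewhere, plus centers."""
    runs = []
    for i, j in combinations(range(k), 2):
        for a in (-1, 1):
            for b in (-1, 1):
                row = [0] * k
                row[i], row[j] = a, b
                runs.append(tuple(row))
    runs.extend([(0,) * k] * center_runs)   # replicated center points
    return runs

design = box_behnken(3)   # 12 edge runs + 3 center runs = 15 experiments
```

Fifteen experiments suffice to fit a full quadratic response surface in three factors, which is why the design is attractive when each "run" is an expensive ANN training.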
Key drivers of precipitation isotopes in Windhoek, Namibia (2012-2016)
NASA Astrophysics Data System (ADS)
Kaseke, K. F.; Wang, L.; Wanke, H.
2017-12-01
Southern African climate is characterized by large variability, with precipitation model estimates varying by as much as 70% during summer. This difference between model estimates is partly because most models associate precipitation over Southern Africa with moisture inputs from the Indian Ocean while excluding inputs from the Atlantic Ocean. However, growing evidence suggests that the Atlantic Ocean may also contribute significant amounts of moisture to the region. This four-year (2012-2016) study investigates the isotopic composition (δ18O, δ2H and δ17O) of individual precipitation events, the key drivers of isotope variations, and the origins of precipitation experienced in Windhoek, Namibia. Results indicate large storm-to-storm isotopic variability in δ18O (25‰), δ2H (180‰) and δ17O (13‰) over the study period. Univariate analysis showed significant correlations between event precipitation isotopes and local meteorological parameters: lifted condensation level, relative humidity (RH), precipitation amount, average wind speed, and surface and air temperature (p < 0.05). The number of significant correlations between local meteorological parameters and monthly isotopes was much lower, suggesting loss of information through data aggregation. Nonetheless, the most significant isotope driver at both event and monthly scales was RH, consistent with the semi-arid classification of the site. Multiple linear regression analysis suggested RH, precipitation amount and air temperature were the most significant local drivers of precipitation isotopes, accounting for about 50% of the variation, implying that about 50% could be attributed to source origins. HYSPLIT trajectories indicated that 78% of precipitation originated from the Indian Ocean while 21% originated from the Atlantic Ocean. 
Given that three of the four study years were droughts while two of the three drought years were El Niño related, our data also suggests that δ'17O-δ'18O could be a useful tool to differentiate local vs synoptic (El Niño) droughts.
Method and system to estimate variables in an integrated gasification combined cycle (IGCC) plant
Kumar, Aditya; Shi, Ruijie; Dokucu, Mustafa
2013-09-17
System and method to estimate variables in an integrated gasification combined cycle (IGCC) plant are provided. The system includes a sensor suite to measure respective plant input and output variables. An extended Kalman filter (EKF) receives sensed plant input variables and includes a dynamic model to generate a plurality of plant state estimates and a covariance matrix for the state estimates. A preemptive-constraining processor is configured to preemptively constrain the state estimates and covariance matrix to be free of constraint violations. A measurement-correction processor may be configured to correct constrained state estimates and a constrained covariance matrix based on processing of sensed plant output variables. The measurement-correction processor is coupled to update the dynamic model with corrected state estimates and a corrected covariance matrix. The updated dynamic model may be configured to estimate values for at least one plant variable not originally sensed by the sensor suite.
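A minimal sketch of the estimate/constrain/correct cycle on a toy linear two-state plant. This is a plain Kalman filter with bound clipping as a simplified stand-in for the patent's preemptive-constraining processor; the matrices, noise levels, and bounds are illustrative assumptions, not the IGCC plant model.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy linear two-state plant; all values are illustrative.
A = np.array([[0.95, 0.10],
              [0.00, 0.90]])
H = np.array([[1.0, 0.0]])            # only the first state is sensed
Q = np.eye(2) * 1e-4                  # process noise covariance
R = np.array([[1e-2]])                # measurement noise covariance
lo, hi = np.zeros(2), np.ones(2)      # physical bounds on the states

def step(x, P, z):
    # 1) predict state estimate and covariance
    x, P = A @ x, A @ P @ A.T + Q
    # 2) preemptively constrain the estimate to its physical bounds
    x = np.clip(x, lo, hi)
    # 3) measurement correction using the sensed plant output
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + (K @ (z - H @ x)).ravel()
    P = (np.eye(2) - K @ H) @ P
    return np.clip(x, lo, hi), P      # constrain again after the update

x, P = np.array([0.5, 0.5]), np.eye(2) * 0.1
truth = np.array([0.8, 0.2])
for _ in range(50):
    truth = A @ truth
    z = H @ truth + rng.normal(0.0, 0.1, size=1)
    x, P = step(x, P, z)
```

The corrected estimate of the unsensed second state is the sketch's analogue of "estimating a plant variable not originally sensed by the sensor suite."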
Functional transformations of odor inputs in the mouse olfactory bulb.
Adam, Yoav; Livneh, Yoav; Miyamichi, Kazunari; Groysman, Maya; Luo, Liqun; Mizrahi, Adi
2014-01-01
Sensory inputs from the nasal epithelium to the olfactory bulb (OB) are organized as a discrete map in the glomerular layer (GL). This map is then modulated by distinct types of local neurons and transmitted to higher brain areas via mitral and tufted cells. Little is known about the functional organization of the circuits downstream of glomeruli. We used in vivo two-photon calcium imaging for large-scale functional mapping of distinct neuronal populations in the mouse OB, at single cell resolution. Specifically, we imaged odor responses of mitral cells (MCs), tufted cells (TCs) and glomerular interneurons (GL-INs). Mitral cell population activity was heterogeneous and only mildly correlated with the olfactory receptor neuron (ORN) inputs, supporting the view that discrete input maps undergo significant transformations at the output level of the OB. In contrast, population activity profiles of TCs were dense, and highly correlated with the odor inputs in both space and time. Glomerular interneurons were also highly correlated with the ORN inputs, but showed higher activation thresholds, suggesting that these neurons are driven by strongly activated glomeruli. Temporally, upon persistent odor exposure, TCs quickly adapted. In contrast, both MCs and GL-INs showed diverse temporal response patterns, suggesting that GL-INs could contribute to the transformations MCs undergo at slow time scales. Our data suggest that sensory odor maps are transformed by TCs and MCs in different ways, forming two distinct and parallel information streams.
Metamodeling and mapping of nitrate flux in the unsaturated zone and groundwater, Wisconsin, USA
Nolan, Bernard T.; Green, Christopher T.; Juckem, Paul F.; Liao, Lixia; Reddy, James E.
2018-01-01
Nitrate contamination of groundwater in agricultural areas poses a major challenge to the sustainability of water resources. Aquifer vulnerability models are useful tools that can help resource managers identify areas of concern, but quantifying nitrogen (N) inputs in such models is challenging, especially at large spatial scales. We sought to improve regional nitrate (NO3−) input functions by characterizing unsaturated zone NO3− transport to groundwater through use of surrogate, machine-learning metamodels of a process-based N flux model. The metamodels used boosted regression trees (BRTs) to relate mappable landscape variables to parameters and outputs of a previous “vertical flux method” (VFM) applied at sampled wells in the Fox, Wolf, and Peshtigo (FWP) river basins in northeastern Wisconsin. In this context, the metamodels upscaled the VFM results throughout the region, and the VFM parameters and outputs are the metamodel response variables. The study area encompassed the domain of a detailed numerical model that provided additional predictor variables, including groundwater recharge, to the metamodels. We used a statistical learning framework to test a range of model complexities to identify suitable hyperparameters of the six BRT metamodels corresponding to each response variable of interest: NO3− source concentration factor (which determines the local NO3− input concentration); unsaturated zone travel time; NO3− concentration at the water table in 1980, 2000, and 2020 (three separate metamodels); and NO3− “extinction depth”, the eventual steady-state depth of the NO3− front. The final metamodels were trained to 129 wells within the active numerical flow model area, and considered 58 mappable predictor variables compiled in a geographic information system (GIS). These metamodels had training and cross-validation testing R² values of 0.52-0.86 and 0.22-0.38, respectively, and predictions were compiled as maps of the above response variables.
Testing performance was reasonable, considering that we limited the metamodel predictor variables to mappable factors as opposed to using all available VFM input variables. Relationships between metamodel predictor variables and mapped outputs were generally consistent with expectations, e.g. with greater source concentrations and NO3− at the groundwater table in areas of intensive crop use and well drained soils. Shorter unsaturated zone travel times in poorly drained areas likely indicated preferential flow through clay soils, and a tendency for fine grained deposits to collocate with areas of shallower water table. Numerical estimates of groundwater recharge were important in the metamodels and may have been a proxy for N input and redox conditions in the northern FWP, which had shallow predicted NO3− extinction depth. The metamodel results provide proof-of-concept for regional characterization of unsaturated zone NO3− transport processes in a statistical framework based on readily mappable GIS input variables.
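To illustrate the metamodeling idea, the sketch below fits a tiny boosted-regression-tree surrogate (depth-1 stumps with least-squares boosting) that maps two hypothetical landscape predictors to a toy process-model response. It is a from-scratch stand-in for the BRT machinery, not the study's implementation, and the data-generating function is my assumption.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "metamodel" setting: two mappable predictors, response produced by a
# hypothetical process model (a step effect plus a linear trend plus noise).
X = rng.uniform(0.0, 1.0, size=(300, 2))
y = 3.0 * (X[:, 0] > 0.5) + X[:, 1] + rng.normal(0.0, 0.1, 300)

def fit_stump(X, r):
    # Best single-split regression stump on the residual r (least squares).
    best = None
    for j in range(X.shape[1]):
        for t in np.quantile(X[:, j], np.linspace(0.1, 0.9, 9)):
            left = X[:, j] <= t
            if left.all() or not left.any():
                continue
            pred = np.where(left, r[left].mean(), r[~left].mean())
            sse = ((r - pred) ** 2).sum()
            if best is None or sse < best[0]:
                best = (sse, j, t, r[left].mean(), r[~left].mean())
    return best[1:]

def boost(X, y, n_trees=100, lr=0.1):
    # Gradient boosting for squared error: fit each stump to the residual.
    f = np.full(len(y), y.mean())
    stumps = []
    for _ in range(n_trees):
        j, t, vl, vr = fit_stump(X, y - f)
        f = f + lr * np.where(X[:, j] <= t, vl, vr)
        stumps.append((j, t, vl, vr))
    return y.mean(), stumps

def predict(model, X, lr=0.1):
    f0, stumps = model
    f = np.full(len(X), f0)
    for j, t, vl, vr in stumps:
        f = f + lr * np.where(X[:, j] <= t, vl, vr)
    return f

model = boost(X, y)
r2 = 1 - ((y - predict(model, X)) ** 2).sum() / ((y - y.mean()) ** 2).sum()
```

Once trained, such a surrogate can be evaluated cheaply on every GIS grid cell, which is the upscaling step the study describes.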
NASA Technical Reports Server (NTRS)
Briggs, Maxwell; Schifer, Nicholas
2011-01-01
Test hardware used to validate net heat prediction models. Problem: Net Heat Input cannot be measured directly during operation. Net heat input is a key parameter needed in prediction of efficiency for convertor performance. Efficiency = Electrical Power Output (Measured) divided by Net Heat Input (Calculated). Efficiency is used to compare convertor designs and trade technology advantages for mission planning.
Fichez, R; Chifflet, S; Douillet, P; Gérard, P; Gutierrez, F; Jouon, A; Ouillon, S; Grenz, C
2010-01-01
Considering the growing concern about the impact of anthropogenic inputs on coral reefs and coral reef lagoons, surprisingly little attention has been given to the relationship between those inputs and the trophic status of lagoon waters. The present paper describes the distribution of biogeochemical parameters in the coral reef lagoon of New Caledonia, where environmental conditions allegedly range from pristine oligotrophic to anthropogenically influenced. The study objectives were to: (i) identify terrigeneous and anthropogenic inputs and propose a typology of lagoon waters, (ii) determine temporal variability of water biogeochemical parameters at time-scales ranging from hours to seasons. Combined ACP-cluster analyses revealed that over the 2000 km(2) lagoon area around the city of Nouméa, "natural" terrigeneous versus oceanic influences affecting all stations only accounted for less than 20% of the spatial variability, whereas 60% of that spatial variability could be attributed to significant eutrophication of a limited number of inshore stations. ACP analysis allowed unambiguous discrimination between the natural trophic enrichment along the offshore-inshore gradient and anthropogenically induced eutrophication. High temporal variability in dissolved inorganic nutrient concentrations strongly hindered their use as indicators of environmental status. Due to longer turnover time, particulate organic material, and more specifically chlorophyll a, appeared as a more reliable nonconservative tracer of trophic status. Results further provided evidence that ENSO occurrences might temporarily lower the trophic status of the New Caledonia lagoon. It is concluded that, due to such high frequency temporal variability, the use of biogeochemical parameters in environmental surveys requires adapted sampling strategies, data management and environmental alert methods. Copyright (c) 2010 Elsevier Ltd. All rights reserved.
The Surface Velocity Structure of the Florida Current in a Jet Coordinate Frame
NASA Astrophysics Data System (ADS)
Archer, Matthew R.; Shay, Lynn K.; Johns, William E.
2017-11-01
The structure and variability of the Florida Current between 25° and 26°N are investigated using HF radar ocean current measurements to provide the most detailed view of the surface jet to date. A 2-D jet coordinate analysis is performed to define lateral displacements of the jet in time (meandering), and associated structural variations over a 2 year period (2005-2006). In the jet coordinate frame, core speed has a median value of ~160 cm s-1 at the central latitude of the array (25.4°N), with a standard deviation (STD) of 35 cm s-1. The jet meanders at timescales of 3-30 days, with a STD of 8 km, and a downstream phase speed of ~80 km d-1. Meandering accounts for ~45% of eddy kinetic energy computed in a fixed (geographical) reference frame. Core speed, width, and shear undergo the same dominant 3-30 day variability, plus an annual cycle that matches seasonality of alongshore wind stress. Jet transport at 25.4°N exhibits a different seasonality to volume transport at 27°N, most likely driven by input from the Northwest Providence Channel. Core speed correlates inversely with Miami sea level fluctuations such that a 40 cm s-1 deceleration is associated with a ~10 cm elevation in sea level, although there is no correlation of sea level to jet meandering or width. Such accurate quantification of the Florida Current's variability is critical to understand and forecast future changes in the climate system of the North Atlantic, as well as local impacts on coastal circulation and sea level variability along south Florida's coastline.
Binary full adder, made of fusion gates, in a subexcitable Belousov-Zhabotinsky system
NASA Astrophysics Data System (ADS)
Adamatzky, Andrew
2015-09-01
In an excitable thin-layer Belousov-Zhabotinsky (BZ) medium a localized perturbation leads to the formation of omnidirectional target or spiral waves of excitation. A subexcitable BZ medium responds to asymmetric local perturbation by producing traveling localized excitation wave-fragments, distant relatives of dissipative solitons. The size and life span of an excitation wave-fragment depend on the illumination level of the medium. Under the right conditions the wave-fragments conserve their shape and velocity vectors for extended time periods. I interpret the wave-fragments as values of Boolean variables. When two or more wave-fragments collide they annihilate or merge into a new wave-fragment. States of the logic variables, represented by the wave-fragments, are changed as a result of collisions between the wave-fragments. Thus, a logical gate is implemented. Several theoretical designs and experimental laboratory implementations of Boolean logic gates have been proposed in the past, but little has been done to cascade the gates into binary arithmetic circuits. I propose a unique design of a binary one-bit full adder based on a fusion gate. A fusion gate is a two-input three-output logical device which calculates the conjunction of the input variables and the conjunction of one input variable with the negation of another input variable. The gate is made of three channels: two channels cross each other at an angle, and a third channel starts at the junction. The channels contain a BZ medium. When two excitation wave-fragments, traveling towards each other along the input channels, collide at the junction, they merge into a single wave-front traveling along the third channel. If there is just one wave-front in an input channel, the front continues its propagation undisturbed. I make a one-bit full adder by cascading two fusion gates. I show how to cascade the adder blocks into a many-bit full adder. 
I evaluate the feasibility of my designs by simulating the evolution of excitation in the gates and adders using the numerical integration of Oregonator equations.
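The fusion-gate logic can be abstracted into Boolean form, under my assumption that colliding fragments merge into the junction channel and that joining two channels acts as OR; cascading two such gates then yields a one-bit full adder, as the design describes.

```python
def fusion_gate(x, y):
    # Colliding fragments merge into the junction channel: x AND y.
    # A lone fragment passes along its own channel undisturbed.
    return x and y, x and not y, (not x) and y

def merge(a, b):
    # Joining two channels into one: a fragment on either survives (OR).
    return a or b

def full_adder(a, b, cin):
    c1, p1, p2 = fusion_gate(a, b)           # first fusion gate
    half_sum = merge(p1, p2)                 # a XOR b
    c2, q1, q2 = fusion_gate(half_sum, cin)  # second fusion gate
    total = merge(q1, q2)                    # (a XOR b) XOR cin
    return total, merge(c1, c2)              # sum bit, carry-out
```

The two pass-through channels of each gate supply the XOR halves, while the junction channels supply the two carry terms whose merged union is the carry-out.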
Assessing the performance of a motion tracking system based on optical joint transform correlation
NASA Astrophysics Data System (ADS)
Elbouz, M.; Alfalou, A.; Brosseau, C.; Ben Haj Yahia, N.; Alam, M. S.
2015-08-01
We present an optimized system specially designed for the tracking and recognition of moving subjects in a confined environment (such as an elderly person remaining at home). In the first step of our study, we use a VanderLugt correlator (VLC) with an adapted pre-processing treatment of the input plane and a post-processing of the correlation plane via a nonlinear function, allowing us to make a robust decision. The second step is based on an optical joint transform correlation (JTC)-based system (NZ-NL-correlation JTC) for achieving improved detection and tracking of moving persons in a confined space. The proposed system has been found to have significantly superior discrimination and robustness capabilities, allowing it to detect an unknown target in an input scene and to determine the target's trajectory when this target is in motion. This system offers robust tracking performance of a moving target in several scenarios, such as rotational variation of input faces. Test results obtained using various real-life video sequences show that the proposed system is particularly suitable for real-time detection and tracking of moving objects.
UWB delay and multiply receiver
Dallum, Gregory E.; Pratt, Garth C.; Haugen, Peter C.; Romero, Carlos E.
2013-09-10
An ultra-wideband (UWB) delay and multiply receiver is formed of a receive antenna; a variable gain attenuator connected to the receive antenna; a signal splitter connected to the variable gain attenuator; a multiplier having one input connected to an undelayed signal from the signal splitter and another input connected to a delayed signal from the signal splitter, the delay between the splitter signals being equal to the spacing between pulses from a transmitter whose pulses are being received by the receive antenna; a peak detection circuit connected to the output of the multiplier and connected to the variable gain attenuator to control the variable gain attenuator to maintain a constant amplitude output from the multiplier; and a digital output circuit connected to the output of the multiplier.
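A toy discrete-time sketch of the delay-and-multiply idea: each transmitted symbol is a pair of pulses spaced D samples apart, so multiplying the received signal by a copy delayed by exactly D aligns the pulse pairs and produces peaks the detector can threshold. Pulse positions, noise level, and threshold are illustrative assumptions, not values from the patent.

```python
import numpy as np

rng = np.random.default_rng(5)

D = 40                                # inter-pulse spacing in samples
sig = np.zeros(400)
for start in (50, 200):               # each symbol = two pulses D apart
    sig[start] = 1.0
    sig[start + D] = 1.0
rx = sig + rng.normal(0.0, 0.05, sig.size)   # received signal plus noise

delayed = np.roll(rx, D)              # the receiver's internal delay line
product = rx * delayed                # multiplier output

# Peak detection: the product is large only where a pulse pair aligns,
# because noise-times-noise and pulse-times-noise terms stay small.
peaks = np.flatnonzero(product > 0.5)
```

In the hardware, the peak detector's output also drives the variable gain attenuator to hold the multiplier output amplitude constant; that feedback loop is omitted here.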
Alpha1 LASSO data bundles Lamont, OK
Gustafson, William Jr; Vogelmann, Andrew; Endo, Satoshi; Toto, Tami; Xiao, Heng; Li, Zhijin; Cheng, Xiaoping; Krishna, Bhargavi (ORCID: 0000-0001-8828-528X)
2016-08-03
A data bundle is a unified package consisting of LASSO LES input and output, observations, evaluation diagnostics, and model skill scores. LES input includes model configuration information and forcing data. LES output includes profile statistics and full domain fields of cloud and environmental variables. Model evaluation data consists of LES output and ARM observations co-registered on the same grid and sampling frequency. Model performance is quantified by skill scores and diagnostics in terms of cloud and environmental variables.
A Multivariate Analysis of the Early Dropout Process
ERIC Educational Resources Information Center
Fiester, Alan R.; Rudestam, Kjell E.
1975-01-01
Principal-component factor analyses were performed on patient input (demographic and pretherapy expectations), therapist input (demographic), and patient perspective therapy process variables that significantly differentiated early dropout from nondropout outpatients at two community mental health centers. (Author)
Progress in Low-Power Digital Microwave Radiometer Technologies
NASA Technical Reports Server (NTRS)
Piepmeier, Jeffrey R.; Kim, Edward J.
2004-01-01
Three component technologies were combined into a digital correlation microwave radiometer. The radiometer comprises a dual-channel X-band superheterodyne receiver, a low-power high-speed cross-correlator (HSCC), three-level ADCs, and a correlated noise source (CNS). The HSCC dissipates 10 mW and operates at a 500 MHz clock speed. The ADCs are implemented using ECL components and dissipate more power than desired; thus, a low-power ADC development is underway. The new ADCs are predicted to dissipate less than 200 mW and operate at 1 GSps with 1.5 GHz of input bandwidth. The CNS provides different input correlation values for calibration of the radiometer. The correlation channel had a null offset of 0.0008. Test results indicate that the correlation channel can be calibrated with 0.09% error in gain.
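The three-level quantization at the heart of such a digital correlator can be illustrated with a short sketch; the threshold, signal model, and correlation value below are assumptions for illustration, not measured radiometer parameters:

```python
import numpy as np

rng = np.random.default_rng(0)

def three_level(x, threshold):
    """Quantize to {-1, 0, +1}, as a three-level ADC would."""
    return np.where(x > threshold, 1, np.where(x < -threshold, -1, 0))

# Two noise channels sharing a common (correlated) component.
n = 200_000
common = rng.standard_normal(n)
a = common + 0.5 * rng.standard_normal(n)
b = common + 0.5 * rng.standard_normal(n)

qa, qb = three_level(a, 0.6), three_level(b, 0.6)
digital_corr = np.mean(qa * qb)          # three-level correlator output
analog_corr = np.corrcoef(a, b)[0, 1]    # ideal (unquantized) correlation
print(round(analog_corr, 2))  # ~0.8
```

The quantized correlator output is a monotonic but biased estimate of the true correlation, which is one reason a correlated noise source with known correlation values is needed to calibrate the instrument.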
Approach for Input Uncertainty Propagation and Robust Design in CFD Using Sensitivity Derivatives
NASA Technical Reports Server (NTRS)
Putko, Michele M.; Taylor, Arthur C., III; Newman, Perry A.; Green, Lawrence L.
2002-01-01
An implementation of the approximate statistical moment method for uncertainty propagation and robust optimization for a quasi 3-D Euler CFD code is presented. Given uncertainties in statistically independent, random, normally distributed input variables, first- and second-order statistical moment procedures are performed to approximate the uncertainty in the CFD output. Efficient calculation of both first- and second-order sensitivity derivatives is required. In order to assess the validity of the approximations, these moments are compared with statistical moments generated through Monte Carlo simulations. The uncertainties in the CFD input variables are also incorporated into a robust optimization procedure. For this optimization, statistical moments involving first-order sensitivity derivatives appear in the objective function and system constraints. Second-order sensitivity derivatives are used in a gradient-based search to successfully execute a robust optimization. The approximate methods used throughout the analyses are found to be valid when considering robustness about input parameter mean values.
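The first-order moment propagation underlying this approach can be illustrated on a toy function standing in for the CFD output; the function and input statistics below are invented for illustration:

```python
import numpy as np

def f(x1, x2):
    # Toy "CFD output" standing in for the quasi 3-D Euler code
    return x1**2 + 3.0 * x1 * x2

mu = np.array([1.0, 2.0])      # input means
sigma = np.array([0.05, 0.1])  # input standard deviations (independent)

# First-order sensitivity derivatives evaluated at the mean
df = np.array([2 * mu[0] + 3 * mu[1], 3 * mu[0]])  # [df/dx1, df/dx2]

# First-order moment approximation: var(f) ~ sum_i (df/dxi)^2 * sigma_i^2
var_fosm = np.sum(df**2 * sigma**2)

# Monte Carlo check, as in the paper's validity assessment
rng = np.random.default_rng(1)
samples = f(rng.normal(mu[0], sigma[0], 100_000),
            rng.normal(mu[1], sigma[1], 100_000))
print(round(var_fosm, 4), round(float(samples.var()), 4))
```

For small input variances the two estimates agree closely; the gap between them is driven by the higher-order terms the moment method truncates.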
Variable ratio regenerative braking device
Hoppie, Lyle O.
1981-12-15
Disclosed is a regenerative braking device (10) for an automotive vehicle. The device includes an energy storage assembly (12) having a plurality of rubber rollers (26, 28) mounted for rotation between an input shaft (36) and an output shaft (42), clutches (38, 46) and brakes (40, 48) associated with each shaft, and a continuously variable transmission (22) connectable to a vehicle drivetrain and to the input and output shafts by the respective clutches. The rubber rollers are torsionally stressed to accumulate energy from the vehicle when the input shaft is clutched to the transmission while the brake on the output shaft is applied, and are torsionally relaxed to deliver energy to the vehicle when the output shaft is clutched to the transmission while the brake on the input shaft is applied. The transmission ratio is varied to control the rate of energy accumulation and delivery for a given rotational speed of the vehicle drivetrain.
NASA Technical Reports Server (NTRS)
Chen, B. M.; Saber, A.
1993-01-01
A simple and noniterative procedure for the computation of the exact value of the infimum in the singular H(infinity)-optimization problem is presented, as a continuation of our earlier work. Our problem formulation is general: we place no restrictions on the finite and infinite zero structures of the system, or on the direct feedthrough terms between the control input and the controlled output variables and between the disturbance input and the measurement output variables. Our method is applicable to a class of singular H(infinity)-optimization problems for which the transfer functions from the control input to the controlled output and from the disturbance input to the measurement output satisfy certain geometric conditions. In particular, the paper extends the result of earlier work by allowing these two transfer functions to have invariant zeros on the j(omega) axis.
NASA Astrophysics Data System (ADS)
Yadav, D.; Upadhyay, H. C.
1992-07-01
Vehicles obtain track-induced input through the wheels, which commonly number more than one. The analyses available for vehicle response during a variable-velocity run on a non-homogeneously profiled flexible track, supported by a compliant inertial foundation, are limited to a linear heave model with a single ground input. This analysis is extended here to two-point-input models with heave-pitch and heave-roll degrees of freedom. Closed-form expressions have been developed for the system response statistics. Results are presented for a railway coach and track/foundation problem, and the performances of the heave, heave-pitch and heave-roll models have been compared. The three models agree in describing the track response. However, the vehicle sprung-mass behaviour is predicted differently by these models, indicating the strong effect of coupling on the vehicle vibration.
NASA Astrophysics Data System (ADS)
Gutiérrez, J. M.; Natxiondo, A.; Nieves, J.; Zabala, A.; Sertucha, J.
2017-04-01
The study of shrinkage incidence variations in nodular cast irons is an important aspect of manufacturing processes. These variations change the feeding requirements on castings, and the optimization of risers' size is consequently affected when avoiding the formation of shrinkage defects. The effect of a number of processing variables on the shrinkage size has been studied using a layout specifically designed for this purpose. The β parameter has been defined as the relative volume reduction from the pouring temperature down to room temperature. It is observed that shrinkage size and β decrease as effective carbon content increases and when inoculant is added in the pouring stream. A similar effect is found when the parameters selected from cooling curves show high graphite nucleation during solidification of cast irons for a given inoculation level. Pearson statistical analysis has been used to analyze the correlations among all involved variables, and a group of Bayesian networks has subsequently been built so as to obtain the most accurate model for predicting β as a function of the input processing variables. The developed models can be used in foundry plants to study shrinkage incidence variations in the manufacturing process and to optimize the related costs.
Broy, Susan B; Tanner, S Bobo
2011-01-01
Rheumatoid arthritis is the only secondary cause of osteoporosis that is considered independent of bone density in the FRAX(®) algorithm. Although input for rheumatoid arthritis in FRAX(®) is a dichotomous variable, intuitively, one would expect that more severe or active disease would be associated with a greater risk for fracture. We reviewed the literature to determine if specific disease parameters or medication use could be used to better characterize fracture risk in individuals with rheumatoid arthritis. Although many studies document a correlation between various parameters of disease activity or severity and decreased bone density, fewer have associated these variables with fracture risk. We reviewed these studies in detail and concluded that disability measures such as the HAQ (Health Assessment Questionnaire) and functional class do correlate with clinical fractures but not with morphometric vertebral fractures. One large study found a strong correlation between duration of disease and fracture risk, but additional studies are needed to confirm this. There was little evidence correlating other measures of disease, such as the DAS (disease activity score), VAS (visual analogue scale), acute phase reactants, or use of non-glucocorticoid medications, with increased fracture risk. We concluded that FRAX(®) calculations may underestimate fracture probability in patients with impaired functional status from rheumatoid arthritis, but that this cannot yet be quantified. Currently, other disease measures cannot be used for fracture prediction; however, only a few, mostly small, studies have addressed them, and further research is needed. Additional questions for future research are suggested. Copyright © 2011 The International Society for Clinical Densitometry. Published by Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Santosa, H.; Hobara, Y.
2017-01-01
The electric field amplitude of a very low frequency (VLF) transmitter in Hawaii (NPM) has been continuously recorded at Chofu (CHF), Tokyo, Japan. The VLF amplitude variability indicates lower ionospheric perturbation in the D region (60-90 km altitude range) around the NPM-CHF propagation path. We carried out prediction of the daily nighttime mean VLF amplitude by using a Nonlinear Autoregressive with Exogenous Input Neural Network (NARX NN). The NARX NN model, built from daily input variables of various physical parameters such as stratospheric temperature, total column ozone, cosmic rays, and the Dst and Kp indices, possesses good accuracy during model building. The fitted model was constructed within the training period from 1 January 2011 to 4 February 2013 by using three algorithms, namely, Bayesian Regularization Neural Network (BRANN), Levenberg-Marquardt Neural Network (LMANN), and Scaled Conjugate Gradient (SCG). The LMANN has the largest Pearson correlation coefficient (r) of 0.94 and the smallest root-mean-square error (RMSE) of 1.19 dB. The models constructed using LMANN were applied to predict the VLF amplitude from 5 February 2013 to 31 December 2013. As a result, the one-step (1 day) ahead predicted nighttime VLF amplitude has an r of 0.93 and an RMSE of 2.25 dB. We conclude that the model built according to the proposed methodology provides good predictions of the electric field amplitude of VLF waves for the NPM-CHF (midlatitude) propagation path.
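The structure of a NARX-type prediction, regressing the output on its own lags plus exogenous drivers, can be sketched with a linear ARX stand-in for the neural network; the synthetic data and coefficients below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic stand-ins: a daily amplitude series (y) driven by two
# exogenous series (think temperature, Kp index) plus its own lag.
n = 400
exog = rng.standard_normal((n, 2))
y = np.zeros(n)
for t in range(1, n):
    y[t] = (0.7 * y[t - 1] + 0.5 * exog[t, 0] - 0.3 * exog[t, 1]
            + 0.1 * rng.standard_normal())

# ARX design matrix: lagged output plus current exogenous inputs
X = np.column_stack([y[:-1], exog[1:]])
coef, *_ = np.linalg.lstsq(X, y[1:], rcond=None)

# One-step-ahead predictions and Pearson r, as reported in the abstract
pred = X @ coef
r = np.corrcoef(pred, y[1:])[0, 1]
print(coef.round(2))  # ~ [0.7, 0.5, -0.3]
```

A NARX network replaces the linear map above with a learned nonlinear one, but the input layout (lagged outputs plus exogenous variables) and the evaluation by Pearson r and RMSE are the same.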
Visual Predictive Check in Models with Time-Varying Input Function.
Largajolli, Anna; Bertoldo, Alessandra; Campioni, Marco; Cobelli, Claudio
2015-11-01
Nonlinear mixed-effects models are commonly used modeling techniques in pharmaceutical research, as they enable characterization of individual profiles together with the population to which the individuals belong. To ensure their correct use, it is fundamental to provide powerful diagnostic tools able to evaluate the predictive performance of the models. The visual predictive check (VPC) is a commonly used tool that helps the user check, by visual inspection, whether the model is able to reproduce the variability and the main trend of the observed data. However, simulation from the model is not always trivial, for example when using models with a time-varying input function (IF). In this class of models, there is a potential mismatch between each set of simulated parameters and the associated individual IF, which can cause an incorrect profile simulation. We introduce a refinement of the VPC that takes into consideration a correlation term (the Mahalanobis or normalized Euclidean distance) to help associate the correct IF with each individual set of simulated parameters. We investigate and compare its performance with the standard VPC in models of the glucose and insulin system applied to real and simulated data, and in a simulated pharmacokinetic/pharmacodynamic (PK/PD) example. The newly proposed VPC performs better than the standard VPC, especially for models with large variability in the IF, where the probability of simulating incorrect profiles is higher.
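The matching step can be sketched as a nearest-neighbor assignment in Mahalanobis distance; the two-dimensional IF summaries below (peak value and time-to-peak) are hypothetical stand-ins for the quantities actually used:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical summaries: each row describes one individual's input
# function (e.g. peak value, time-to-peak); each simulated parameter
# set carries a similar summary drawn from the same population.
if_summary = rng.normal([10.0, 30.0], [2.0, 5.0], size=(20, 2))
sim_summary = if_summary + rng.normal(0, 0.3, size=(20, 2))

# Mahalanobis distance uses the inverse covariance of the IF summaries,
# so differently scaled features contribute comparably.
vi = np.linalg.inv(np.cov(if_summary, rowvar=False))

def mahalanobis(u, v, vi):
    d = u - v
    return float(np.sqrt(d @ vi @ d))

# Assign each simulated set to the nearest individual IF
assignment = [min(range(len(if_summary)),
                  key=lambda j: mahalanobis(s, if_summary[j], vi))
              for s in sim_summary]
matched = sum(a == i for i, a in enumerate(assignment))
print(matched)
```

With small perturbations most simulated sets map back to their own IF; the refined VPC uses exactly this kind of distance to avoid pairing a simulated parameter vector with an implausible input function.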
What Makes the Muscle Twitch: Motor System Connectivity and TMS-Induced Activity.
Volz, Lukas J; Hamada, Masashi; Rothwell, John C; Grefkes, Christian
2015-09-01
Transcranial magnetic stimulation (TMS) of the primary motor cortex (M1) evokes several volleys of corticospinal activity. While the earliest wave (D-wave) originates from axonal activation of cortico-spinal neurons (CSN), later waves (I-waves) result from activation of mono- and polysynaptic inputs to CSNs. Different coil orientations preferentially stimulate cortical elements evoking different outputs: latero-medial-induced current (LM) elicits D-waves and short-latency electromyographic responses (MEPs); posterior-anterior current (PA) evokes early I-waves. Anterior-posterior current (AP) is more variable and tends to recruit later I-waves, featuring longer onset latencies compared with PA-TMS. We tested whether the variability in response to AP-TMS was related to functional connectivity of the stimulated M1 in 20 right-handed healthy subjects who underwent functional magnetic resonance imaging while performing an isometric contraction task. The MEP-latency after AP-TMS (relative to LM-TMS) was strongly correlated with functional connectivity between the stimulated M1 and a network involving cortical premotor areas. This indicates that stronger premotor-M1 connectivity increases the probability that AP-TMS recruits shorter latency input to CSNs. In conclusion, our data strongly support the hypothesis that TMS of M1 activates distinct neuronal pathways depending on the orientation of the stimulation coil. Particularly, AP currents seem to recruit short latency cortico-cortical projections from premotor areas. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
Users manual for updated computer code for axial-flow compressor conceptual design
NASA Technical Reports Server (NTRS)
Glassman, Arthur J.
1992-01-01
An existing computer code that determines the flow path for an axial-flow compressor either for a given number of stages or for a given overall pressure ratio was modified for use in air-breathing engine conceptual design studies. This code uses a rapid approximate design methodology that is based on isentropic simple radial equilibrium. Calculations are performed at constant-span-fraction locations from tip to hub. Energy addition per stage is controlled by specifying the maximum allowable values for several aerodynamic design parameters. New modeling was introduced to the code to overcome perceived limitations. Specific changes included variable rather than constant tip radius, flow path inclination added to the continuity equation, input of mass flow rate directly rather than indirectly as inlet axial velocity, solution for the exact value of overall pressure ratio rather than for any value that met or exceeded it, and internal computation of efficiency rather than the use of input values. The modified code was shown to be capable of computing efficiencies that are compatible with those of five multistage compressors and one fan that were tested experimentally. This report serves as a users manual for the revised code, Compressor Spanline Analysis (CSPAN). The modeling modifications, including two internal loss correlations, are presented. Program input and output are described. A sample case for a multistage compressor is included.
Eckley, Chris S; Branfireun, Brian
2009-08-01
This research focuses on mercury (Hg) mobilization in stormwater runoff from an urban roadway. The objectives were to determine: how the transport of surface-derived Hg changes during an event hydrograph; the influence of antecedent dry days on the runoff Hg load; the relationship between total suspended sediments (TSS) and Hg transport; and the fate of new Hg input in rain and its relative importance to the runoff Hg load. Simulated rain events were used to control variables to elucidate transport processes, and a Hg stable isotope was used to trace the fate of Hg inputs in rain. The results showed that Hg concentrations were highest at the beginning of the hydrograph and were predominantly particulate bound (HgP). On average, almost 50% of the total Hg load was transported during the first minutes of runoff, underscoring the importance of the initial runoff on load calculations. Hg accumulated on the road surface during dry periods, resulting in the Hg runoff load increasing with antecedent dry days. The Hg concentrations in runoff were significantly correlated with TSS concentrations (mean r(2)=0.94+/-0.09). The results from the isotope experiments showed that the new Hg inputs quickly become associated with the surface particles and that the majority of Hg in runoff is derived from non-event surface-derived sources.
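The first-flush load fraction reported above can be computed from paired concentration and flow time series; the synthetic hydrograph below is illustrative, not the study's data:

```python
import numpy as np

# Synthetic 30-minute event: concentration (ng/L) spikes early
# (first flush), flow (L/min) rises then recedes; values invented.
t = np.arange(30)                       # minutes
conc = 80.0 * np.exp(-t / 5.0) + 5.0    # first-flush decay
flow = 20.0 * np.exp(-((t - 8) / 6.0) ** 2) + 2.0

load = conc * flow                      # instantaneous load, ng/min
cum = np.cumsum(load) / np.sum(load)    # cumulative load fraction

first5 = cum[4]                         # fraction delivered in first 5 min
print(round(float(first5), 2))
```

Even though flow peaks later, the early concentration spike concentrates a large share of the total load in the opening minutes, which is why truncating sampling to the rising limb biases event-load estimates.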
Linking terrestrial P inputs to riverine export across the United ...
Human beings have greatly accelerated phosphorus (P) flows from land to aquatic ecosystems, often resulting in eutrophication, harmful algal blooms, and hypoxia. Although a variety of statistical and mechanistic models have been used to explore the relationship between terrestrial nutrient management and losses to waterways, our understanding of how natural and anthropogenic landscape characteristics mediate losses of P from watersheds lags behind that of nitrogen. The need for higher resolution data is often identified as an important barrier that limits our capacity to predict P loading. In order to address this gap, we constructed spatially explicit datasets of terrestrial P inputs and outputs (fertilizer, confined manure, crop harvest and sewage) across the continental U.S. for 2012. We then examined how these P sources, along with climate, hydrology, and land use, influenced P exports from 72 watersheds as total P (TP) and dissolved inorganic P (DIP) concentrations and yields, and TP fractional export. TP and DIP concentrations and TP yields were best correlated with runoff, but using simple linear regression, we were not able to explain more than 56% of the variance in any of the water quality variables (TP fractional export vs P manure inputs). The lack of clear and strong relationships between contemporary, high-resolution, anthropogenic, terrestrial P and riverine P export at the national scale highlights the fact that a complex suite of factors mediates riverine P export.
Electrical Advantages of Dendritic Spines
Gulledge, Allan T.; Carnevale, Nicholas T.; Stuart, Greg J.
2012-01-01
Many neurons receive excitatory glutamatergic input almost exclusively onto dendritic spines. In the absence of spines, the amplitudes and kinetics of excitatory postsynaptic potentials (EPSPs) at the site of synaptic input are highly variable and depend on dendritic location. We hypothesized that dendritic spines standardize the local geometry at the site of synaptic input, thereby reducing location-dependent variability of local EPSP properties. We tested this hypothesis using computational models of simplified and morphologically realistic spiny neurons that allow direct comparison of EPSPs generated on spine heads with EPSPs generated on dendritic shafts at the same dendritic locations. In all morphologies tested, spines greatly reduced location-dependent variability of local EPSP amplitude and kinetics, while having minimal impact on EPSPs measured at the soma. Spine-dependent standardization of local EPSP properties persisted across a range of physiologically relevant spine neck resistances, and in models with variable neck resistances. By reducing the variability of local EPSPs, spines standardized synaptic activation of NMDA receptors and voltage-gated calcium channels. Furthermore, spines enhanced activation of NMDA receptors and facilitated the generation of NMDA spikes and axonal action potentials in response to synaptic input. Finally, we show that dynamic regulation of spine neck geometry can preserve local EPSP properties following plasticity-driven changes in synaptic strength, but is inefficient in modifying the amplitude of EPSPs in other cellular compartments. These observations suggest that one function of dendritic spines is to standardize local EPSP properties throughout the dendritic tree, thereby allowing neurons to use similar voltage-sensitive postsynaptic mechanisms at all dendritic locations. PMID:22532875
Nonlinear Dynamic Models in Advanced Life Support
NASA Technical Reports Server (NTRS)
Jones, Harry
2002-01-01
To facilitate analysis, ALS systems are often assumed to be linear and time invariant, but they usually have important nonlinear and dynamic aspects. Nonlinear dynamic behavior can be caused by time varying inputs, changes in system parameters, nonlinear system functions, closed loop feedback delays, and limits on buffer storage or processing rates. Dynamic models are usually cataloged according to the number of state variables. The simplest dynamic models are linear, using only integration, multiplication, addition, and subtraction of the state variables. A general linear model with only two state variables can produce all the possible dynamic behavior of linear systems with many state variables, including stability, oscillation, or exponential growth and decay. Linear systems can be described using mathematical analysis. Nonlinear dynamics can be fully explored only by computer simulations of models. Unexpected behavior is produced by simple models having only two or three state variables with simple mathematical relations between them. Closed loop feedback delays are a major source of system instability. Exceeding limits on buffer storage or processing rates forces systems to change operating mode. Different equilibrium points may be reached from different initial conditions. Instead of one stable equilibrium point, the system may have several equilibrium points, oscillate at different frequencies, or even behave chaotically, depending on the system inputs and initial conditions. The frequency spectrum of an output oscillation may contain harmonics and the sums and differences of input frequencies, but it may also contain a stable limit cycle oscillation not related to input frequencies. We must investigate the nonlinear dynamic aspects of advanced life support systems to understand and counter undesirable behavior.
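The claim that a two-state linear model already exhibits the basic behaviors (decay, oscillation, growth) can be checked directly; this is a minimal sketch with assumed coefficient matrices:

```python
import numpy as np

def simulate(A, x0, steps=200, dt=0.05):
    """Forward-Euler integration of dx/dt = A x for a two-state model."""
    xs = [np.asarray(x0, dtype=float)]
    for _ in range(steps):
        xs.append(xs[-1] + dt * A @ xs[-1])
    return np.array(xs)

# Damped oscillator: complex eigenvalues with negative real part
A_osc = np.array([[0.0, 1.0], [-4.0, -0.4]])
# Pure exponential decay: real negative eigenvalues
A_decay = np.array([[-1.0, 0.0], [0.0, -2.0]])

osc = simulate(A_osc, [1.0, 0.0])
decay = simulate(A_decay, [1.0, 1.0])

print(np.linalg.eigvals(A_osc))    # complex pair -> oscillation
print(np.linalg.eigvals(A_decay))  # real negative -> monotonic decay
```

The eigenvalues of the two-state matrix determine the regime: complex pairs give oscillation, real negative values give decay, and any eigenvalue with positive real part gives exponential growth. Nonlinear behaviors such as limit cycles and chaos, as the abstract notes, require simulation rather than eigenvalue analysis.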
Predicting language outcomes for children learning AAC: Child and environmental factors
Brady, Nancy C.; Thiemann-Bourque, Kathy; Fleming, Kandace; Matthews, Kris
2014-01-01
Purpose To investigate a model of language development for nonverbal preschool age children learning to communicate with AAC. Method Ninety-three preschool children with intellectual disabilities were assessed at Time 1, and 82 of these children were assessed one year later at Time 2. The outcome variable was the number of different words the children produced (with speech, sign or SGD). Children’s intrinsic predictor for language was modeled as a latent variable consisting of cognitive development, comprehension, play, and nonverbal communication complexity. Adult input at school and home, and amount of AAC instruction were proposed mediators of vocabulary acquisition. Results A confirmatory factor analysis revealed that measures converged as a coherent construct and an SEM model indicated that the intrinsic child predictor construct predicted different words children produced. The amount of input received at home but not at school was a significant mediator. Conclusions Our hypothesized model accurately reflected a latent construct of Intrinsic Symbolic Factor (ISF). Children who evidenced higher initial levels of ISF and more adult input at home produced more words one year later. Findings support the need to assess multiple child variables, and suggest interventions directed to the indicators of ISF and input. PMID:23785187
Zuhtuogullari, Kursat; Allahverdi, Novruz; Arikan, Nihat
2013-01-01
Systems with high-dimensional input spaces require long processing times and large amounts of memory. Most attribute selection algorithms suffer from limits on input dimensionality and from information storage problems. These problems are eliminated by the developed feature reduction software, which uses a new modified selection mechanism that adds solution candidates from the middle region. The hybrid system software is constructed for reducing the input attributes of systems with a large number of input variables. The designed software also supports the roulette wheel selection mechanism. Linear order crossover is used as the recombination operator. In genetic algorithm based soft computing methods, locking into local solutions is also a problem, which the developed software eliminates. Faster and more effective results are obtained in the test procedures. Twelve input variables of the urological system have been reduced to reducts (reduced input attribute sets) with seven, six, and five elements. The obtained results show that the developed software with modified selection has advantages in memory allocation, execution time, classification accuracy, sensitivity, and specificity when compared with other reduction algorithms on the urological test data.
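The roulette wheel (fitness-proportionate) selection mechanism the software supports can be sketched as follows; the population and fitness values are invented for illustration:

```python
import random

def roulette_wheel_select(population, fitnesses, rng=random):
    """Pick an individual with probability proportional to its fitness."""
    total = sum(fitnesses)
    pick = rng.uniform(0, total)
    running = 0.0
    for individual, fit in zip(population, fitnesses):
        running += fit
        if running >= pick:
            return individual
    return population[-1]  # guard against floating-point round-off

random.seed(0)
pop = ["A", "B", "C"]
fits = [1.0, 1.0, 8.0]   # "C" should be chosen about 80% of the time
draws = [roulette_wheel_select(pop, fits) for _ in range(10_000)]
print(round(draws.count("C") / len(draws), 2))  # ~ 0.8
```

Because selection pressure follows raw fitness, highly fit individuals can dominate early and drive the search into a local optimum; mechanisms like the middle-region candidate injection described above are one way to restore diversity.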
To twist, roll, stroke or poke? A study of input devices for menu navigation in the cockpit.
Stanton, Neville A; Harvey, Catherine; Plant, Katherine L; Bolton, Luke
2013-01-01
Modern interfaces within the aircraft cockpit integrate many flight management system (FMS) functions into a single system. The success of a user's interaction with an interface depends upon the optimisation between the input device, tasks and environment within which the system is used. In this study, four input devices were evaluated using a range of Human Factors methods, in order to assess aspects of usability including task interaction times, error rates, workload, subjective usability and physical discomfort. The performance of the four input devices was compared using a holistic approach and the findings showed that no single input device produced consistently high performance scores across all of the variables evaluated. The touch screen produced the highest number of 'best' scores; however, discomfort ratings for this device were high, suggesting that it is not an ideal solution as both physical and cognitive aspects of performance must be accounted for in design. This study evaluated four input devices for control of a screen-based flight management system. A holistic approach was used to evaluate both cognitive and physical performance. Performance varied across the dependent variables and between the devices; however, the touch screen produced the largest number of 'best' scores.
Interacting with notebook input devices: an analysis of motor performance and users' expertise.
Sutter, Christine; Ziefle, Martina
2005-01-01
In the present study the usability of two different types of notebook input devices was examined. The independent variables were input device (touchpad vs. mini-joystick) and user expertise (expert vs. novice state). There were 30 participants, of whom 15 were touchpad experts and the other 15 were mini-joystick experts. The experimental tasks were a point-click task (Experiment 1) and a point-drag-drop task (Experiment 2). Dependent variables were the time and accuracy of cursor control. To assess carryover effects, we had the participants complete both experiments, using not only the input device for which they were experts but also the device for which they were novices. Results showed the touchpad performance to be clearly superior to mini-joystick performance. Overall, experts showed better performance than did novices. The significant interaction of input device and expertise showed that the use of an unknown device is difficult, but only for touchpad experts, who were remarkably slower and less accurate when using a mini-joystick. Actual and potential applications of this research include an evaluation of current notebook input devices. The outcomes allow ergonomic guidelines to be derived for optimized usage and design of the mini-joystick and touchpad devices.
Schulz, Marcus; Neumann, Daniel; Fleet, David M; Matthies, Michael
2013-12-01
During the last decades, marine pollution with anthropogenic litter has become a worldwide major environmental concern. Standardized monitoring of litter since 2001 on 78 beaches selected within the framework of the Convention for the Protection of the Marine Environment of the North-East Atlantic (OSPAR) has been used to identify temporal trends of marine litter. Based on statistical analyses of this dataset a two-part multi-criteria evaluation system for beach litter pollution of the North-East Atlantic and the North Sea is proposed. Canonical correlation analyses, linear regression analyses, and non-parametric analyses of variance were used to identify different temporal trends. A classification of beaches was derived from cluster analyses and served to define different states of beach quality according to abundances of 17 input variables. The evaluation system is easily applicable and relies on the above-mentioned classification and on significant temporal trends implied by significant rank correlations. Copyright © 2013 Elsevier Ltd. All rights reserved.
Weber, Juliane; Zachow, Christopher; Witthaut, Dirk
2018-03-01
Wind power generation exhibits a strong temporal variability, which is crucial for system integration in highly renewable power systems. Different methods exist to simulate wind power generation but they often cannot represent the crucial temporal fluctuations properly. We apply the concept of additive binary Markov chains to model a wind generation time series consisting of two states: periods of high and low wind generation. The only input parameter for this model is the empirical autocorrelation function. The two-state model is readily extended to stochastically reproduce the actual generation per period. To evaluate the additive binary Markov chain method, we introduce a coarse model of the electric power system to derive backup and storage needs. We find that the temporal correlations of wind power generation, the backup need as a function of the storage capacity, and the resting time distribution of high and low wind events for different shares of wind generation can be reconstructed.
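An additive binary Markov chain of the kind described, where the probability of the high-wind state is the mean rate plus a weighted sum of past deviations, can be sketched as follows; the memory kernel and mean rate are assumptions, not values fitted to wind data:

```python
import numpy as np

rng = np.random.default_rng(4)

def additive_binary_markov(p_mean, memory, n, rng):
    """Additive binary Markov chain: P(high-wind state) is the mean
    rate plus a weighted sum of past deviations from that rate."""
    order = len(memory)
    out = list((rng.random(order) < p_mean).astype(float))
    for _ in range(n):
        recent = np.array(out[-order:])
        p = p_mean + memory @ (recent - p_mean)
        out.append(float(rng.random() < np.clip(p, 0.0, 1.0)))
    return np.array(out[order:])

# Assumed exponentially decaying memory kernel (most recent lag last);
# in the paper this kernel is derived from the empirical autocorrelation.
memory = 0.4 ** np.arange(5, 0, -1)
series = additive_binary_markov(0.3, memory, 50_000, rng)

# Positive lag-1 autocorrelation: high/low wind periods persist.
dev = series - series.mean()
lag1 = np.mean(dev[1:] * dev[:-1]) / np.mean(dev**2)
print(round(float(series.mean()), 2), round(float(lag1), 2))
```

The appeal of the method is that the memory kernel is the only fitted object, so the simulated two-state series reproduces the empirical autocorrelation of high/low wind periods by construction.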
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Peng; Barajas-Solano, David A.; Constantinescu, Emil
Wind and solar power generators are commonly described by a system of stochastic ordinary differential equations (SODEs) in which random input parameters represent uncertainty in wind and solar energy. Existing methods for SODEs are mostly limited to delta-correlated random parameters (white noise). Here we use the probability density function (PDF) method to derive a closed-form deterministic partial differential equation (PDE) for the joint probability density function of the SODEs describing a power generator with time-correlated power input. The resulting PDE is solved numerically, and good agreement with Monte Carlo simulations confirms the accuracy of the PDF method.
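The PDF method's closed-form PDE is not reproduced here, but the Monte Carlo reference it is validated against can be sketched with a toy model. All parameter values below are assumptions for illustration: a linear generator equation driven by an Ornstein-Uhlenbeck process standing in for time-correlated power input.

```python
import numpy as np

def simulate_generator(n_paths=2000, n_steps=4000, dt=0.01,
                       damping=1.0, tau=1.0, sigma=0.5, seed=0):
    """Monte Carlo paths of a toy generator d(omega) = (P - damping*omega) dt
    whose power input P(t) is a time-correlated (Ornstein-Uhlenbeck) process,
    i.e. colored rather than white noise."""
    rng = np.random.default_rng(seed)
    omega = np.zeros(n_paths)   # frequency deviation of each path
    P = np.zeros(n_paths)       # correlated power input of each path
    for _ in range(n_steps):
        P += -P / tau * dt + sigma * rng.normal(0.0, np.sqrt(dt), n_paths)
        omega += (P - damping * omega) * dt
    return omega

omega = simulate_generator()
```

A histogram of `omega` estimates the stationary joint density that the PDF method instead obtains by solving a deterministic PDE, at far lower cost for small state dimensions.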
NASA Astrophysics Data System (ADS)
Milovančević, Miloš; Nikolić, Vlastimir; Anđelković, Boban
2017-01-01
Vibration-based structural health monitoring is widely recognized as an attractive strategy for early damage detection in civil structures. Vibration monitoring and prediction are important for any system, since they can avert many unpredictable behaviors of the system. If vibration monitoring is properly managed, it can ensure economical and safe operation. Potential for further improvement of vibration monitoring lies in the improvement of current control strategies. One of the options is the introduction of model predictive control. Multistep-ahead predictive models of vibration are a starting point for creating a successful model predictive strategy. For the purpose of this article, predictive models are created for vibration monitoring of planetary power transmissions in pellet mills. The models were developed using a novel method based on ANFIS (adaptive neuro-fuzzy inference system). The aim of this study is to investigate the potential of ANFIS for selecting the most relevant variables for predictive models of vibration monitoring of pellet mill power transmissions. The vibration data are collected by PIC (Programmable Interface Controller) microcontrollers. The goal of the predictive vibration monitoring of planetary power transmissions in pellet mills is to indicate deterioration in the vibration of the power transmissions before the actual failure occurs. The ANFIS process for variable selection was implemented in order to detect the predominant variables affecting the prediction of vibration monitoring. It was also used to select the minimal input subset of variables from the initial set of input variables - current and lagged variables (up to 11 steps) of vibration. The obtained results could be used to simplify predictive methods so as to avoid multiple input variables. Models with fewer inputs were preferable because they reduce overfitting between training and testing data.
While the obtained results are promising, further work is required in order to get results that could be directly applied in practice.
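ANFIS itself is not reproduced here. As a rough stand-in for its variable-selection step, the sketch below builds the lagged inputs (up to 11 steps, as in the abstract) from a synthetic signal and ranks them by absolute correlation with the next value; a real ANFIS selection would instead train candidate fuzzy models per input subset.

```python
import numpy as np

def lagged_matrix(x, max_lag):
    """Column k holds x lagged by k+1 steps; y is the next value to predict."""
    n = len(x) - max_lag
    X = np.column_stack([x[max_lag - 1 - k : max_lag - 1 - k + n]
                         for k in range(max_lag)])
    return X, x[max_lag:]

def rank_lags(x, max_lag=11):
    """Rank candidate lags by absolute correlation with the target."""
    X, y = lagged_matrix(x, max_lag)
    scores = [abs(np.corrcoef(X[:, k], y)[0, 1]) for k in range(max_lag)]
    return np.argsort(scores)[::-1] + 1      # lag numbers, most relevant first

rng = np.random.default_rng(0)
x = np.zeros(3000)
for t in range(1, 3000):                     # synthetic AR(1) "vibration" signal
    x[t] = 0.8 * x[t - 1] + rng.normal()
best = rank_lags(x)
```

For the AR(1) test signal the most recent lag dominates, so a model restricted to the top-ranked lags keeps most of the predictive power with far fewer inputs.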
Stottlemyer, R.; Toczydlowski, D.
1999-01-01
The Upper Great Lakes receive large amounts of precipitation-NH4+ and moderate NO3- inputs. Increased atmospheric inorganic N input has led to concern about ecosystem capacity to utilize excess N. This paper summarizes a 5-yr study of seasonal N content and flux in precipitation, snowpack, forest floor, and streamwater in order to assess the source of inorganic N outputs in streamflow from a small boreal watershed. Average precipitation N input was 3 kg ha-1 yr-1. The peak snowpack N content averaged 0.55 kg ha-1. The forest floor inorganic N pool was ≈2 kg ha-1, eight times larger than monthly precipitation N input. The inorganic N pool size peaked in spring and early summer. Ninety percent of the forest floor inorganic N pool was made up of NH4+-N. Forest floor inorganic N pools generally increased with temperature. Net N mineralization was 15 kg ha-1 yr-1, and monthly rates peaked in early summer. During winter, the mean monthly net N mineralization rate was twice the peak snowpack N content. Streamwater NO3- concentration peaked in winter, and inorganic N output peaked in late fall. Beneath the dominant boreal forest species, net N mineralization rates were positively correlated (P < 0.05) with streamwater NO3- concentrations. Forest floor NO3- pools beneath alder [Alnus rugosa (Du Roi) Spreng] were positively correlated (P < 0.01) to streamwater NO3- output. At the watershed mouth, streamwater NO3- concentrations were positively correlated (P < 0.05) with precipitation NO3- input and precipitation amount.
The relatively small snowpack N content and seasonal precipitation N input compared to forest floor inorganic N pools and net N mineralization rates, the strong ecosystem retention of precipitation N inputs, and the seasonal streamwater NO3- concentration and output pattern all indicated that little streamwater NO3- came directly from precipitation or snowmelt.
Arterial input function derived from pairwise correlations between PET-image voxels.
Schain, Martin; Benjaminsson, Simon; Varnäs, Katarina; Forsberg, Anton; Halldin, Christer; Lansner, Anders; Farde, Lars; Varrone, Andrea
2013-07-01
A metabolite-corrected arterial input function is a prerequisite for quantification of positron emission tomography (PET) data by compartmental analysis. This quantitative approach is also necessary for radioligands without suitable reference regions in the brain. The measurement is laborious and requires cannulation of a peripheral artery, a procedure that can be associated with patient discomfort and potential adverse events. A non-invasive procedure for obtaining the arterial input function is thus preferable. In this study, we present a novel method to obtain image-derived input functions (IDIFs). The method is based on calculation of the Pearson correlation coefficient between the time-activity curves of voxel pairs in the PET image to localize voxels displaying blood-like behavior. The method was evaluated using data obtained in human studies with the radioligands [(11)C]flumazenil and [(11)C]AZ10419369, and its performance was compared with three previously published methods. The distribution volumes (VT) obtained using IDIFs were compared with those obtained using traditional arterial measurements. Overall, the agreement in VT was good (∼3% difference) for input functions obtained using the pairwise correlation approach. This approach performed similarly to or even better than the other methods, and could be considered in applied clinical studies. Applications to other radioligands are needed for further verification.
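The core idea can be sketched on synthetic data. This is a simplification of the published method: real blood-like voxels are identified from the full pairwise-correlation structure, whereas the toy version below seeds on the earliest-peaking curve (blood activity peaks before tissue activity) and averages everything highly correlated with it. All curve shapes and noise levels are invented.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(20, dtype=float)

# Synthetic time-activity curves: blood-like voxels peak early, tissue late.
blood = np.exp(-0.5 * (t - 2.0) ** 2 / 2.0)
tissue = np.exp(-0.5 * (t - 12.0) ** 2 / 9.0)
tacs = np.vstack([blood + 0.02 * rng.normal(size=(30, 20)),
                  tissue + 0.02 * rng.normal(size=(170, 20))])

def image_derived_input(tacs, threshold=0.9):
    """Average the TACs of voxels highly correlated with an early-peaking seed."""
    r = np.corrcoef(tacs)                            # all pairwise Pearson r
    seed = int(np.argmin(np.argmax(tacs, axis=1)))   # earliest peak ~ blood
    members = np.where(r[seed] >= threshold)[0]
    return tacs[members].mean(axis=0), members

idif, members = image_derived_input(tacs)
```

Averaging the mutually correlated cluster suppresses voxel noise, which is why an IDIF built this way can approach the quality of an arterial measurement.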
Kember, G C; Fenton, G A; Armour, J A; Kalyaniwalla, N
2001-04-01
Regional cardiac control depends upon feedback of the status of the heart from afferent neurons responding to chemical and mechanical stimuli as transduced by an array of sensory neurites. Emerging experimental evidence shows that neural control in the heart may be partially exerted using subthreshold inputs that are amplified by noisy mechanical fluctuations. This amplification is known as aperiodic stochastic resonance (ASR). Neural control in the noisy, subthreshold regime is difficult to see since there is a near absence of any correlation between input and the output, the latter being the average firing (spiking) rate of the neuron. This lack of correlation is unresolved by traditional energy models of ASR since these models are unsuitable for identifying "cause and effect" between such inputs and outputs. In this paper, the "competition between averages" model is used to determine what portion of a noisy, subthreshold input is responsible, on average, for the output of sensory neurons as represented by the Fitzhugh-Nagumo equations. A physiologically relevant conclusion of this analysis is that a nearly constant amount of input is responsible for a spike, on average, and this amount is approximately independent of the firing rate. Hence, correlation measures are generally reduced as the firing rate is lowered even though neural control under this model is actually unaffected.
NASA Technical Reports Server (NTRS)
Orren, L. H.; Ziman, G. M.; Jones, S. C.
1981-01-01
A financial accounting model that incorporates physical and institutional uncertainties was developed for geothermal projects. Among the uncertainties it can handle are well depth, flow rate, fluid temperature, and permit and construction times. The outputs of the model are cumulative probability distributions of financial measures such as capital cost, levelized cost, and profit. These outputs are well suited for use in an investment decision incorporating risk. The model has the powerful feature that conditional probability distribution can be used to account for correlations among any of the input variables. The model has been applied to a geothermal reservoir at Heber, California, for a 45-MW binary electric plant. Under the assumptions made, the reservoir appears to be economically viable.
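The conditional-distribution mechanism for correlated inputs can be sketched as follows. The numbers are invented for illustration and are not the Heber reservoir figures: flow rate is sampled conditionally on well depth, so the depth-flow correlation propagates into the simulated cost distribution.

```python
import numpy as np

def simulate_costs(n=100_000, seed=0):
    """Toy Monte Carlo for a geothermal project: flow rate is sampled
    conditionally on well depth to encode the correlation between inputs."""
    rng = np.random.default_rng(seed)
    depth = rng.normal(2000.0, 300.0, n)                       # m (hypothetical)
    # Conditional distribution: deeper wells tend to deliver higher flow.
    flow = rng.normal(50.0 + 0.01 * (depth - 2000.0), 5.0, n)  # kg/s
    # Hypothetical cost model: drilling cost plus a penalty for low flow.
    capital = 1000.0 * depth + 20_000.0 * np.maximum(60.0 - flow, 0.0)
    return np.percentile(capital, [10, 50, 90])

p10, p50, p90 = simulate_costs()
```

The output percentiles trace out the cumulative probability distribution of capital cost, the form of result the model feeds into investment decisions under risk.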
Using a Bayesian network to predict barrier island geomorphologic characteristics
Gutierrez, Ben; Plant, Nathaniel G.; Thieler, E. Robert; Turecek, Aaron
2015-01-01
Quantifying geomorphic variability of coastal environments is important for understanding and describing the vulnerability of coastal topography, infrastructure, and ecosystems to future storms and sea level rise. Here we use a Bayesian network (BN) to test the importance of multiple interactions between barrier island geomorphic variables. This approach models complex interactions and handles uncertainty, which is intrinsic to future sea level rise, storminess, or anthropogenic processes (e.g., beach nourishment and other forms of coastal management). The BN was developed and tested at Assateague Island, Maryland/Virginia, USA, a barrier island with sufficient geomorphic and temporal variability to evaluate our approach. We tested the ability to predict dune height, beach width, and beach height variables using inputs that included longer-term, larger-scale, or external variables (historical shoreline change rates, distances to inlets, barrier width, mean barrier elevation, and anthropogenic modification). Data sets from three different years spanning nearly a decade sampled substantial temporal variability and serve as a proxy for analysis of future conditions. We show that distinct geomorphic conditions are associated with different long-term shoreline change rates and that the most skillful predictions of dune height, beach width, and beach height depend on including multiple input variables simultaneously. The predictive relationships are robust to variations in the amount of input data and to variations in model complexity. The resulting model can be used to evaluate scenarios related to coastal management plans and/or future scenarios where shoreline change rates may differ from those observed historically.
Characterization of postural control impairment in women with fibromyalgia
Sempere-Rubio, Núria; López-Pascual, Juan; Aguilar-Rodríguez, Marta; Cortés-Amador, Sara; Espí-López, Gemma; Villarrasa-Sapiña, Israel
2018-01-01
The main goal of this cross-sectional study was to detect whether women with fibromyalgia syndrome (FMS) have altered postural control and to study the sensory contribution to postural control. We also explored the possibility that self-induced anxiety and lower limb strength may be related to postural control. For this purpose, 129 women within an age range of 40 to 70 years were enrolled. Eighty of the enrolled women had FMS. Postural control variables, such as Ellipse, Root mean square (RMS) and Sample entropy (SampEn), in both directions (i.e. mediolateral and anteroposterior), were calculated under five different conditions. A force plate was used to register the center of pressure shifts. Furthermore, isometric lower limb strength was recorded with a portable dynamometer and normalized by lean body mass. The results showed that women with FMS have impaired postural control compared with healthy people, as they presented a significant increase in Ellipse and RMS values (p<0.05) and a significant decrease in SampEn in both directions (p<0.05). Postural control also worsens with the gradual alteration of sensory inputs in this population (p<0.05). Performing a stressor dual task only impacts Ellipse in women with FMS (p>0.05). There were no significant correlations between postural control and lower limb strength (p>0.05). Therefore, women with FMS have impaired postural control that is worse when sensory inputs are altered but is not correlated with their lower limb strength. PMID:29723223
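Sample entropy, one of the postural-control variables above, can be computed as sketched below. This is a simplified SampEn(m, r) implementation (template counting with slightly simplified edge handling), shown on synthetic signals rather than center-of-pressure data; lower values indicate a more regular signal.

```python
import numpy as np

def sample_entropy(x, m=2, r_frac=0.2):
    """SampEn(m, r): -log of the chance that runs matching for m points
    also match for m+1 (Chebyshev distance, tolerance r = r_frac * std)."""
    x = np.asarray(x, float)
    r = r_frac * x.std()
    def matches(mm):
        T = np.array([x[i:i + mm] for i in range(len(x) - mm)])
        d = np.abs(T[:, None, :] - T[None, :, :]).max(axis=2)
        return (d <= r).sum() - len(T)        # drop self-matches
    return -np.log(matches(m + 1) / matches(m))

rng = np.random.default_rng(0)
regular = np.sin(np.linspace(0, 8 * np.pi, 400))   # predictable sway proxy
irregular = rng.normal(size=400)                   # noisy sway proxy
se_regular = sample_entropy(regular)
se_irregular = sample_entropy(irregular)
```

The study's finding of decreased SampEn in women with FMS corresponds to center-of-pressure trajectories that are more regular and less adaptable than in controls.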
Variable synaptic strengths controls the firing rate distribution in feedforward neural networks.
Ly, Cheng; Marsat, Gary
2018-02-01
Heterogeneity of firing rate statistics is known to have severe consequences for neural coding. Recent experimental recordings in weakly electric fish indicate that the distribution width of superficial pyramidal cell firing rates (trial- and time-averaged) in the electrosensory lateral line lobe (ELL) depends on the stimulus, and also that network inputs can mediate changes in the firing rate distribution across the population. We previously developed theoretical methods to understand how two attributes (synaptic and intrinsic heterogeneity) interact and alter the firing rate distribution in a population of integrate-and-fire neurons with random recurrent coupling. Inspired by our experimental data, we extend these theoretical results to a delayed feedforward spiking network that qualitatively captures the changes of firing rate heterogeneity observed in in vivo recordings. We demonstrate how heterogeneous neural attributes alter firing rate heterogeneity, accounting for the effect with various sensory stimuli. The model predicts how the strength of the effective network connectivity is related to intrinsic heterogeneity in such delayed feedforward networks: the strength of the feedforward input is positively correlated with excitability (threshold value for spiking) when firing rate heterogeneity is low and is negatively correlated with excitability when firing rate heterogeneity is high. We also show how our theory can be used to predict effective neural architecture. We demonstrate that neural attributes do not interact in a simple manner but rather in a complex stimulus-dependent fashion to control neural heterogeneity and discuss how it can ultimately shape population codes.
Bouchet, S; Rodriguez-Gonzalez, P; Bridou, R; Monperrus, M; Tessier, E; Anschutz, P; Guyoneaud, R; Amouroux, D
2013-03-01
Stable isotopic tracer methodologies now allow the evaluation of the reactivity of the endogenous (ambient) and exogenous (added) Hg to further predict the potential effect of Hg inputs in ecosystems. The differential reactivity of endogenous and exogenous Hg was compared in superficial sediments collected in a coastal lagoon (Arcachon Bay) and in an estuary (Adour River) from the Bay of Biscay (SW France). All Hg species (gaseous, aqueous, and solid fraction) and ancillary data were measured during time course slurry experiments under variable redox conditions. The average endogenous methylation yield was higher in the estuarine (1.2 %) than in the lagoonal sediment (0.5 %), although both methylation and demethylation rates were higher in the lagoonal sediment in relation with a higher sulfate-reducing activity. Demethylation was overall more consistent than methylation in both sediments. The endogenous and exogenous Hg behaviors were always correlated but the exogenous inorganic Hg (IHg) partitioning into water was 2.0-4.3 times higher than the endogenous one. Its methylation was just slightly higher (1.4) in the estuarine sediment while the difference in the lagoonal sediment was much larger (3.6). The relative endogenous and exogenous methylation yields were not correlated to IHg partitioning, demonstrating that the bioavailable species distributions were different for the two IHg pools. In both sediments, the exogenous IHg partitioning equaled the endogenous one within a week, while its higher methylation lasted for months. Such results provide an original assessment approach to compare coastal sediment response to Hg inputs.
Xue, Kai; Wu, Liyou; Deng, Ye; He, Zhili; Van Nostrand, Joy; Robertson, Philip G.; Schmidt, Thomas M.
2013-01-01
Various agriculture management practices may have distinct influences on soil microbial communities and their ecological functions. In this study, we utilized GeoChip, a high-throughput microarray-based technique containing approximately 28,000 probes for genes involved in nitrogen (N)/carbon (C)/sulfur (S)/phosphorus (P) cycles and other processes, to evaluate the potential functions of soil microbial communities under conventional (CT), low-input (LI), and organic (ORG) management systems at an agricultural research site in Michigan. Compared to CT, a high diversity of functional genes was observed in LI. The functional gene diversity in ORG did not differ significantly from that of either CT or LI. Abundances of genes encoding enzymes involved in C/N/P/S cycles were generally lower in CT than in LI or ORG, with the exceptions of genes in pathways for lignin degradation, methane generation/oxidation, and assimilatory N reduction, which all remained unchanged. Canonical correlation analysis showed that selected soil (bulk density, pH, cation exchange capacity, total C, C/N ratio, NO3−, NH4+, available phosphorus content, and available potassium content) and crop (seed and whole biomass) variables could explain 69.5% of the variation of soil microbial community composition. Also, significant correlations were observed between NO3− concentration and denitrification genes, NH4+ concentration and ammonification genes, and N2O flux and denitrification genes, indicating a close linkage between soil N availability or process and associated functional genes. PMID:23241975
Finite element model correlation of a composite UAV wing using modal frequencies
NASA Astrophysics Data System (ADS)
Oliver, Joseph A.; Kosmatka, John B.; Hemez, François M.; Farrar, Charles R.
2007-04-01
The current work details the implementation of a meta-model based correlation technique on a composite UAV wing test piece and associated finite element (FE) model. This method involves training polynomial models to emulate the FE input-output behavior and then using numerical optimization to produce a set of correlated parameters which can be returned to the FE model. After discussions about the practical implementation, the technique is validated on a composite plate structure and then applied to the UAV wing structure, where it is furthermore compared to a more traditional Newton-Raphson technique which iteratively uses first-order Taylor-series sensitivity. The experimental testpiece wing comprises two graphite/epoxy prepreg and Nomex honeycomb co-cured skins and two prepreg spars bonded together in a secondary process. MSC.Nastran FE models of the four structural components are correlated independently, using modal frequencies as correlation features, before being joined together into the assembled structure and compared to experimentally measured frequencies from the assembled wing in a cantilever configuration. Results show that significant improvements can be made to the assembled model fidelity, with the meta-model procedure producing slightly superior results to Newton-Raphson iteration. Final evaluation of component correlation using the assembled wing comparison showed worse results for each correlation technique, with the meta-model technique worse overall. This can most likely be attributed to difficulty in correlating the open-section spars; however, there is also some question about non-unique update variable combinations in the current configuration, which leads correlation away from physically probable values.
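The surrogate-plus-optimization loop can be reduced to a one-parameter sketch. The stand-in "FE model" below is hypothetical (a square-root frequency-stiffness law), not the MSC.Nastran wing model: a quadratic polynomial is trained on a few expensive runs, then the cheap surrogate is searched for the stiffness that matches a measured modal frequency.

```python
import numpy as np

def expensive_model(E):
    """Hypothetical stand-in for an FE modal run: first frequency vs stiffness."""
    return 10.0 * np.sqrt(E)

# 1. Train the meta-model: a quadratic polynomial fitted to a few "FE runs".
E_train = np.linspace(0.5, 2.0, 8)
coeffs = np.polyfit(E_train, expensive_model(E_train), 2)

# 2. Optimize over the cheap surrogate to match a measured frequency
#    (grid search here; any numerical optimizer would do).
f_measured = 12.0
E_grid = np.linspace(0.5, 2.0, 2001)
E_corr = E_grid[np.argmin((np.polyval(coeffs, E_grid) - f_measured) ** 2)]
```

Once trained, the polynomial replaces every FE evaluation inside the optimizer, which is what makes the meta-model approach competitive with sensitivity-based Newton-Raphson updating.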
Symbolic PathFinder: Symbolic Execution of Java Bytecode
NASA Technical Reports Server (NTRS)
Pasareanu, Corina S.; Rungta, Neha
2010-01-01
Symbolic Pathfinder (SPF) combines symbolic execution with model checking and constraint solving for automated test case generation and error detection in Java programs with unspecified inputs. In this tool, programs are executed on symbolic inputs representing multiple concrete inputs. Values of variables are represented as constraints generated from the analysis of Java bytecode. The constraints are solved using off-the shelf solvers to generate test inputs guaranteed to achieve complex coverage criteria. SPF has been used successfully at NASA, in academia, and in industry.
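The workflow can be illustrated conceptually. This toy sketch is not SPF: the real tool interprets Java bytecode and hands path conditions to off-the-shelf SMT solvers, whereas here the paths of a tiny function are enumerated by hand and a brute-force search stands in for the solver.

```python
# Conceptual sketch of symbolic execution: enumerate the paths of a tiny
# program, record each path condition, and find concrete inputs satisfying it.

def program(x, y):
    if x > y:              # path condition: x > y
        return "swap"
    return "keep"          # path condition: x <= y

path_conditions = {
    "swap": lambda x, y: x > y,
    "keep": lambda x, y: x <= y,
}

def find_test_input(cond, search=range(-5, 6)):
    """Brute-force stand-in for the constraint solver."""
    for x in search:
        for y in search:
            if cond(x, y):
                return x, y
    return None

# One concrete test input per feasible path = guaranteed branch coverage.
tests = {path: find_test_input(cond) for path, cond in path_conditions.items()}
```

Solving one condition per feasible path is what lets the generated test suite guarantee coverage criteria by construction rather than by chance.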
Sensitivity Analysis of the Integrated Medical Model for ISS Programs
NASA Technical Reports Server (NTRS)
Goodenow, D. A.; Myers, J. G.; Arellano, J.; Boley, L.; Garcia, Y.; Saile, L.; Walton, M.; Kerstman, E.; Reyes, D.; Young, M.
2016-01-01
Sensitivity analysis estimates the relative contribution of the uncertainty in input values to the uncertainty of model outputs. Partial Rank Correlation Coefficient (PRCC) and Standardized Rank Regression Coefficient (SRRC) are methods of conducting sensitivity analysis on nonlinear simulation models like the Integrated Medical Model (IMM). The PRCC method estimates the sensitivity using partial correlation of the ranks of the generated input values to each generated output value. The partial part is so named because adjustments are made for the linear effects of all the other input values in the calculation of correlation between a particular input and each output. In SRRC, standardized regression-based coefficients measure the sensitivity of each input, adjusted for all the other inputs, on each output. Because the relative ranking of each of the inputs and outputs is used, as opposed to the values themselves, both methods accommodate the nonlinear relationship of the underlying model. As part of the IMM v4.0 validation study, simulations are available that predict 33 person-missions on ISS and 111 person-missions on STS. These simulated data predictions feed the sensitivity analysis procedures. The inputs to the sensitivity procedures include the number of occurrences of each of the one hundred IMM medical conditions generated over the simulations and the associated IMM outputs: total quality time lost (QTL), number of evacuations (EVAC), and number of loss of crew lives (LOCL). The IMM team will report the results of using PRCC and SRRC on IMM v4.0 predictions of the ISS and STS missions created as part of the external validation study. Tornado plots will assist in the visualization of the condition-related input sensitivities to each of the main outcomes. The outcomes of this sensitivity analysis will drive review focus by identifying conditions where changes in uncertainty could drive changes in overall model output uncertainty. 
These efforts are an integral part of the overall verification, validation, and credibility review of IMM v4.0.
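The PRCC computation described above can be sketched as follows. The IMM data are not available here, so the example uses synthetic inputs with a deliberately nonlinear but monotone input-output relationship, which is the case PRCC is designed to handle.

```python
import numpy as np

def prcc(X, y):
    """Partial rank correlation: rank-transform, regress the other inputs out
    of both the input column and the output, then correlate the residuals."""
    def ranks(a):
        return np.argsort(np.argsort(a, axis=0), axis=0).astype(float)
    Xr, yr = ranks(X), ranks(y)
    n, k = Xr.shape
    out = np.empty(k)
    for j in range(k):
        Z = np.column_stack([np.ones(n), np.delete(Xr, j, axis=1)])
        beta_x = np.linalg.lstsq(Z, Xr[:, j], rcond=None)[0]
        beta_y = np.linalg.lstsq(Z, yr, rcond=None)[0]
        out[j] = np.corrcoef(Xr[:, j] - Z @ beta_x, yr - Z @ beta_y)[0, 1]
    return out

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))                 # three synthetic "condition" inputs
y = np.exp(X[:, 0]) + 0.3 * X[:, 1] + 0.1 * rng.normal(size=500)
coeffs = prcc(X, y)                           # x0 strong, x1 weak, x2 irrelevant
```

Because the ranks, not the raw values, are correlated, the strongly nonlinear influence of the first input is still recovered, while the irrelevant third input scores near zero.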
Bilingualism: A Pearl to Overcome Certain Perils of Cochlear Implants
Humphries, Tom; Kushalnagar, Poorna; Mathur, Gaurav; Napoli, Donna Jo; Padden, Carol; Rathmann, Christian; Smith, Scott
2014-01-01
Cochlear implants (CI) have demonstrated success in improving young deaf children’s speech and low-level speech awareness across a range of auditory functions, but this success is highly variable, and how this success correlates to high-level language development is even more variable. Prevalence on the success rate of CI as an outcome for language development is difficult to obtain because studies vary widely in methodology and variables of interest, and because not all cochlear implant technology (which continues to evolve) is the same. Still, even if the notion of treatment failure is limited narrowly to those who gain no auditory benefit from CI in that they cannot discriminate among ambient noises, the reported treatment failure rate is high enough to call into question the current lack of consideration of alternative approaches to ensure young deaf children’s language development. Recent research has highlighted the risks of delaying language input during critical periods of brain development with concomitant consequences for cognitive and social skills. As a result, we propose that before, during, and after implantation deaf children learn a sign language along with a spoken language to ensure their maximal language development and optimal long-term developmental outcomes. PMID:25419095
A Monte Carlo investigation of thrust imbalance of solid rocket motor pairs
NASA Technical Reports Server (NTRS)
Sforzini, R. H.; Foster, W. A., Jr.; Johnson, J. S., Jr.
1974-01-01
A technique is described for theoretical, statistical evaluation of the thrust imbalance of pairs of solid-propellant rocket motors (SRMs) firing in parallel. Sets of the significant variables, determined as a part of the research, are selected using a random sampling technique and the imbalance calculated for a large number of motor pairs. The performance model is upgraded to include the effects of statistical variations in the ovality and alignment of the motor case and mandrel. Effects of cross-correlations of variables are minimized by selecting for the most part completely independent input variables, over forty in number. The imbalance is evaluated in terms of six time - varying parameters as well as eleven single valued ones which themselves are subject to statistical analysis. A sample study of the thrust imbalance of 50 pairs of 146 in. dia. SRMs of the type to be used on the space shuttle is presented. The FORTRAN IV computer program of the analysis and complete instructions for its use are included. Performance computation time for one pair of SRMs is approximately 35 seconds on the IBM 370/155 using the FORTRAN H compiler.
Statistical downscaling of precipitation using long short-term memory recurrent neural networks
NASA Astrophysics Data System (ADS)
Misra, Saptarshi; Sarkar, Sudeshna; Mitra, Pabitra
2017-11-01
Hydrological impacts of global climate change on regional scale are generally assessed by downscaling large-scale climatic variables, simulated by General Circulation Models (GCMs), to regional, small-scale hydrometeorological variables like precipitation, temperature, etc. In this study, we propose a new statistical downscaling model based on Recurrent Neural Network with Long Short-Term Memory which captures the spatio-temporal dependencies in local rainfall. The previous studies have used several other methods such as linear regression, quantile regression, kernel regression, beta regression, and artificial neural networks. Deep neural networks and recurrent neural networks have been shown to be highly promising in modeling complex and highly non-linear relationships between input and output variables in different domains and hence we investigated their performance in the task of statistical downscaling. We have tested this model on two datasets—one on precipitation in Mahanadi basin in India and the second on precipitation in Campbell River basin in Canada. Our autoencoder coupled long short-term memory recurrent neural network model performs the best compared to other existing methods on both the datasets with respect to temporal cross-correlation, mean squared error, and capturing the extremes.
Role of Updraft Velocity in Temporal Variability of Global Cloud Hydrometeor Number
NASA Technical Reports Server (NTRS)
Sullivan, Sylvia C.; Lee, Dong Min; Oreopoulos, Lazaros; Nenes, Athanasios
2016-01-01
Understanding how dynamical and aerosol inputs affect the temporal variability of hydrometeor formation in climate models will help to explain sources of model diversity in cloud forcing, to provide robust comparisons with data, and, ultimately, to reduce the uncertainty in estimates of the aerosol indirect effect. This variability attribution can be done at various spatial and temporal resolutions with metrics derived from online adjoint sensitivities of droplet and crystal number to relevant inputs. Such metrics are defined and calculated from simulations using the NASA Goddard Earth Observing System Model, Version 5 (GEOS-5) and the National Center for Atmospheric Research Community Atmosphere Model Version 5.1 (CAM5.1). Input updraft velocity fluctuations can explain as much as 48% of temporal variability in output ice crystal number and 61% in droplet number in GEOS-5 and up to 89% of temporal variability in output ice crystal number in CAM5.1. In both models, this vertical velocity attribution depends strongly on altitude. Despite its importance for hydrometeor formation, simulated vertical velocity distributions are rarely evaluated against observations due to the sparsity of relevant data. Coordinated effort by the atmospheric community to develop more consistent, observationally based updraft treatments will help to close this knowledge gap.
Role of updraft velocity in temporal variability of global cloud hydrometeor number
Sullivan, Sylvia C.; Lee, Dongmin; Oreopoulos, Lazaros; ...
2016-05-16
Understanding how dynamical and aerosol inputs affect the temporal variability of hydrometeor formation in climate models will help to explain sources of model diversity in cloud forcing, to provide robust comparisons with data, and, ultimately, to reduce the uncertainty in estimates of the aerosol indirect effect. This variability attribution can be done at various spatial and temporal resolutions with metrics derived from online adjoint sensitivities of droplet and crystal number to relevant inputs. Such metrics are defined and calculated from simulations using the NASA Goddard Earth Observing System Model, Version 5 (GEOS-5) and the National Center for Atmospheric Research Community Atmosphere Model Version 5.1 (CAM5.1). Input updraft velocity fluctuations can explain as much as 48% of temporal variability in output ice crystal number and 61% in droplet number in GEOS-5 and up to 89% of temporal variability in output ice crystal number in CAM5.1. In both models, this vertical velocity attribution depends strongly on altitude. Despite its importance for hydrometeor formation, simulated vertical velocity distributions are rarely evaluated against observations due to the sparsity of relevant data. Finally, coordinated effort by the atmospheric community to develop more consistent, observationally based updraft treatments will help to close this knowledge gap.
Role of updraft velocity in temporal variability of global cloud hydrometeor number
NASA Astrophysics Data System (ADS)
Sullivan, Sylvia C.; Lee, Dongmin; Oreopoulos, Lazaros; Nenes, Athanasios
2016-05-01
Understanding how dynamical and aerosol inputs affect the temporal variability of hydrometeor formation in climate models will help to explain sources of model diversity in cloud forcing, to provide robust comparisons with data, and, ultimately, to reduce the uncertainty in estimates of the aerosol indirect effect. This variability attribution can be done at various spatial and temporal resolutions with metrics derived from online adjoint sensitivities of droplet and crystal number to relevant inputs. Such metrics are defined and calculated from simulations using the NASA Goddard Earth Observing System Model, Version 5 (GEOS-5) and the National Center for Atmospheric Research Community Atmosphere Model Version 5.1 (CAM5.1). Input updraft velocity fluctuations can explain as much as 48% of temporal variability in output ice crystal number and 61% in droplet number in GEOS-5 and up to 89% of temporal variability in output ice crystal number in CAM5.1. In both models, this vertical velocity attribution depends strongly on altitude. Despite its importance for hydrometeor formation, simulated vertical velocity distributions are rarely evaluated against observations due to the sparsity of relevant data. Coordinated effort by the atmospheric community to develop more consistent, observationally based updraft treatments will help to close this knowledge gap.
NASA Astrophysics Data System (ADS)
Golay, Jean; Kanevski, Mikhaïl
2013-04-01
The present research deals with the exploration and modeling of a complex dataset of 200 measurement points of sediment pollution by heavy metals in Lake Geneva. The fundamental idea was to use multivariate Artificial Neural Networks (ANN) along with geostatistical models and tools in order to improve the accuracy and the interpretability of data modeling. The results obtained with ANN were compared to those of traditional geostatistical algorithms like ordinary (co)kriging and (co)kriging with an external drift. Exploratory data analysis highlighted a great variety of relationships (i.e. linear, non-linear, independence) between the 11 variables of the dataset (i.e. Cadmium, Mercury, Zinc, Copper, Titanium, Chromium, Vanadium and Nickel, as well as the spatial coordinates of the measurement points and their depth). Then, exploratory spatial data analysis (i.e. anisotropic variography, local spatial correlations and moving window statistics) was carried out. It was shown that the different phenomena to be modeled were characterized by high spatial anisotropies, complex spatial correlation structures and heteroscedasticity. A feature selection procedure based on General Regression Neural Networks (GRNN) was also applied to create subsets of variables enabling improved predictions during the modeling phase. The basic modeling was conducted using a Multilayer Perceptron (MLP), a workhorse of ANN. MLP models are robust and highly flexible tools which can incorporate, in a nonlinear manner, different kinds of high-dimensional information. In the present research, the input layer was made of either two neurons (spatial coordinates) or three (when depth as auxiliary information could possibly capture an underlying trend), and the output layer was composed of one (univariate MLP) to eight neurons corresponding to the heavy metals of the dataset (multivariate MLP). 
MLP models with three input neurons can be referred to as Artificial Neural Networks with EXternal drift (ANNEX). Moreover, the exact number of output neurons and the selection of the corresponding variables were based on the subsets created during the exploratory phase. Concerning hidden layers, no restrictions were made and multiple architectures were tested. For each MLP model, the quality of the modeling procedure was assessed by variograms: if the variogram of the residuals shows a pure nugget effect, and if the level of that nugget corresponds exactly to the nugget value of the theoretical variogram of the corresponding variable, then all the structured information has been correctly extracted without overfitting. It is also worth mentioning that simple MLP models are not always able to remove all the spatial correlation structure from the data. In that case, Neural Network Residual Kriging (NNRK) can be carried out, and risk assessment can be conducted with Neural Network Residual Simulations (NNRS). Finally, the results of the ANNEX models were compared to those of ordinary (co)kriging and (co)kriging with an external drift. It was shown that the ANNEX models performed better than traditional geostatistical algorithms when the relationship between the variable of interest and the auxiliary predictor was not linear. References: Kanevski, M. and Maignan, M. (2004). Analysis and Modelling of Spatial Environmental Data. Lausanne: EPFL Press.
Dhawale, Ashesh K.; Hagiwara, Akari; Bhalla, Upinder S.; Murthy, Venkatesh N.; Albeanu, Dinu F.
2011-01-01
Sensory inputs frequently converge on the brain in a spatially organized manner, often with overlapping inputs to multiple target neurons. Whether the responses of target neurons with common inputs become decorrelated depends on the contribution of local circuit interactions. We addressed this issue in the olfactory system using newly generated transgenic mice expressing channelrhodopsin-2 in all olfactory sensory neurons. By selectively stimulating individual glomeruli with light, we identified mitral/tufted (M/T) cells that receive common input (sister cells). Sister M/T cells had highly correlated responses to odors as measured by average spike rates, but their spike timing in relation to respiration was differentially altered. In contrast, non-sister M/T cells correlated poorly on both these measures. We suggest that sister M/T cells carry two different channels of information: average activity representing shared glomerular input, and phase-specific information that refines odor representations and is substantially independent for sister M/T cells. PMID:20953197
Soft sensor modeling based on variable partition ensemble method for nonlinear batch processes
NASA Astrophysics Data System (ADS)
Wang, Li; Chen, Xiangguang; Yang, Kai; Jin, Huaiping
2017-01-01
Batch processes are typically characterized by nonlinearity and system uncertainty, so a conventional single model may be ill-suited. A soft sensor based on a local learning strategy with a variable partition ensemble method is developed for quality prediction in nonlinear and non-Gaussian batch processes. A set of input variable subsets is obtained by bootstrapping and the partial mutual information (PMI) criterion. Multiple local Gaussian process regression (GPR) models are then developed, one for each local input variable set. When a new test sample arrives, the posterior probability of each best-performing local model is estimated by Bayesian inference and used to combine the local GPR models into the final prediction. The proposed soft sensor is demonstrated on an industrial fed-batch chlortetracycline fermentation process.
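The combining step described above can be sketched as a posterior-weighted sum. The snippet below is a minimal illustration, not the paper's implementation: the local GPR models are replaced by plain functions, and each model's input region is summarized by a hypothetical one-dimensional Gaussian used for the Bayesian weighting.

```python
import math

def gaussian_likelihood(x, mean, std):
    """Likelihood of the new sample x under a local model's input region."""
    return math.exp(-0.5 * ((x - mean) / std) ** 2) / (std * math.sqrt(2 * math.pi))

def combine_local_models(x_new, local_models, priors=None):
    """Bayesian-weighted combination of local model predictions.

    local_models: list of (predict_fn, region_mean, region_std) triples.
    """
    n = len(local_models)
    priors = priors or [1.0 / n] * n
    # Posterior probability of each local model given the new sample
    # (Bayes' rule with Gaussian likelihoods over the input regions).
    likelihoods = [gaussian_likelihood(x_new, m, s) * p
                   for (_, m, s), p in zip(local_models, priors)]
    total = sum(likelihoods)
    weights = [l / total for l in likelihoods]
    # Final prediction: posterior-weighted sum of the local predictions.
    return sum(w * f(x_new) for w, (f, _, _) in zip(weights, local_models))

models = [
    (lambda x: 2.0 * x, 0.0, 1.0),   # stand-in model for low-range inputs
    (lambda x: 3.0 * x, 5.0, 1.0),   # stand-in model for high-range inputs
]
print(combine_local_models(1.0, models))  # close to 2.0: first model dominates
```

A test sample near a region's center gives that region's model a posterior weight near one, so the ensemble smoothly hands off between local models.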
NASA Technical Reports Server (NTRS)
Briggs, Maxwell H.; Schifer, Nicholas A.
2012-01-01
The U.S. Department of Energy (DOE) and Lockheed Martin Space Systems Company (LMSSC) have been developing the Advanced Stirling Radioisotope Generator (ASRG) for use as a power system for space science missions. This generator would use two high-efficiency Advanced Stirling Convertors (ASCs), developed by Sunpower Inc. and NASA Glenn Research Center (GRC). The ASCs convert thermal energy from a radioisotope heat source into electricity. As part of ground testing of these ASCs, different operating conditions are used to simulate expected mission conditions. These conditions require achieving a particular operating frequency, hot end and cold end temperatures, and specified electrical power output for a given net heat input. In an effort to improve net heat input predictions, numerous tasks have been performed which provided a more accurate value for net heat input into the ASCs, including testing validation hardware, known as the Thermal Standard, to provide a direct comparison to numerical and empirical models used to predict convertor net heat input. This validation hardware provided a comparison for scrutinizing and improving empirical correlations and numerical models of ASC-E2 net heat input. This hardware simulated the characteristics of an ASC-E2 convertor in both an operating and non-operating mode. This paper describes the Thermal Standard testing and the conclusions of the validation effort applied to the empirical correlation methods used by the Radioisotope Power System (RPS) team at NASA Glenn.
Nutrient Mass Balance for the Mobile River Basin in Alabama, Georgia, and Mississippi
NASA Astrophysics Data System (ADS)
Harned, D. A.; Harvill, J. S.; McMahon, G.
2001-12-01
The source and fate of nutrients in the Mobile River drainage basin are important water-quality concerns in Alabama, Georgia, and Mississippi. Land cover in the basin is 74 percent forested, 16 percent agricultural, 2.5 percent developed, and 4 percent wetland. A nutrient mass balance calculated for 18 watersheds in the Mobile River Basin indicates that agricultural non-point nitrogen and phosphorus sources and urban non-point nitrogen sources are the most important factors associated with nutrients in the streams. Nitrogen and phosphorus inputs from atmospheric deposition, crop fertilizer, biological nitrogen fixation, animal waste, and point sources were estimated for each of the 18 drainage basins. Total basin nitrogen inputs ranged from 27 to 93 percent from atmospheric deposition (56 percent mean), 4 to 45 percent from crop fertilizer (25 percent mean), <0.01 to 31 percent from biological nitrogen fixation (8 percent mean), 2 to 14 percent from animal waste (8 percent mean), and 0.2 to 11 percent from point sources (3 percent mean). Total basin phosphorus inputs ranged from 10 to 39 percent from atmospheric deposition (26 percent mean), 7 to 51 percent from crop fertilizer (28 percent mean), 20 to 64 percent from animal waste (41 percent mean), and 0.2 to 11 percent from point sources (3 percent mean). Nutrient outputs for the watersheds were estimated by calculating instream loads and estimating nutrient uptake, or withdrawal, by crops. The difference between the total basin inputs and outputs represents nutrients that are retained or processed within the basin while moving from the point of use to the stream, or in the stream. Nitrogen output, as a percentage of the total basin nitrogen inputs, ranged from 19 to 79 percent for instream loads (35 percent mean) and from 0.01 to 32 percent for crop harvest (10 percent mean). From 53 to 87 percent (75 percent mean) of nitrogen inputs were retained within the 18 basins. 
Phosphorus output ranged from 9 to 29 percent for instream loads (18 percent mean) and from 0.01 to 23 percent for crop harvest (7 percent mean). The basins retained from 60 to 87 percent (74 percent mean) of phosphorus inputs. Correlations of basin nutrient output loads and concentrations with basin inputs, and of output loads and concentrations with basin land use, were tested using the Spearman rank test. The correlation analysis indicated that higher nitrogen concentrations in the streams are associated with urban areas and higher loads are associated with agriculture; high phosphorus output loads and concentrations are associated with agriculture. Higher nutrient loads in agricultural basins are partly an effect of basin size: larger basins generate larger nutrient loads. Nutrient loads and concentrations showed no significant correlation with point-source inputs. Nitrogen loads were significantly (p<0.05, correlation coefficient >0.5) higher in basins with greater cropland areas. Nitrogen concentrations also increased as residential, commercial, and total urban areas increased. Phosphorus loads were positively correlated with animal-waste inputs, pasture, and total agricultural land. Phosphorus concentrations were highest in basins with the greatest amounts of row-crop agriculture.
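The retention bookkeeping behind these percentages is a simple mass balance: inputs minus outputs, expressed as a fraction of inputs. A minimal sketch, with illustrative numbers rather than values taken from the study:

```python
def percent_retained(inputs, outputs):
    """Percent of total nutrient inputs retained or processed in the basin.

    inputs:  dict of source -> mass entering the basin (e.g. kg/yr)
    outputs: dict of sink   -> mass leaving (instream load, crop harvest)
    """
    total_in = sum(inputs.values())
    total_out = sum(outputs.values())
    return 100.0 * (total_in - total_out) / total_in

# Illustrative nitrogen budget for one hypothetical basin (units of mass).
nitrogen_in = {"atmospheric": 56.0, "fertilizer": 25.0, "fixation": 8.0,
               "animal_waste": 8.0, "point_sources": 3.0}
nitrogen_out = {"instream_load": 19.0, "crop_harvest": 6.0}

print(percent_retained(nitrogen_in, nitrogen_out))  # 75.0
```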
Hu, Jing; Zheng, Yi; Gao, Jianbo
2013-01-01
Understanding the causal relation between neural inputs and movements is very important for the success of brain-machine interfaces (BMIs). In this study, we analyze 104 neurons’ firings using statistical, information theoretic, and fractal analysis. The latter include Fano factor analysis, multifractal adaptive fractal analysis (MF-AFA), and wavelet multifractal analysis. We find neuronal firings are highly non-stationary, and Fano factor analysis always indicates long-range correlations in neuronal firings, irrespective of whether those firings are correlated with movement trajectory or not, and thus does not reveal any actual correlations between neural inputs and movements. On the other hand, MF-AFA and wavelet multifractal analysis clearly indicate that when neuronal firings are not well correlated with movement trajectory, they do not have or only have weak temporal correlations. When neuronal firings are well correlated with movements, they are characterized by very strong temporal correlations, up to a time scale comparable to the average time between two successive reaching tasks. This suggests that neurons well correlated with hand trajectory experienced a “re-setting” effect at the start of each reaching task, in the sense that within the movement correlated neurons the spike trains’ long-range dependences persisted about the length of time the monkey used to switch between task executions. A new task execution re-sets their activity, making them only weakly correlated with their prior activities on longer time scales. We further discuss the significance of the coalition of those important neurons in executing cortical control of prostheses. PMID:24130549
Kantún-Manzano, C A; Herrera-Silveira, J A; Arcega-Cabrera, F
2018-01-01
The influence of coastal submarine groundwater discharges (SGD) on the distribution and abundance of seagrass meadows was investigated. In 2012, hydrological variability, nutrient variability in sediments and the biotic characteristics of two seagrass beds, one with SGD present and one without, were studied. Findings showed that SGD inputs were related to one dominant seagrass species. To further understand this, a generalized additive model (GAM) was used to explore the relationship between seagrass biomass and environmental conditions (water and sediment variables). Salinity range (21-35.5 PSU) was the most influential variable (85%), explaining why H. wrightii was the sole plant species present at the SGD site. At the site without SGD, a GAM could not be fitted since the environmental variables could not explain more than 60% of the total variance. This research shows the relevance of monitoring SGD inputs in coastal karstic areas since they significantly affect the biotic characteristics of seagrass beds.
NASA Astrophysics Data System (ADS)
Hadi, Sinan Jasim; Tombul, Mustafa
2018-06-01
Streamflow is an essential component of the hydrologic cycle at the regional and global scale and the main source of fresh water supply. It is highly associated with natural disasters, such as droughts and floods. Therefore, accurate streamflow forecasting is essential. Forecasting streamflow in general, and monthly streamflow in particular, is a complex process that cannot be handled by data-driven models (DDMs) alone and requires pre-processing. Wavelet transformation is one such pre-processing technique; however, continuous wavelet transformation (CWT) produces many scales that degrade the performance of any DDM because of the large number of redundant variables. This study proposes multigene genetic programming (MGGP) as a selection tool: after the CWT analysis, it selects the important scales to be fed into the artificial neural network (ANN). A basin located in the southeast of Turkey is selected as a case study to demonstrate the forecasting ability of the proposed model. One-month-ahead downstream flow is used as the output, and downstream flow, upstream flow, rainfall, temperature, and potential evapotranspiration with associated lags are used as inputs. Before modeling, wavelet coherence transformation (WCT) analysis was conducted to analyze the relationships between the variables in the time-frequency domain. Several input combinations were developed to investigate the effect of the variables on streamflow forecasting. The results indicated a high localized correlation between the streamflow and the other variables, especially the upstream flow. In the standalone layout, where the data were fed to the ANN and MGGP without CWT, performance was poor. In the best-scale layout, where the single CWT scale with the highest correlation is chosen and fed to the ANN and MGGP, performance increased slightly. 
With the proposed model, performance improved dramatically, particularly in forecasting peak values, because the inclusion of several scales allows both seasonality and irregularity to be captured. Using hydrological and meteorological variables also improved the ability to forecast the streamflow.
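The "best-scale layout" described above reduces to picking the wavelet scale most correlated with the forecast target. A hedged sketch of that selection step, assuming the CWT coefficient series are already available as plain lists (the CWT itself, and the MGGP multi-scale selection, are outside this snippet):

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def best_scale(scale_series, target):
    """Index of the CWT scale most correlated (in |r|) with the target."""
    return max(range(len(scale_series)),
               key=lambda i: abs(pearson_r(scale_series[i], target)))

# Toy data: the second candidate scale tracks the streamflow target.
target = [1.0, 2.0, 3.0, 4.0, 5.0]
scales = [
    [5.0, 1.0, 4.0, 2.0, 3.0],   # weakly related scale
    [2.1, 4.2, 5.9, 8.1, 10.0],  # nearly proportional to the target
]
print(best_scale(scales, target))  # 1
```

The study's point is precisely that this single-best-scale screen helps only slightly; feeding several selected scales to the DDM is what captures both seasonality and irregularity.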
Hammerstrom, Donald J.
2013-10-15
A method for managing the charging and discharging of batteries wherein at least one battery is connected to a battery charger, and the battery charger is connected to a power supply. A plurality of controllers in communication with one another is provided, each controller monitoring a subset of input variables. A set of charging constraints may then be generated for each controller as a function of its subset of input variables. A set of objectives for each controller may also be generated. A preferred charge rate for each controller is generated as a function of the set of objectives, the charging constraints, or both, using an algorithm that accounts for each controller's preferred charge rate and/or does not violate any of the charging constraints. A current flow between the battery and the battery charger is then provided at the resulting actual charge rate.
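The coordination step can be illustrated with a toy aggregation rule; the consensus-then-cap logic below is an assumption for illustration only, not the patented algorithm:

```python
def actual_charge_rate(preferred_rates, max_rates):
    """Aggregate controllers' preferred charge rates without violating
    any charging constraint (here, a simple per-controller rate cap)."""
    proposal = sum(preferred_rates) / len(preferred_rates)  # consensus rate
    return min([proposal] + list(max_rates))                # honor every cap

# Two controllers prefer 4 A and 8 A; constraints cap the rate at 10 A and 5 A.
print(actual_charge_rate([4.0, 8.0], [10.0, 5.0]))  # 5.0
```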
A stacking ensemble learning framework for annual river ice breakup dates
NASA Astrophysics Data System (ADS)
Sun, Wei; Trevor, Bernard
2018-06-01
River ice breakup dates (BDs) are not merely a proxy indicator of climate variability and change, but a direct concern in the management of local ice-caused flooding. A framework of stacking ensemble learning for annual river ice BDs was developed, comprising two levels of components: member and combining models. The member models described the relations between BD and its affecting indicators; the combining models linked the BD predicted by each member model with the observed BD. In particular, Bayesian regularization back-propagation artificial neural networks (BRANN) and adaptive neuro-fuzzy inference systems (ANFIS) were employed as both member and combining models. The candidate combining models also included the simple average method (SAM). The input variables for the member models were selected by a hybrid filter and wrapper method. The performances of these models were examined using leave-one-out cross validation. The Athabasca River at Fort McMurray, the largest unregulated river in Alberta, Canada, with ice jams frequently occurring in the vicinity of Fort McMurray, was selected as the study area. The breakup dates and candidate affecting indicators in 1980-2015 were collected. The results showed that the BRANN member models generally outperformed the ANFIS member models, with better performances and simpler structures. The difference between the R and MI rankings of inputs in the optimal member models may imply that the linear-correlation-based filter method is feasible for generating a range of candidate inputs for further screening through other wrapper or embedded IVS methods. The SAM and BRANN combining models generally outperformed all member models. The optimal SAM combining model combined two BRANN member models and improved upon them in terms of average squared errors by 14.6% and 18.1%, respectively. 
In this study, stacking ensemble learning was applied to the forecasting of river ice breakup dates for the first time; the approach appears promising for other river ice forecasting problems.
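The simple average method (SAM) combining step can be illustrated directly. The BRANN/ANFIS member models are stand-ins here (fixed prediction vectors), not the study's networks, and the numbers are made up to show how averaging two members can beat both:

```python
def sam_combine(member_preds):
    """Simple average method (SAM): mean of member predictions per case."""
    return [sum(p) / len(p) for p in zip(*member_preds)]

def average_squared_error(pred, obs):
    return sum((p - o) ** 2 for p, o in zip(pred, obs)) / len(obs)

observed = [100.0, 110.0, 120.0]   # e.g. breakup day-of-year, three years
member_a = [98.0, 112.0, 118.0]    # predictions from member model 1
member_b = [104.0, 106.0, 124.0]   # predictions from member model 2

combined = sam_combine([member_a, member_b])
print(average_squared_error(member_a, observed))   # 4.0
print(average_squared_error(member_b, observed))   # 16.0
print(average_squared_error(combined, observed))   # 1.0
```

Because the two members err in opposite directions here, their average cancels much of the error, which is the intuition behind the SAM combining model outperforming its members.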
Gandolfi, I; Bertolini, V; Bestetti, G; Ambrosini, R; Innocente, E; Rampazzo, G; Papacchini, M; Franzetti, A
2015-06-01
The study of the spatio-temporal variability of airborne bacterial communities has recently gained importance due to the evidence that airborne bacteria are involved in atmospheric processes and can affect human health. In this work, we described the structure of airborne microbial communities in two urban areas (Milan and Venice, Northern Italy) through Illumina sequencing of libraries containing the V5-V6 hypervariable regions of the 16S rRNA gene, and estimated the abundance of airborne bacteria with quantitative PCR (qPCR). Airborne microbial communities were dominated by few taxa, particularly Burkholderiales and Actinomycetales, more abundant in colder seasons, and Chloroplasts, more abundant in warmer seasons. By partitioning the variation in bacterial community structure, we could assess that environmental and meteorological conditions, including variability between cities and seasons, were the major determinants of the observed variation in bacterial community structure, while the chemical composition of atmospheric particulate matter (PM) made a minor contribution. In particular, Ba, SO4(2-) and Mg(2+) concentrations were significantly correlated with microbial community structure, but it was not possible to assess whether they simply co-varied with seasonal shifts of bacterial inputs to the atmosphere, or whether their variation favoured specific taxa. Both local sources of bacteria and atmospheric dispersal were involved in the assembly of airborne microbial communities, as suggested, on the one hand, by the large abundance of bacteria typical of lagoon environments (Rhodobacterales) observed in spring air samples from Venice and, on the other, by the significant effect of wind speed in shaping airborne bacterial communities at all sites.
Ramaglia, Luca; Toti, Paolo; Sbordone, Carolina; Guidetti, Franco; Martuscelli, Ranieri; Sbordone, Ludovico
2015-05-01
The purpose of this study was to determine the existence of correlations between marginal peri-implant linear bone loss and the angulation of implants in maxillary and mandibular augmented areas over the course of a 2-year survey. Dependent variables described the sample of the present retrospective chart review. By using three-dimensional radiographs, input variables, describing the implant angulation (buccal-lingual angle [φ] and mesial-distal angle [θ]) were measured; outcome variables described survival rate and marginal bone resorption (MBR) around dental implants in autogenous grafts (10 maxillae and 14 mandibles). Pairwise comparisons and linear correlation coefficient were computed. The peri-implant MBR in maxillary buccal and palatal areas appeared less intensive in the presence of an increased angulation of an implant towards the palatal side. Minor MBR was recorded around mandibular dental implants positioned at a right angle and slightly angulated towards the mesial. Resorption in buccal areas may be less intensive as the angulation of placed implants increases towards the palatal area in the maxilla, whereas for the mandible, a greater inclination towards the lingual area could be negative. In the mandibular group, when the implant was slightly angulated in the direction of the distal area, bone resorption seemed to be more marked in the buccal area. In the planning of dental implant placement in reconstructed alveolar bone with autograft, the extremely unfavourable resorption at the buccal aspect should be considered; this marginal bone loss seemed to be very sensitive to the angulation of the dental implant.
Chaves, Luciano Eustáquio; Nascimento, Luiz Fernando Costa; Rizol, Paloma Maria Silva Rocha
2017-01-01
ABSTRACT OBJECTIVE Predict the number of hospitalizations for asthma and pneumonia associated with exposure to air pollutants in the city of São José dos Campos, São Paulo State. METHODS This is a computational model using fuzzy logic based on Mamdani’s inference method. For the fuzzification of the input variables of particulate matter, ozone, sulfur dioxide and apparent temperature, we considered two relevancy functions for each variable with the linguistic approach: good and bad. For the output variable number of hospitalizations for asthma and pneumonia, we considered five relevancy functions: very low, low, medium, high and very high. DATASUS was our source for the number of hospitalizations in the year 2007 and the result provided by the model was correlated with the actual data of hospitalization with lag from zero to two days. The accuracy of the model was estimated by the ROC curve for each pollutant and in those lags. RESULTS In the year of 2007, 1,710 hospitalizations by pneumonia and asthma were recorded in São José dos Campos, State of São Paulo, with a daily average of 4.9 hospitalizations (SD = 2.9). The model output data showed positive and significant correlation (r = 0.38) with the actual data; the accuracies evaluated for the model were higher for sulfur dioxide in lag 0 and 2 and for particulate matter in lag 1. CONCLUSIONS Fuzzy modeling proved accurate for the pollutant exposure effects and hospitalization for pneumonia and asthma approach. PMID:28658366
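A Mamdani-style inference step of the kind described can be sketched for a single pollutant input; the membership-function shapes, breakpoints, and one-input rule base below are illustrative assumptions, not the calibrated model from the study:

```python
def tri(x, a, b, c):
    """Triangular membership function with feet at a, c and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def mamdani(pm10):
    """Mamdani inference: fuzzify, apply min-implication rules, and
    defuzzify by discrete centroid over the hospitalizations universe."""
    good = tri(pm10, -1.0, 0.0, 50.0)    # air quality "good"
    bad = tri(pm10, 25.0, 75.0, 150.0)   # air quality "bad"
    universe = [u / 10.0 for u in range(0, 101)]  # 0..10 daily admissions
    # Rules: good air -> "low" admissions; bad air -> "high" admissions.
    clipped = [max(min(good, tri(u, 0.0, 2.0, 5.0)),
                   min(bad, tri(u, 4.0, 8.0, 10.0))) for u in universe]
    # Centroid defuzzification of the aggregated output set.
    num = sum(u * m for u, m in zip(universe, clipped))
    den = sum(clipped)
    return num / den

print(mamdani(20.0))   # mostly "good" air: low predicted admissions
print(mamdani(120.0))  # mostly "bad" air: high predicted admissions
```

The actual model uses four inputs (particulate matter, ozone, sulfur dioxide, apparent temperature) and five output membership functions, but the fuzzify-implicate-aggregate-defuzzify pipeline is the same.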
Ambrose, Sophie E; Walker, Elizabeth A; Unflat-Berry, Lauren M; Oleson, Jacob J; Moeller, Mary Pat
2015-01-01
The primary objective of this study was to examine the quantity and quality of caregiver talk directed to children who are hard of hearing (CHH) compared with children with normal hearing (CNH). For the CHH only, the study explored how caregiver input changed as a function of child age (18 months versus 3 years), which child and family factors contributed to variance in caregiver linguistic input at 18 months and 3 years, and how caregiver talk at 18 months related to child language outcomes at 3 years. Participants were 59 CNH and 156 children with bilateral, mild-to-severe hearing loss. When children were approximately 18 months and/or 3 years of age, caregivers and children participated in a 5-min semistructured, conversational interaction. Interactions were transcribed and coded for two features of caregiver input representing quantity (number of total utterances and number of total words) and four features representing quality (number of different words, mean length of utterance in morphemes, proportion of utterances that were high level, and proportion of utterances that were directing). In addition, at the 18-month visit, parents completed a standardized questionnaire regarding their child's communication development. At the 3-year visit, a clinician administered a standardized language measure. At the 18-month visit, the CHH were exposed to a greater proportion of directing utterances than the CNH. At the 3-year visit, there were significant differences between the CNH and CHH for number of total words and all four of the quality variables, with the CHH being exposed to fewer words and lower quality input. Caregivers generally provided higher quality input to CHH at the 3-year visit compared with the 18-month visit. At the 18-month visit, quantity variables, but not quality variables, were related to several child and family factors. At the 3-year visit, the variable most strongly related to caregiver input was child language. 
Longitudinal analyses indicated that quality, but not quantity, of caregiver linguistic input at 18 months was related to child language abilities at 3 years, with directing utterances accounting for significant unique variance in child language outcomes. Although caregivers of CHH increased their use of quality features of linguistic input over time, the differences when compared with CNH suggest that some caregivers may need additional support to provide their children with optimal language-learning environments. This is particularly important given the relationships that were identified between quality features of caregivers' linguistic input and children's language abilities. Family supports should include a focus on developing a style that is conversation-eliciting as opposed to directive.
A Framework to Guide the Assessment of Human-Machine Systems.
Stowers, Kimberly; Oglesby, James; Sonesh, Shirley; Leyva, Kevin; Iwig, Chelsea; Salas, Eduardo
2017-03-01
We have developed a framework for guiding measurement in human-machine systems. The assessment of safety and performance in human-machine systems often relies on direct measurement, such as tracking reaction time and accidents. However, safety and performance emerge from the combination of several variables. The assessment of precursors to safety and performance are thus an important part of predicting and improving outcomes in human-machine systems. As part of an in-depth literature analysis involving peer-reviewed, empirical articles, we located and classified variables important to human-machine systems, giving a snapshot of the state of science on human-machine system safety and performance. Using this information, we created a framework of safety and performance in human-machine systems. This framework details several inputs and processes that collectively influence safety and performance. Inputs are divided according to human, machine, and environmental inputs. Processes are divided into attitudes, behaviors, and cognitive variables. Each class of inputs influences the processes and, subsequently, outcomes that emerge in human-machine systems. This framework offers a useful starting point for understanding the current state of the science and measuring many of the complex variables relating to safety and performance in human-machine systems. This framework can be applied to the design, development, and implementation of automated machines in spaceflight, military, and health care settings. We present a hypothetical example in our write-up of how it can be used to aid in project success.
Creating a non-linear total sediment load formula using polynomial best subset regression model
NASA Astrophysics Data System (ADS)
Okcu, Davut; Pektas, Ali Osman; Uyumaz, Ali
2016-08-01
The aim of this study is to derive a new total sediment load formula that is more accurate and has fewer application constraints than the well-known formulae in the literature. The five best-known stream-power-concept sediment formulae approved by ASCE are used for benchmarking on a wide range of datasets that includes both field and flume (lab) observations. The dimensionless parameters of these widely used formulae are used as inputs in a new regression approach, called polynomial best subset regression (PBSR) analysis. The aim of the PBSR analysis is to fit and test all possible combinations of the input variables and select the best subset. All of the input variables, together with their second and third powers, are included in the regression to test possible relations between the explanatory variables and the dependent variable. In selecting the best subset, a multistep approach is used that depends on significance values and also on the multicollinearity degrees of the inputs. The new formula is compared to the others on a holdout dataset, and detailed performance investigations are conducted for the field and lab subsets within this holdout data. Different goodness-of-fit statistics are used, as they represent different perspectives on model accuracy. After the detailed comparisons were carried out, we identified the most accurate equation, which is also applicable to both flume and river data. On the field dataset in particular, the prediction performance of the proposed formula outperformed the benchmark formulations.
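The core PBSR idea (augment each input with its second and third powers, enumerate subsets, fit each by least squares, keep the best) can be sketched as below. This is a rough illustration under stated assumptions: the significance and multicollinearity screening described above is omitted, and the score is plain residual error.

```python
import itertools

def solve(a, b):
    """Gaussian elimination for the small normal-equation systems."""
    n = len(a)
    m = [row[:] + [b[i]] for i, row in enumerate(a)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(n):
            if r != col and m[col][col] != 0:
                f = m[r][col] / m[col][col]
                m[r] = [v - f * w for v, w in zip(m[r], m[col])]
    return [m[i][n] / m[i][i] for i in range(n)]

def ols_sse(cols, y):
    """Fit y ~ intercept + cols by least squares; return residual SSE."""
    x = [[1.0] + [c[i] for c in cols] for i in range(len(y))]
    k = len(x[0])
    xtx = [[sum(r[i] * r[j] for r in x) for j in range(k)] for i in range(k)]
    xty = [sum(r[i] * yi for r, yi in zip(x, y)) for i in range(k)]
    beta = solve(xtx, xty)
    return sum((yi - sum(b * v for b, v in zip(beta, r))) ** 2
               for r, yi in zip(x, y))

def best_subset(inputs, y, max_terms=2):
    """Enumerate subsets of {each input and its 2nd/3rd powers}."""
    terms = {}
    for name, v in inputs.items():
        for p in (1, 2, 3):  # the variable plus its 2nd and 3rd powers
            terms[f"{name}^{p}"] = [vi ** p for vi in v]
    return min((s for r in range(1, max_terms + 1)
                for s in itertools.combinations(terms, r)),
               key=lambda s: ols_sse([terms[t] for t in s], y))

x1 = [1.0, 2.0, 3.0, 4.0, 5.0]
x2 = [2.0, 1.0, 4.0, 3.0, 5.0]
y = [3.0 * v * v + 1.0 for v in x1]  # target truly quadratic in x1
print(best_subset({"x1": x1, "x2": x2}, y))  # the x1^2 term is selected
```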
Calibrating multiple isotopic proxies in a modern aragonite speleothem from northeast India
NASA Astrophysics Data System (ADS)
Ronay, E.; Oster, J. L.; Sharp, W. D.; Marks, N.; Erhardt, A.; Breitenbach, S. F. M.
2017-12-01
Uranium, strontium, and calcium isotope ratios in calcite speleothems are used as proxies for water-soil-rock interactions and prior calcite precipitation, and thus provide information about effective rainfall amount variations, primarily in semi-arid or highly seasonal regions. However, less is known about how these proxies function in humid regions and in aragonite speleothems. In this study, we use meteorological data to calibrate (234U/238U)i and 87Sr/86Sr in a modern aragonite speleothem from northeast India, the rainiest place on Earth, to determine how these proxies reflect effective monsoon rainfall amount. MAW-0201 is an annually laminated aragonite stalagmite that grew from 1960 to 2013 in Mawmluh Cave, Meghalaya, India. Rainfall here is extremely seasonal due to the Indian Summer Monsoon (ISM), which brings several meters of rain to the region each summer, but with inter-annual variability in total rainfall. The δ18O in Mawmluh dripwater and speleothems reflects moisture source and transport, rather than rainfall amount. Variations in Mg, U, and Ba concentrations in MAW-0201 show seasonal and multi-annual variability. U and Mg are closely correlated, but multi-year periods show significant anti-correlation. The Mg and U distribution coefficients in calcite and aragonite indicate that correlated periods are times of prior calcite precipitation (PCP) and anti-correlated periods are times of prior aragonite precipitation (PAP) in the epikarst. We use δ44/40Ca to test this hypothesis, as Ca isotopes fractionate differently during calcite and aragonite precipitation and speleothem δ44/40Ca will record unique PAP and PCP fingerprints. We propose that such shifts from PCP to PAP reflect hydrologic variability and/or flow path changes, which may provide a useful tool for understanding epikarst hydrology but may also be a complicating factor in speleothem-based paleoclimate interpretations.
Preliminary (234U/238U)i (always <1) and 87Sr/86Sr spanning 1991-2009 each show significant variability outside of analytical error. (234U/238U)i displays a decadal trend, gradually increasing until 2000 and decreasing to the end of the record. Several years in the 87Sr/86Sr record have anomalously high values, which may reflect increased sea spray input and provide unique information on the wind component of the ISM.
NASA Astrophysics Data System (ADS)
Hsu, S.-C.; Gong, G.-C.; Shiah, F.-K.; Hung, C.-C.; Kao, S.-J.; Zhang, R.; Chen, W.-N.; Chen, C.-C.; Chou, C. C.-K.; Lin, Y.-C.; Lin, F.-J.; Lin, S.-H.
2014-08-01
Iron and phosphorus are essential to marine microorganisms in vast regions of the world's oceans. Atmospheric inputs are important allochthonous sources of Fe and P. The variability in airborne Fe deposition is hypothesized to have played an important role in past glacial-interglacial cycles, contributing to the variability in atmospheric CO2 and, ultimately, the climate. Understanding the mechanisms underlying the mobilization of airborne Fe and P from insoluble to soluble forms is critical for evaluating the biogeochemical effects of these elements. In this study, we present a robust power-law correlation between fractional Fe solubility and the non-sea-salt-sulfate / total-Fe (nss-sulfate / FeT) molar ratio that is independent of the distinct natural and/or anthropogenic sources of airborne Fe over the South China Sea, an area that receives Asian dust, pollution outflows, and Southeast Asian biomass burning. This correlation also holds for nitrate and total acids, demonstrating the significance of acid processing in enhancing Fe mobilization. Similar correlations are found for P, though they are source dependent. These relationships serve as straightforward parameterizations that can be directly incorporated into available atmosphere-ocean coupled models, facilitating the assessment of Fe and P fertilization effects. Although biomass burning activity may supply Fe to the bioavailable Fe pool, pyrogenic soils, rather than the burned plants, are possibly the main contributors. This finding warrants a multidisciplinary investigation that integrates atmospheric observations with the resulting biogeochemistry in the South China Sea, which is influenced by monsoon-driven atmospheric forcing and nutrient dynamics.
Response to a periodic stimulus in a perfect integrate-and-fire neuron model driven by colored noise
NASA Astrophysics Data System (ADS)
Mankin, Romi; Rekker, Astrid
2016-12-01
The output interspike interval statistics of a stochastic perfect integrate-and-fire neuron model driven by an additive exogenous periodic stimulus is considered. The effect of temporally correlated random activity of synaptic inputs is modeled by an additive symmetric dichotomous noise. Using a first-passage-time formulation, exact expressions for the output interspike interval density and for the serial correlation coefficient are derived in the nonsteady regime, and their dependence on input parameters (e.g., the noise correlation time and amplitude as well as the frequency of an input current) is analyzed. It is shown that an interplay of a periodic forcing and colored noise can cause a variety of nonequilibrium cooperation effects, such as sign reversals of the interspike interval correlations versus noise-switching rate as well as versus the frequency of periodic forcing, a power-law-like decay of oscillations of the serial correlation coefficients in the long-lag limit, amplification of the output signal modulation in the instantaneous firing rate of the neural response, etc. The features of spike statistics in the limits of slow and fast noises are also discussed.
Effects of control inputs on the estimation of stability and control parameters of a light airplane
NASA Technical Reports Server (NTRS)
Cannaday, R. L.; Suit, W. T.
1977-01-01
The maximum likelihood parameter estimation technique was used to determine the values of stability and control derivatives from flight test data for a low-wing, single-engine, light airplane. Several input forms were used during the tests to investigate the consistency of the parameter estimates as it relates to input form. Consistency was compared by using the ensemble variance and the estimated Cramer-Rao lower bound. In addition, the relationship between inputs and parameter correlations was investigated. Results from the stabilator inputs are inconclusive, but a sequence of rudder input followed by aileron input, or aileron followed by rudder, gave more consistent estimates than rudder or aileron inputs individually. Square-wave inputs also appeared to provide slightly better consistency in the parameter estimates than sine-wave inputs.
Sobol' sensitivity analysis for stressor impacts on honeybee ...
We employ Monte Carlo simulation and nonlinear sensitivity analysis techniques to describe the dynamics of a bee exposure model, VarroaPop. Daily simulations of hive population trajectories are performed, taking into account queen strength, foraging success, mite impacts, weather, colony resources, population structure, and other important variables. This allows us to test the effects of defined pesticide exposure scenarios against control simulations that lack pesticide exposure. The daily resolution of the model also allows us to identify sensitivity metrics conditionally. We use the variance-based global decomposition sensitivity analysis method, Sobol', to assess first- and second-order parameter sensitivities within VarroaPop, allowing us to determine how variance in the output is attributed to each of the input variables across different exposure scenarios. Simulations with VarroaPop indicate that queen strength, forager life span and pesticide toxicity parameters are consistent, critical inputs for colony dynamics. Further analysis also reveals that the relative importance of these parameters fluctuates throughout the simulation period according to the status of other inputs. Our preliminary results show that model variability is conditional and can be attributed to different parameters at different timescales. By using sensitivity analysis to assess model output and variability, calibrations of simulation models can be better informed to yield more
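The first-order Sobol' indices that this kind of analysis attributes to each input can be estimated with a standard pick-freeze Monte Carlo scheme. The sketch below is illustrative (it is not the VarroaPop analysis itself) and assumes independent inputs rescaled to the unit hypercube.

```python
import numpy as np

def sobol_first_order(model, n_inputs, n=50_000, seed=1):
    """Pick-freeze Monte Carlo estimate of first-order Sobol' indices
    S_i = V(E[Y|X_i]) / V(Y) for a model with independent U(0,1) inputs."""
    rng = np.random.default_rng(seed)
    A = rng.random((n, n_inputs))        # first sample matrix
    B = rng.random((n, n_inputs))        # independent second matrix
    yA, yB = model(A), model(B)
    var_y = yA.var()
    S = np.empty(n_inputs)
    for i in range(n_inputs):
        ABi = B.copy()
        ABi[:, i] = A[:, i]              # B with column i frozen to A's values
        # Saltelli-style estimator of V(E[Y|X_i])
        S[i] = np.mean(yA * (model(ABi) - yB)) / var_y
    return S
```

For an additive test model Y = X1 + 2·X2 with uniform inputs, the exact indices are S1 = 0.2 and S2 = 0.8, which the estimator recovers to within Monte Carlo error.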
Nonlinear information fusion algorithms for data-efficient multi-fidelity modelling.
Perdikaris, P; Raissi, M; Damianou, A; Lawrence, N D; Karniadakis, G E
2017-02-01
Multi-fidelity modelling enables accurate inference of quantities of interest by synergistically combining realizations of low-cost/low-fidelity models with a small set of high-fidelity observations. This is particularly effective when the low- and high-fidelity models exhibit strong correlations, and can lead to significant computational gains over approaches that solely rely on high-fidelity models. However, in many cases of practical interest, low-fidelity models can only be well correlated to their high-fidelity counterparts for a specific range of input parameters, and potentially return wrong trends and erroneous predictions if probed outside of their validity regime. Here we put forth a probabilistic framework based on Gaussian process regression and nonlinear autoregressive schemes that is capable of learning complex nonlinear and space-dependent cross-correlations between models of variable fidelity, and can effectively safeguard against low-fidelity models that provide wrong trends. This introduces a new class of multi-fidelity information fusion algorithms that provide a fundamental extension to the existing linear autoregressive methodologies, while still maintaining the same algorithmic complexity and overall computational cost. The performance of the proposed methods is tested in several benchmark problems involving both synthetic and real multi-fidelity datasets from computational fluid dynamics simulations.
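The nonlinear autoregressive idea above, learning the high-fidelity response as a nonlinear function of both the input and the low-fidelity prediction, can be illustrated without Gaussian processes. This toy sketch substitutes polynomial least squares for the GP regression used in the paper, so it conveys the information-fusion structure but none of the uncertainty quantification.

```python
import numpy as np

def narg_fit(x_lo, y_lo, x_hi, y_hi, degree=3):
    """Toy nonlinear autoregressive fusion: model the high-fidelity
    response as a linear combination of nonlinear features of
    (x, f_lo(x)), where f_lo is a polynomial fit to the abundant
    low-fidelity data. A stand-in for the GP-based scheme (NARGP)."""
    lo_coef = np.polyfit(x_lo, y_lo, degree)
    f_lo = lambda x: np.polyval(lo_coef, x)

    def feats(x):
        # Nonlinear cross-terms between the input and the low-fidelity output.
        z = f_lo(x)
        return np.column_stack([np.ones_like(x), x, z, x * z, x**2, z**2])

    coef, *_ = np.linalg.lstsq(feats(x_hi), y_hi, rcond=None)
    return lambda x: feats(x) @ coef
```

Because the high-fidelity surrogate sees f_lo(x) as a feature, it can learn nonlinear cross-correlations (e.g. y_hi proportional to f_lo squared) that a linear autoregressive scheme would miss.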
NASA Astrophysics Data System (ADS)
Bahreini, Maryam; Hosseinimakarem, Zahra; Hassan Tavassoli, Seyed
2012-09-01
Laser-induced breakdown spectroscopy (LIBS) is used to investigate the possible effect of osteoporosis on the elemental composition of fingernails, and the ability to classify healthy, osteopenic, and osteoporotic subjects based on their fingernail spectra is examined. Forty-six atomic and ionic emission lines belonging to 13 elements, dominated by calcium and magnesium, have been identified. Measurements are carried out on fingernail clippings of 99 subjects: 27 healthy, 47 osteopenic, and 25 osteoporotic. Pearson correlations between the spectral intensities of different fingernail elements, age, and bone mineral density (BMD) are calculated. Correlations are observed between the line intensities of some elements, such as sodium and potassium, calcium and iron, and magnesium and silicon, and also between some fingernail elements, BMD, and age. Although some of these correlations are weak, some information about mineral metabolism can be deduced from them. Discrimination between nail samples of healthy, osteopenic, and osteoporotic subjects is shown to be feasible to some extent by discriminant function analysis using the 46 atomic emission lines of the LIBS spectra as input variables. The results of this study provide some evidence of an association between osteoporosis and the elemental composition of fingernails as measured by LIBS.
NASA Astrophysics Data System (ADS)
Rahmati, Mehdi
2017-08-01
Developing accurate and reliable pedotransfer functions (PTFs) to predict soil characteristics that are not readily available is one of the most pressing topics in soil science, and selecting appropriate predictors is a crucial factor in PTF development. The group method of data handling (GMDH), which finds an approximate relationship between a set of input and output variables, not only provides an explicit procedure for selecting the most essential PTF input variables, but also yields more accurate and reliable estimates than other commonly applied methodologies. The current research therefore applied GMDH, in comparison with multivariate linear regression (MLR) and artificial neural networks (ANN), to develop several PTFs that predict point values of cumulative soil infiltration at specific time intervals (0.5-45 min) from readily available soil characteristics (RACs). Soil infiltration curves and several RACs, including the soil primary particles (clay (CC), silt (Si), and sand (Sa)), saturated hydraulic conductivity (Ks), bulk (Db) and particle (Dp) densities, organic carbon (OC), wet-aggregate stability (WAS), electrical conductivity (EC), and soil antecedent (θi) and field-saturated (θfs) water contents, were measured at 134 points in the Lighvan watershed, northwest Iran. PTFs were then developed with the GMDH, MLR, and ANN methodologies to predict cumulative infiltration from two sets of selected RACs, one including and one excluding Ks. On the test data, PTFs developed by the GMDH and MLR procedures using all soil RACs, including Ks, gave more accurate (E values of 0.673-0.963) and more reliable (CV values below 11 percent) predictions of cumulative infiltration at the specific time steps. In contrast, the ANN procedure had lower accuracy (E values of 0.356-0.890) and reliability (CV values up to 50 percent) than GMDH and MLR.
The results also revealed that excluding Ks from the input variable list decreased PTF accuracy by around 30 percent for all procedures. However, excluding Ks appears to yield more practical PTFs, especially for the GMDH network, because the remaining input variables are less time-consuming to measure than Ks. In general, it is concluded that GMDH provides more accurate and reliable estimates of cumulative infiltration (a non-readily available soil characteristic) with a minimal set of input variables (2-4) and can be a promising strategy for modelling soil infiltration, combining the advantages of the ANN and MLR methodologies.
Stability of radiomic features in CT perfusion maps
NASA Astrophysics Data System (ADS)
Bogowicz, M.; Riesterer, O.; Bundschuh, R. A.; Veit-Haibach, P.; Hüllner, M.; Studer, G.; Stieb, S.; Glatz, S.; Pruschy, M.; Guckenberger, M.; Tanadini-Lang, S.
2016-12-01
This study aimed to identify a set of stable radiomic parameters in CT perfusion (CTP) maps with respect to CTP calculation factors and image discretization, as an input for future prognostic models of local tumor response to chemo-radiotherapy. Pre-treatment CTP images of eleven patients with oropharyngeal carcinoma and eleven patients with non-small cell lung cancer (NSCLC) were analyzed. 315 radiomic parameters were studied per perfusion map (blood volume, blood flow, and mean transit time). Radiomics robustness was investigated with respect to the potentially standardizable (image discretization method, Hounsfield unit (HU) threshold, voxel size, and temporal resolution) and non-standardizable (artery contouring and noise threshold) perfusion calculation factors using the intraclass correlation (ICC). To gain added value for our model, radiomic parameters correlated with tumor volume, a well-known predictive factor for local tumor response to chemo-radiotherapy, were excluded from the analysis. The remaining stable radiomic parameters were grouped according to inter-parameter Spearman correlations, and for each group the parameter with the highest ICC was included in the final set. The acceptance levels were 0.9 for the ICC and 0.7 for the correlation. Image discretization using a fixed number of bins or fixed intervals gave a similar number of stable radiomic parameters (around 40%). The potentially standardizable factors introduced more variability into the radiomic parameters than the non-standardizable ones, with instability rates of 56-98% and 43-58%, respectively. The highest variability was observed for voxel size (instability rate >97% for both patient cohorts). Without standardization of the CTP calculation factors, none of the studied radiomic parameters were stable. After standardization with respect to the non-standardizable factors, ten radiomic parameters were stable for both patient cohorts after correction for inter-parameter correlations.
Voxel size, image discretization, HU threshold and temporal resolution have to be standardized to build a reliable predictive model based on CTP radiomics analysis.
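The robustness screening described above hinges on the intraclass correlation. A minimal one-way random-effects ICC(1,1) can be computed from the between- and within-subject mean squares; this is an illustrative sketch (the study applied an acceptance level of ICC ≥ 0.9), not the authors' pipeline.

```python
import numpy as np

def icc_one_way(ratings):
    """One-way random-effects ICC(1,1) for an (n_subjects, k_measurements)
    array: ICC = (MSB - MSW) / (MSB + (k - 1) * MSW), where MSB and MSW
    are the between- and within-subject mean squares."""
    ratings = np.asarray(ratings, dtype=float)
    n, k = ratings.shape
    subject_means = ratings.mean(axis=1)
    grand_mean = ratings.mean()
    msb = k * ((subject_means - grand_mean) ** 2).sum() / (n - 1)
    msw = ((ratings - subject_means[:, None]) ** 2).sum() / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)
```

Here each "subject" would be a radiomic parameter value for one patient and each "measurement" its value under a different perfusion calculation setting; ICC near 1 means the setting barely perturbs the parameter.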
NASA Astrophysics Data System (ADS)
Schichtel, Bret A.; Barna, Michael G.; Gebhart, Kristi A.; Malm, William C.
The Big Bend Regional Aerosol and Visibility Observational (BRAVO) study was designed to determine the sources of haze at Big Bend National Park, Texas, using a combination of source and receptor models. BRAVO included an intensive monitoring campaign from July to October 1999, with perfluorocarbon tracers released from four locations at distances of 230-750 km from Big Bend and measured at 24 sites. The tracer measurements near Big Bend were used to evaluate the dispersion mechanisms in the REMSAD Eulerian model and the CAPITA Monte Carlo (CMC) Lagrangian model used in BRAVO. Both models used 36 km MM5 wind fields as input. The CMC model also used a combination of routinely available 80 and 190 km wind fields from the National Weather Service's National Centers for Environmental Prediction (NCEP) as input. A model's performance is limited by inherent uncertainties due to errors in the tracer concentrations and the model's inability to simulate sub-resolution variability. The range of this inherent uncertainty was estimated by comparing tracer data at nearby monitoring sites. It was found that the REMSAD and CMC models, using the MM5 wind field, produced performance statistics generally within this inherent uncertainty. The CMC simulation using the NCEP wind fields could reproduce the timing of tracer impacts at Big Bend, but not the concentration values, owing to a systematic underestimation that appears to be partly due to excessive vertical dilution from high mixing depths. The model simulations were more sensitive to the input wind fields than to the models' different dispersion mechanisms. Comparisons of REMSAD to CMC tracer simulations using the MM5 wind fields had correlations between 0.75 and 0.82, depending on the tracer, whereas the tracer simulations using the two wind fields in the CMC model had correlations between 0.37 and 0.5.
How model and input uncertainty impact maize yield simulations in West Africa
NASA Astrophysics Data System (ADS)
Waha, Katharina; Huth, Neil; Carberry, Peter; Wang, Enli
2015-02-01
Crop models are common tools for simulating crop yields and crop production in studies of food security and global change. However, various uncertainties exist, not only in model design and model parameters, but also, and perhaps even more importantly, in the soil, climate, and management input data. We analyze the performance of the point-scale crop model APSIM and the global-scale crop model LPJmL under different climate and soil conditions and different agricultural management in the low-input maize-growing areas of Burkina Faso, West Africa. We test the models' response to different levels of input information, from little to detailed information on soil, climate (1961-2000), and agricultural management, and compare the models' ability to represent the observed spatial (between locations) and temporal (between years) variability in crop yields. We found that the resolution of the soil, climate, and management information influences the simulated crop yields in both models. However, the differences between models are larger than those between input datasets, and larger between simulations with different climate and management information than between simulations with different soil information. The observed spatial variability is represented well by both models even with little information on soils and management, although APSIM simulates greater variation between individual locations than LPJmL. The agreement between simulated and observed temporal variability is lower, owing to non-climatic factors, e.g., investment in agricultural research and development between 1987 and 1991 in Burkina Faso, which resulted in a doubling of maize yields. The findings of our study highlight the importance of scale and model choice and show that the most detailed input data do not necessarily improve model performance.
Konstantinidis, Spyridon; Titchener-Hooker, Nigel; Velayudhan, Ajoy
2017-08-01
Bioprocess development studies often involve the investigation of numerical and categorical inputs via the adoption of Design of Experiments (DoE) techniques. An attractive alternative is the deployment of a grid-compatible Simplex variant, which has been shown to yield optima rapidly and consistently. In this work, the method is combined with dummy variables and deployed in three case studies whose search spaces comprise both categorical and numerical inputs, a situation intractable for traditional Simplex methods. The first study employs in silico data and lays out the dummy-variable methodology. The latter two employ experimental data from chromatography-based studies performed with the filter-plate and miniature-column High Throughput (HT) techniques. The solute of interest in the former case study was a monoclonal antibody, whereas the latter dealt with the separation of a binary system of model proteins. The implemented approach prevented the Simplex method from becoming stranded at local optima through arbitrary handling of the categorical inputs, and allowed for the concurrent optimization of numerical and categorical, multilevel and/or dichotomous, inputs. The deployment of the Simplex method combined with dummy variables was therefore entirely successful in identifying and characterizing global optima in all three case studies. The Simplex-based method was further shown to be of equivalent efficiency to a DoE-based approach, represented here by D-Optimal designs. The latter approach failed, however, to capture trends or identify optima, and led to poor operating conditions. It is suggested that the Simplex variant is suited to development activities involving numerical and categorical inputs in early bioprocess development. © 2017 The Authors. Biotechnology Journal published by WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Frequency spectrum analyzer with phase-lock
Boland, Thomas J.
1984-01-01
A frequency-spectrum analyzer with phase-lock for analyzing the frequency and amplitude of an input signal comprises a voltage-controlled oscillator (VCO), driven by a ramp generator, and a phase error detector circuit. The phase error detector circuit measures the difference in phase between the VCO and the input signal and drives the VCO, momentarily locking it in phase with the input signal. The input signal and the output of the VCO are fed into a correlator, which transforms the input signal to the frequency domain while providing an accurate absolute amplitude measurement of each frequency component of the input signal.
Sympathovagal imbalance in hyperthyroidism.
Burggraaf, J; Tulen, J H; Lalezari, S; Schoemaker, R C; De Meyer, P H; Meinders, A E; Cohen, A F; Pijl, H
2001-07-01
We assessed sympathovagal balance in thyrotoxicosis. Fourteen patients with Graves' hyperthyroidism were studied before and after 7 days of treatment with propranolol (40 mg 3 times a day) and in the euthyroid state. Data were compared with those obtained in a group of age-, sex-, and weight-matched controls. Autonomic inputs to the heart were assessed by power spectral analysis of heart rate variability. Systemic exposure to sympathetic neurohormones was estimated on the basis of 24-h urinary catecholamine excretion. The spectral power in the high-frequency domain was considerably reduced in hyperthyroid patients, indicating diminished vagal inputs to the heart. Increased heart rate and mid-frequency/high-frequency power ratio in the presence of reduced total spectral power and increased urinary catecholamine excretion strongly suggest enhanced sympathetic inputs in thyrotoxicosis. All abnormal features of autonomic balance were completely restored to normal in the euthyroid state. beta-Adrenoceptor antagonism reduced heart rate in hyperthyroid patients but did not significantly affect heart rate variability or catecholamine excretion. This is in keeping with the concept of a joint disruption of sympathetic and vagal inputs to the heart underlying changes in heart rate variability. Thus thyrotoxicosis is characterized by profound sympathovagal imbalance, brought about by increased sympathetic activity in the presence of diminished vagal tone.
Joint transform correlators with spatially incoherent illumination
NASA Astrophysics Data System (ADS)
Bykovsky, Yuri A.; Karpiouk, Andrey B.; Markilov, Anatoly A.; Rodin, Vladislav G.; Starikov, Sergey N.
1997-03-01
Two variants of joint transform correlators with monochromatic, spatially incoherent illumination are considered. Fourier holograms of the reference and recognized images are recorded, simultaneously or separately in time, on the same spatial light modulator directly by monochromatic spatially incoherent light. To create the mutual-correlation signal of the images, a nonlinear transformation must be executed when the hologram is illuminated by coherent light. In the first correlator scheme, this was achieved by a double pass of the restoring coherent wave through the hologram; in the second variant, the nonlinearity of the characteristic of the spatial light modulator used for hologram recording was exploited. Experimental schemes and results on processing test images with both variants of joint transform correlators with monochromatic spatially incoherent illumination are presented. The use of spatially incoherent light at the input of joint transform correlators relaxes the requirements on the optical quality of the elements, reduces the accuracy required in element positioning, and expands the range of devices suitable for image input into correlators.
Sojoudi, Alireza; Goodyear, Bradley G
2016-12-01
Spontaneous fluctuations of blood-oxygenation level-dependent functional magnetic resonance imaging (BOLD fMRI) signals are highly synchronous between brain regions that serve similar functions. This provides a means to investigate functional networks; however, most analysis techniques assume functional connections are constant over time. This may be problematic in the case of neurological disease, where functional connections may be highly variable. Recently, several methods have been proposed to determine moment-to-moment changes in the strength of functional connections over an imaging session (so-called dynamic connectivity). Here, a novel analysis framework based on a hierarchical observation modeling approach is proposed to permit statistical inference of the presence of dynamic connectivity. A two-level linear model composed of overlapping sliding windows of fMRI signals, incorporating the fact that overlapping windows are not independent, is described. To test this approach, datasets were synthesized in which functional connectivity was either constant (significant or insignificant) or modulated by an external input. The method successfully determines the statistical significance of a functional connection in phase with the modulation, and it exhibits greater sensitivity and specificity in detecting regions with variable connectivity than sliding-window correlation analysis. For real data, the technique possesses greater reproducibility and provides a more discriminative estimate of dynamic connectivity than sliding-window correlation analysis. Hum Brain Mapp 37:4566-4580, 2016. © 2016 Wiley Periodicals, Inc.
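The sliding-window correlation baseline against which the proposed framework is compared can be computed directly; a minimal sketch follows (the hierarchical observation model itself is more involved and is not reproduced here).

```python
import numpy as np

def sliding_window_corr(x, y, win, step=1):
    """Pearson correlation between two time series computed in
    overlapping windows of length `win`, advanced by `step` samples.
    Returns one correlation value per window position."""
    out = []
    for start in range(0, len(x) - win + 1, step):
        xs = x[start:start + win]
        ys = y[start:start + win]
        out.append(np.corrcoef(xs, ys)[0, 1])
    return np.array(out)
```

Because consecutive windows share most of their samples, successive correlation values are strongly dependent, which is exactly the issue the abstract's two-level model is designed to account for when testing significance.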
NASA Astrophysics Data System (ADS)
Johansen, Kasper; Grove, James; Denham, Robert; Phinn, Stuart
2013-01-01
Stream bank condition is an important physical-form indicator for streams, related to the environmental condition of riparian corridors. This research developed and applied an approach for mapping bank condition from airborne light detection and ranging (LiDAR) and high-spatial-resolution optical image data in a temperate forest/woodland/urban environment. Field observations of bank condition were related to LiDAR- and optical-image-derived variables, including bank slope, plant projective cover, bank-full width, valley confinement, bank height, bank top crenulation, and ground vegetation cover. Image-based variables correlated with the field measurements of stream bank condition were used as input to a cumulative logistic regression model to estimate and map bank condition. The highest correlation was achieved between field-assessed bank condition and image-derived average bank slope (R2=0.60, n=41), ground vegetation cover (R=0.43, n=41), bank width/height ratio (R=0.41, n=41), and valley confinement (producer's accuracy=100%, n=9). Cross-validation showed an average misclassification error of 0.95 on an ordinal scale from 0 to 4 using the developed model. This approach was developed to support the remotely sensed mapping of stream bank condition for 26,000 km of streams in Victoria, Australia, from 2010 to 2012.
NASA Astrophysics Data System (ADS)
De Caires, Sunshine A.; Wuddivira, Mark N.; Bekele, Isaac
2014-10-01
Cocoa remains in the same field for decades, resulting in plantations dominated by aging trees growing on variable and depleted soils. We determined the spatio-temporal variability of key soil properties in a 5.81 ha field of the International Cocoa Genebank, Trinidad, using geophysical methods. Multi-year (2008-2009) measurements of apparent electrical conductivity at 0-0.75 m (shallow) and 0.75-1.5 m (deep) were conducted. Apparent electrical conductivity at the deep and shallow depths gave the strongest linear correlations with clay-silt content (R = 0.67 and R = 0.78, respectively) and soil solution electrical conductivity (R = 0.76 and R = 0.60, respectively). Spearman rank correlation coefficients ranged between 0.89-0.97 and 0.81-0.95 for the deep and shallow measurements, respectively, signifying a strong linear dependence between measurement days. Thus, in the humid tropics, cocoa fields with a thick organic litter layer and relatively dense understory cover experience minimal fluctuations in the transient properties of soil water and temperature at the topsoil, resulting in similarly stable apparent electrical conductivity at the shallow and deep depths. Therefore, shallow apparent electrical conductivity, which covers the depth where cocoa feeder roots concentrate, can be used as a fertility indicator and to delineate soil zones for efficient application of inputs and management of cocoa fields.
Multivariate methods for indoor PM10 and PM2.5 modelling in naturally ventilated schools buildings
NASA Astrophysics Data System (ADS)
Elbayoumi, Maher; Ramli, Nor Azam; Md Yusof, Noor Faizah Fitri; Yahaya, Ahmad Shukri Bin; Al Madhoun, Wesam; Ul-Saufie, Ahmed Zia
2014-09-01
In this study, the concentrations of PM10, PM2.5, CO and CO2 and meteorological variables (wind speed, air temperature, and relative humidity) were employed to predict the annual and seasonal indoor concentrations of PM10 and PM2.5 using multivariate statistical methods. The data were collected in twelve naturally ventilated schools in Gaza Strip (Palestine) from October 2011 to May 2012 (academic year). The bivariate correlation analysis showed that indoor PM10 and PM2.5 were highly positively correlated with the outdoor concentrations of PM10 and PM2.5. Further, multiple linear regression (MLR) was used for modelling, and R2 values were determined as 0.62 and 0.84 for PM10 and PM2.5, respectively. The performance indicators of the MLR models indicated that the predictions of the PM10 and PM2.5 annual models were better than those of the seasonal models. In order to reduce the number of input variables, principal component analysis (PCA) and principal component regression (PCR) were applied using annual data. The predicted R2 values were 0.40 and 0.73 for PM10 and PM2.5, respectively. The PM10 models (MLR and PCR) show a tendency to underestimate indoor PM10 concentrations, as they do not take into account the occupants' activities, which strongly affect the indoor concentrations during class hours.
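Principal component regression of the kind applied above can be sketched in a few lines: project the centered predictors onto the leading principal components, regress in that subspace, and map the coefficients back. The data below are synthetic stand-ins, not the Gaza Strip measurements:

```python
import numpy as np

def pcr_fit_predict(X, y, n_components, X_new):
    """Principal component regression: project the centered predictors onto
    the leading principal components, fit OLS in that subspace, and map the
    coefficients back to the original variables."""
    x_mean, y_mean = X.mean(axis=0), y.mean()
    Xc = X - x_mean
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    V = Vt[:n_components].T            # loadings of the leading components
    T = Xc @ V                         # component scores
    gamma, *_ = np.linalg.lstsq(T, y - y_mean, rcond=None)
    beta = V @ gamma                   # coefficients in predictor space
    return y_mean + (X_new - x_mean) @ beta

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 5))
X[:, 3] = X[:, 0] + 0.01 * rng.normal(size=60)   # a strongly collinear pair
y = 2.0 * X[:, 0] - X[:, 1] + 0.1 * rng.normal(size=60)
pred = pcr_fit_predict(X, y, n_components=4, X_new=X)
r2 = 1.0 - np.sum((y - pred) ** 2) / np.sum((y - y.mean()) ** 2)
```

Dropping the trailing components discards the near-degenerate direction created by the collinear pair, which is exactly how PCR reduces the number of effective input variables.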
Quantum Correlations in Nonlocal Boson Sampling.
Shahandeh, Farid; Lund, Austin P; Ralph, Timothy C
2017-09-22
Determination of the quantum nature of correlations between two spatially separated systems plays a crucial role in quantum information science. Of particular interest are the questions of whether and how these correlations enable quantum information protocols to be more powerful. Here, we report on a distributed quantum computation protocol in which the input and output quantum states are considered to be classically correlated in quantum informatics. Nevertheless, we show that the correlations between the outcomes of the measurements on the output state cannot be efficiently simulated using classical algorithms. Crucially, at the same time, local measurement outcomes can be efficiently simulated on classical computers. We show that the only known classicality criterion violated by the input and output states in our protocol is the one used in quantum optics, namely, phase-space nonclassicality. As a result, we argue that the global phase-space nonclassicality inherent within the output state of our protocol represents true quantum correlations.
ERIC Educational Resources Information Center
Meakins, Felicity; Wigglesworth, Gillian
2013-01-01
In situations of language endangerment, the ability to understand a language tends to persevere longer than the ability to speak it. As a result, the possibility of language revival remains high even when few speakers remain. Nonetheless, this potential requires that those with high levels of comprehension received sufficient input as children for…
Sensitivity and uncertainty of input sensor accuracy for grass-based reference evapotranspiration
USDA-ARS?s Scientific Manuscript database
Quantification of evapotranspiration (ET) in agricultural environments is becoming of increasing importance throughout the world, thus understanding input variability of relevant sensors is of paramount importance as well. The Colorado Agricultural and Meteorological Network (CoAgMet) and the Florid...
Assessment of input uncertainty by seasonally categorized latent variables using SWAT
USDA-ARS?s Scientific Manuscript database
Watershed processes have been explored with sophisticated simulation models for the past few decades. It has been stated that uncertainty attributed to alternative sources such as model parameters, forcing inputs, and measured data should be incorporated during the simulation process. Among varyin...
Cognitive Agility Measurement in a Complex Environment
2017-04-01
…correlate with their corresponding historical psychology tests? EEA3.1: Does the variable for Make Goal cognitive flexibility correlate with the Stroop Test cognitive flexibility variable? EEA3.2: Does the variable for Make Goal cognitive openness correlate with the AUT cognitive openness variable? EEA3.3: Does the variable for Make Goal focused attention correlate with the Go, No Go Paradigm focused attention variable?
Zippo, Antonio G.; Biella, Gabriele E. M.
2015-01-01
Current developments in neuronal physiology are unveiling novel roles for dendrites. Experiments have shown mechanisms of non-linear synaptic NMDA dependent activations, able to discriminate input patterns through the waveforms of the excitatory postsynaptic potentials. Contextually, the synaptic clustering of inputs is the principal cellular strategy to separate groups of common correlated inputs. Dendritic branches appear to work as independent discriminating units of inputs potentially reflecting an extraordinary repertoire of pattern memories. However, it is unclear how these observations could impact our comprehension of the structural correlates of memory at the cellular level. This work investigates the discrimination capabilities of neurons through computational biophysical models to extract a predicting law for the dendritic input discrimination capability (M). By this rule we compared neurons from a neuron reconstruction repository (neuromorpho.org). Comparisons showed that primate neurons were not supported by an equivalent M preeminence and that M is not uniformly distributed among neuron types. Remarkably, neocortical neurons had substantially less memory capacity in comparison to those from non-cortical regions. In conclusion, the proposed rule predicts the inherent neuronal spatial memory gathering potentially relevant anatomical and evolutionary considerations about the brain cytoarchitecture. PMID:26100354
Speaker Invariance for Phonetic Information: an fMRI Investigation
Salvata, Caden; Blumstein, Sheila E.; Myers, Emily B.
2012-01-01
The current study explored how listeners map the variable acoustic input onto a common sound structure representation while being able to retain phonetic detail to distinguish among the identity of talkers. An adaptation paradigm was utilized to examine areas which showed an equal neural response (equal release from adaptation) to phonetic change when spoken by the same speaker and when spoken by two different speakers, and insensitivity (failure to show release from adaptation) when the same phonetic input was spoken by a different speaker. Neural areas which showed speaker invariance were located in the anterior portion of the middle superior temporal gyrus bilaterally. These findings provide support for the view that speaker normalization processes allow for the translation of a variable speech input to a common abstract sound structure. That this process appears to occur early in the processing stream, recruiting temporal structures, suggests that this mapping takes place prelexically, before sound structure input is mapped on to lexical representations. PMID:23264714
The input and output management of solid waste using DEA models: A case study at Jengka, Pahang
NASA Astrophysics Data System (ADS)
Mohamed, Siti Rosiah; Ghazali, Nur Fadzrina Mohd; Mohd, Ainun Hafizah
2017-08-01
Data Envelopment Analysis (DEA), as a tool for obtaining performance indices, has been used extensively across many organizational sectors. Improving the efficiency of Decision Making Units (DMUs) can be impractical because some inputs and outputs are uncontrollable, and in certain situations this produces weak efficiency scores that often reflect the impact of the operating environment. Based on data from Alam Flora Sdn. Bhd Jengka, this study determines the efficiency of solid waste management (SWM) in the town of Jengka, Pahang using the CCRI and CCRO models of DEA and a duality formulation with average input and output vectors. Three input variables (collection length in meters, collection frequency per week in hours, and number of garbage trucks) and two output variables (collection frequency and total solid waste collected in kilograms) are analyzed. In conclusion, only three of the 23 roads are efficient, achieving an efficiency score of 1; the other 20 roads are managed inefficiently.
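An input-oriented CCR efficiency score can be computed per DMU from the multiplier-form linear program. This is a generic sketch with made-up road data (not the Alam Flora figures), and it assumes SciPy's linprog is available:

```python
import numpy as np
from scipy.optimize import linprog

def ccr_input_efficiency(X, Y):
    """Input-oriented CCR efficiency of each DMU (multiplier form).

    For DMU o: maximize u.y_o subject to v.x_o = 1 and
    u.y_j - v.x_j <= 0 for every DMU j, with u, v >= 0.
    """
    n, m = X.shape
    _, s = Y.shape
    scores = []
    for o in range(n):
        c = np.concatenate([-Y[o], np.zeros(m)])             # maximize u.y_o
        A_ub = np.hstack([Y, -X])                            # u.y_j - v.x_j <= 0
        b_ub = np.zeros(n)
        A_eq = np.concatenate([np.zeros(s), X[o]])[None, :]  # v.x_o = 1
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                      bounds=(0, None), method="highs")
        scores.append(-res.fun)
    return np.array(scores)

# Hypothetical road-collection data: inputs = (route length, truck count),
# output = solid waste collected
X = np.array([[2.0, 1.0], [4.0, 2.0], [5.0, 1.0]])
Y = np.array([[4.0], [4.0], [2.0]])
eff = ccr_input_efficiency(X, Y)
```

In this toy data the second DMU uses exactly twice the first DMU's inputs for the same output, so its efficiency is 0.5.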
NASA Technical Reports Server (NTRS)
Fortenbaugh, R. L.
1980-01-01
Equations incorporated in a VATOL six degree of freedom off-line digital simulation program and data for the Vought SF-121 VATOL aircraft concept which served as the baseline for the development of this program are presented. The equations and data are intended to facilitate the development of a piloted VATOL simulation. The equation presentation format is to state the equations which define a particular model segment. Listings of constants required to quantify the model segment, input variables required to exercise the model segment, and output variables required by other model segments are included. In several instances a series of input or output variables are followed by a section number in parentheses which identifies the model segment of origination or termination of those variables.
Tóth, Miklós; Doorduin, Janine; Häggkvist, Jenny; Varrone, Andrea; Amini, Nahid; Halldin, Christer; Gulyás, Balázs
2015-01-01
Molecular imaging of the 18 kDa translocator protein (TSPO) with positron emission tomography (PET) is of great value for studying neuroinflammation in rodents longitudinally. Quantification of the TSPO in rodents is, however, quite challenging. There is no suitable reference region, and the use of plasma-derived input is not an option for longitudinal studies. The aim of this study was therefore to evaluate the use of the standardized uptake value (SUV) as an outcome measure for TSPO imaging in rodent brain PET studies, using [11C]PBR28. In the first part of the study, healthy male Wistar rats (n = 4) were used to determine the correlation between the distribution volume (VT, calculated with Logan graphical analysis) and the SUV. In the second part, healthy male Wistar rats (n = 4) and healthy male C57BL/6J mice (n = 4) were used to determine the test-retest variability of the SUV, with a 7-day interval between measurements. Dynamic PET scans of 63 minutes were acquired with a nanoScan PET/MRI and nanoScan PET/CT. An MRI scan was made for anatomical reference with each measurement. The whole-brain VT of [11C]PBR28 in rats was 42.9 ± 1.7. A statistically significant correlation (r2 = 0.96; p < 0.01) was found between the VT and the SUV. The test-retest variability in 8 brain regions ranged from 8 to 20% in rats and from 7 to 23% in mice. The intraclass correlation coefficient (ICC) was acceptable to excellent for rats, but poor to acceptable for mice. The SUV of [11C]PBR28 showed a high correlation with VT as well as good test-retest variability. For future longitudinal small-animal PET studies the SUV can thus be used to describe [11C]PBR28 uptake in healthy brain tissue. Based on the present observations, further studies are needed to explore the applicability of this approach in small-animal disease models, with special regard to neuroinflammatory models.
Mathematical models of the simplest fuzzy PI/PD controllers with skewed input and output fuzzy sets.
Mohan, B M; Sinha, Arpita
2008-07-01
This paper unveils mathematical models for fuzzy PI/PD controllers which employ two skewed fuzzy sets for each of the two input variables and three skewed fuzzy sets for the output variable. The basic constituents of these models are Gamma-type and L-type membership functions for each input, trapezoidal/triangular membership functions for the output, intersection/algebraic product triangular norm, maximum/drastic sum triangular conorm, Mamdani minimum/Larsen product/drastic product inference method, and the center of sums defuzzification method. The existing simplest fuzzy PI/PD controller structures derived via symmetrical fuzzy sets become special cases of the mathematical models revealed in this paper. Finally, a numerical example along with its simulation results is included to demonstrate the effectiveness of the simplest fuzzy PI controllers.
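The Gamma-type, L-type, and triangular membership functions named above have simple piecewise-linear forms. A sketch of the generic shapes (not the paper's specific skewed parameterization), together with a Mamdani-minimum firing strength:

```python
def gamma_mf(x, a, b):
    """Gamma-type membership: 0 below a, linear ramp on [a, b], 1 above b."""
    if x <= a:
        return 0.0
    if x >= b:
        return 1.0
    return (x - a) / (b - a)

def l_mf(x, a, b):
    """L-type membership: mirror image of the Gamma type (1 below a, ramp down)."""
    return 1.0 - gamma_mf(x, a, b)

def tri_mf(x, a, b, c):
    """Triangular membership with peak at b and feet at a and c."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

# Mamdani-minimum firing strength of a rule "error is Positive AND delta is Negative"
strength = min(gamma_mf(0.3, -1.0, 1.0), l_mf(-0.2, -1.0, 1.0))
```

Note that the Gamma-type and L-type functions are complementary on the same interval, which is what makes the two-set input partition a partition of unity.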
DOE Office of Scientific and Technical Information (OSTI.GOV)
Behrang, M.A.; Assareh, E.; Ghanbarzadeh, A.
2010-08-15
The main objective of the present study is to predict daily global solar radiation (GSR) on a horizontal surface, based on meteorological variables, using different artificial neural network (ANN) techniques. Daily mean air temperature, relative humidity, sunshine hours, evaporation, and wind speed values between 2002 and 2006 for Dezful city in Iran (32°16'N, 48°25'E) are used in this study. In order to consider the effect of each meteorological variable on daily GSR prediction, the following six combinations of input variables are considered:
(I) Day of the year, daily mean air temperature and relative humidity as inputs and daily GSR as output.
(II) Day of the year, daily mean air temperature and sunshine hours as inputs and daily GSR as output.
(III) Day of the year, daily mean air temperature, relative humidity and sunshine hours as inputs and daily GSR as output.
(IV) Day of the year, daily mean air temperature, relative humidity, sunshine hours and evaporation as inputs and daily GSR as output.
(V) Day of the year, daily mean air temperature, relative humidity, sunshine hours and wind speed as inputs and daily GSR as output.
(VI) Day of the year, daily mean air temperature, relative humidity, sunshine hours, evaporation and wind speed as inputs and daily GSR as output.
Multi-layer perceptron (MLP) and radial basis function (RBF) neural networks are applied for daily GSR modeling based on the six proposed combinations. The measured data between 2002 and 2005 are used to train the neural networks, while the data for 214 days from 2006 are used as testing data. The comparison of the results obtained from the ANNs and different conventional GSR prediction (CGSRP) models shows very good improvements (i.e., the predicted values of the best ANN model (MLP-V) have a mean absolute percentage error (MAPE) of about 5.21% versus 10.02% for the best CGSRP model (CGSRP 5)).
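The MAPE figure of merit quoted above is straightforward to compute; a small sketch with toy values, not the Dezful measurements:

```python
import numpy as np

def mape(actual, predicted):
    """Mean absolute percentage error, the metric used to rank the models."""
    actual = np.asarray(actual, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    return 100.0 * np.mean(np.abs((actual - predicted) / actual))

# Toy comparison: a 10% error on each of two days gives MAPE = 10%
score = mape([10.0, 20.0], [9.0, 22.0])
```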
Fuzzy Neuron: Method and Hardware Realization
NASA Technical Reports Server (NTRS)
Krasowski, Michael J.; Prokop, Norman F.
2014-01-01
This innovation represents a method by which single-to-multi-input, single-to-many-output system transfer functions can be estimated from input/output data sets. This innovation can be run in the background while a system is operating under other means (e.g., through human operator effort), or may be utilized offline using data sets created from observations of the estimated system. It utilizes a set of fuzzy membership functions spanning the input space for each input variable. Linear combiners associated with combinations of input membership functions are used to create the output(s) of the estimator. Coefficients are adjusted online through the use of learning algorithms.
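The described estimator (fuzzy membership functions spanning the input space, with linear combiners adapted online by a learning algorithm) can be illustrated on a one-input toy system. This is a simplified LMS-style sketch under assumed parameters, not the actual hardware realization:

```python
import numpy as np

def tri_basis(x, centers, width):
    """Overlapping triangular membership values, normalized to sum to one."""
    phi = np.maximum(0.0, 1.0 - np.abs(x - centers) / width)
    s = phi.sum()
    return phi / s if s > 0 else phi

def train_fuzzy_estimator(xs, ys, centers, width, lr=0.5, epochs=50):
    """Online LMS adaptation of the linear-combiner weights attached to the
    fuzzy membership functions."""
    w = np.zeros_like(centers)
    for _ in range(epochs):
        for x, y in zip(xs, ys):
            phi = tri_basis(x, centers, width)
            w += lr * (y - w @ phi) * phi          # LMS weight update
    return w

centers = np.linspace(0.0, 1.0, 6)
xs = np.linspace(0.0, 1.0, 21)
ys = xs ** 2                                       # "system" observed via input/output data
w = train_fuzzy_estimator(xs, ys, centers, width=0.2)
pred = np.array([w @ tri_basis(x, centers, 0.2) for x in xs])
err = np.max(np.abs(pred - ys))
```

Because the normalized triangular memberships form a partition of unity, the trained combiner behaves like a piecewise-linear interpolant of the unknown transfer function.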
Group interaction and flight crew performance
NASA Technical Reports Server (NTRS)
Foushee, H. Clayton; Helmreich, Robert L.
1988-01-01
The application of human-factors analysis to the performance of aircraft-operation tasks by the crew as a group is discussed in an introductory review and illustrated with anecdotal material. Topics addressed include the function of a group in the operational environment, the classification of group performance factors (input, process, and output parameters), input variables and the flight crew process, and the effect of process variables on performance. Consideration is given to aviation safety issues, techniques for altering group norms, ways of increasing crew effort and coordination, and the optimization of group composition.
NASA Astrophysics Data System (ADS)
Borovsky, Joseph E.
2017-12-01
Time-integral correlations are examined between the geosynchronous relativistic electron flux index Fe1.2 and 31 variables of the solar wind and magnetosphere. An "evolutionary algorithm" is used to maximize correlations. Time integrations (into the past) of the variables are found to be superior to time-lagged variables for maximizing correlations with the radiation belt. Physical arguments are given as to why. Dominant correlations are found for the substorm-injected electron flux at geosynchronous orbit and for the pressure of the ion plasma sheet. Different sets of variables are constructed and correlated with Fe1.2: some sets maximize the correlations, and some sets are based on purely solar wind variables. Examining known physical mechanisms that act on the radiation belt, sets of correlations are constructed (1) using magnetospheric variables that control those physical mechanisms and (2) using the solar wind variables that control those magnetospheric variables. Fe1.2-increasing intervals are correlated separately from Fe1.2-decreasing intervals, and the introduction of autoregression into the time-integral correlations is explored. A great impediment to discerning physical cause and effect from the correlations is the fact that all solar wind variables are intercorrelated and carry much of the same information about the time sequence of the solar wind that drives the time sequence of the magnetosphere.
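The advantage of time integration over simple time lags can be demonstrated on synthetic data: when a response is built by exponentially weighted integration of a driver, it correlates far better with the integrated driver than with any single lagged copy. A sketch with synthetic series (not the Fe1.2 dataset):

```python
import numpy as np

rng = np.random.default_rng(1)
n, tau = 2000, 20.0
driver = rng.normal(size=n)                     # stand-in for a solar wind variable

# response = exponentially time-integrated driver, plus measurement noise
kernel = np.exp(-np.arange(200) / tau)
clean = np.convolve(driver, kernel)[:n]
response = clean + 0.3 * rng.normal(size=n)

def corr(a, b):
    return np.corrcoef(a, b)[0, 1]

c_lag = corr(response[200:], np.roll(driver, 10)[200:])   # time-lagged driver
c_int = corr(response[200:], clean[200:])                 # time-integrated driver
```

The first 200 samples are discarded so that the convolution transient does not bias either correlation.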
Nelson, Sarah J.; Webster, Katherine E.; Loftin, Cynthia S.; Weathers, Kathleen C.
2013-01-01
Major ion and mercury (Hg) inputs to terrestrial ecosystems include both wet and dry deposition (total deposition). Estimating total deposition to sensitive receptor sites is hampered by limited information regarding its spatial heterogeneity and seasonality. We used measurements of throughfall flux, which includes atmospheric inputs to forests and the net effects of canopy leaching or uptake, for ten major ions and Hg collected during 35 time periods in 1999–2005 at over 70 sites within Acadia National Park, Maine to (1) quantify coherence in temporal dynamics of seasonal throughfall deposition and (2) examine controls on these patterns at multiple scales. We quantified temporal coherence as the correlation between all possible site pairs for each solute on a seasonal basis. In the summer growing season and autumn, coherence among pairs of sites with similar vegetation was stronger than for site-pairs that differed in vegetation suggesting that interaction with the canopy and leaching of solutes differed in coniferous, deciduous, mixed, and shrub or open canopy sites. The spatial pattern in throughfall hydrologic inputs across Acadia National Park was more variable during the winter snow season, suggesting that snow re-distribution affects net hydrologic input, which consequently affects chemical flux. Sea-salt corrected calcium concentrations identified a shift in air mass sources from maritime in winter to the continental industrial corridor in summer. Our results suggest that the spatial pattern of throughfall hydrologic flux, dominant seasonal air mass source, and relationship with vegetation in winter differ from the spatial pattern of throughfall flux in these solutes in summer and autumn. The coherence approach applied here made clear the strong influence of spatial heterogeneity in throughfall hydrologic inputs and a maritime air mass source on winter patterns of throughfall flux. 
By contrast, vegetation type was the most important influence on throughfall chemical flux in summer and autumn.
NASA Astrophysics Data System (ADS)
Xiong, Guoming; Cumming, Paul; Todica, Andrei; Hacker, Marcus; Bartenstein, Peter; Böning, Guido
2012-12-01
Absolute quantitation of the cerebral metabolic rate for glucose (CMRglc) can be obtained in positron emission tomography (PET) studies when serial measurements of the arterial [18F]-fluoro-deoxyglucose (FDG) input are available. Since this is not always practical in PET studies of rodents, there has been considerable interest in defining an image-derived input function (IDIF) by placing a volume of interest (VOI) within the left ventricle of the heart. However, spill-in arising from trapping of FDG in the myocardium often leads to progressive contamination of the IDIF, which propagates to underestimation of the magnitude of CMRglc. We therefore developed a novel, non-invasive method for correcting the IDIF without scaling to a blood sample. To this end, we first obtained serial arterial samples and dynamic FDG-PET data of the head and heart in a group of eight anaesthetized rats. We fitted a bi-exponential function to the serial measurements of the IDIF, and then used the linear graphical Gjedde-Patlak method to describe the accumulation in myocardium. We next estimated the magnitude of myocardial spill-in reaching the left ventricle VOI by assuming a Gaussian point-spread function, and corrected the measured IDIF for this estimated spill-in. Finally, we calculated parametric maps of CMRglc using the corrected IDIF and, for the sake of comparison, relative to serial blood sampling from the femoral artery. The uncorrected IDIF resulted in 20% underestimation of the magnitude of CMRglc relative to the gold-standard arterial input method. However, there was no bias with the corrected IDIF, which was robust to the variable extent of myocardial tracer uptake, such that there was a very high correlation between individual CMRglc measurements using the corrected IDIF and gold-standard arterial input results. Based on simulation, we furthermore find that electrocardiogram (ECG) gating is not necessary for IDIF quantitation using our approach.
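Fitting a bi-exponential to the measured input function, the first step of the correction described above, can be sketched with a standard nonlinear least-squares call. The time grid and parameter values below are illustrative only, and SciPy is assumed available:

```python
import numpy as np
from scipy.optimize import curve_fit

def biexp(t, a1, k1, a2, k2):
    """Bi-exponential model fitted to the tail of the image-derived input."""
    return a1 * np.exp(-k1 * t) + a2 * np.exp(-k2 * t)

t = np.linspace(1.0, 60.0, 40)            # minutes after the vascular peak (illustrative)
true = (50.0, 0.5, 10.0, 0.02)            # assumed fast and slow components
measured = biexp(t, *true)                # noise-free data for the sketch
popt, _ = curve_fit(biexp, t, measured, p0=(40.0, 0.3, 8.0, 0.05))
```

With well-separated decay constants and a reasonable starting guess, the fit recovers both components; with noisy data, bounds on the rate constants help keep the two exponentials from swapping roles.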
Simulating maize yield and biomass with spatial variability of soil field capacity
USDA-ARS?s Scientific Manuscript database
Spatial variability in field soil water and other properties is a challenge for system modelers who use only representative values for model inputs, rather than their distributions. In this study, we compared simulation results from a calibrated model with spatial variability of soil field capacity ...
NASA Astrophysics Data System (ADS)
Dyar, M. D.; Carmosino, M. L.; Breves, E. A.; Ozanne, M. V.; Clegg, S. M.; Wiens, R. C.
2012-04-01
A remote laser-induced breakdown spectrometer (LIBS) designed to simulate the ChemCam instrument on the Mars Science Laboratory Rover Curiosity was used to probe 100 geologic samples at a 9-m standoff distance. ChemCam consists of an integrated remote LIBS instrument that will probe samples up to 7 m from the mast of the rover and a remote micro-imager (RMI) that will record context images. The elemental compositions of 100 igneous and highly-metamorphosed rocks are determined with LIBS using three variations of multivariate analysis, with a goal of improving the analytical accuracy. Two forms of partial least squares (PLS) regression are employed with finely-tuned parameters: PLS-1 regresses a single response variable (elemental concentration) against the observation variables (spectra, or intensity at each of 6144 spectrometer channels), while PLS-2 simultaneously regresses multiple response variables (concentrations of the ten major elements in rocks) against the observation predictor variables, taking advantage of natural correlations between elements. Those results are contrasted with those from the multivariate regression technique of the least absolute shrinkage and selection operator (lasso), which is a penalized shrunken regression method that selects the specific channels for each element that explain the most variance in the concentration of that element. To make this comparison, we use results of cross-validation and of held-out testing, and employ unscaled and uncentered spectral intensity data because all of the input variables are already in the same units. Results demonstrate that the lasso, PLS-1, and PLS-2 all yield comparable results in terms of accuracy for this dataset. However, the interpretability of these methods differs greatly in terms of fundamental understanding of LIBS emissions. 
PLS techniques generate principal components, linear combinations of intensities at any number of spectrometer channels, which explain as much variance in the response variables as possible while avoiding multicollinearity between principal components. When the selected number of principal components is projected back into the original feature space of the spectra, 6144 correlation coefficients are generated, a small fraction of which are mathematically significant to the regression. In contrast, the lasso models require only a small number (< 24) of non-zero correlation coefficients (β values) to determine the concentration of each of the ten major elements. Causality between the positively-correlated emission lines chosen by the lasso and the elemental concentration was examined. In general, the higher the lasso coefficient (β), the greater the likelihood that the selected line results from an emission of that element. Emission lines with negative β values should arise from elements that are anti-correlated with the element being predicted. For elements except Fe, Al, Ti, and P, the lasso-selected wavelength with the highest β value corresponds to the element being predicted, e.g. 559.8 nm for neutral Ca. However, the specific lines chosen by the lasso with positive β values are not always those from the element being predicted. Other wavelengths and the elements that most strongly correlate with them to predict concentration are obviously related to known geochemical correlations or close overlap of emission lines, while others must result from matrix effects. Use of the lasso technique thus directly informs our understanding of the underlying physical processes that give rise to LIBS emissions by determining which lines can best represent concentration, and which lines from other elements are causing matrix effects.
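The lasso's ability to select a small number of non-zero coefficients, in contrast to the dense loadings of PLS, can be reproduced with a compact coordinate-descent sketch (synthetic spectra with 10 stand-in channels rather than 6144):

```python
import numpy as np

def soft_threshold(rho, lam):
    return np.sign(rho) * max(abs(rho) - lam, 0.0)

def lasso_cd(X, y, lam, n_iter=200):
    """Lasso via cyclic coordinate descent; lam is the l1 penalty weight
    on the same scale as X.T @ r."""
    n, p = X.shape
    w = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0)
    for _ in range(n_iter):
        for j in range(p):
            r_j = y - X @ w + X[:, j] * w[j]      # partial residual excluding channel j
            rho = X[:, j] @ r_j
            w[j] = soft_threshold(rho, lam) / col_sq[j]
    return w

rng = np.random.default_rng(2)
n, p = 100, 10                                    # p stands in for the spectrometer channels
X = rng.normal(size=(n, p))
y = 3.0 * X[:, 0] - 2.0 * X[:, 1] + 0.1 * rng.normal(size=n)
w = lasso_cd(X, y, lam=0.1 * n)
selected = np.flatnonzero(np.abs(w) > 1e-6)       # the few channels with non-zero beta
```

The soft-thresholding step zeros out channels whose correlation with the residual falls below the penalty, which is why the fitted model retains only a handful of non-zero coefficients per element.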
Prediction of surface distress using neural networks
NASA Astrophysics Data System (ADS)
Hamdi, Hadiwardoyo, Sigit P.; Correia, A. Gomes; Pereira, Paulo; Cortez, Paulo
2017-06-01
Road infrastructures contribute to a healthy economy through the sustainable distribution of goods and services. A road network requires appropriately programmed maintenance treatments in order to keep road assets in good condition, providing maximum safety for road users under a cost-effective approach. Surface distress is the key element for identifying road condition and may be generated by many different factors. In this paper, a new approach is aimed at predicting Surface Distress Index (SDI) values following a data-driven approach, applied using data obtained from the Integrated Road Management System (IRMS) database. Artificial Neural Networks (ANNs) are used to predict the SDI index using input variables related to surface distress, i.e., crack area and width, pothole, rutting, patching and depression. The achieved results show that the ANN is able to predict SDI with a high correlation factor (R2 = 0.996). Moreover, a sensitivity analysis was applied to the ANN model, revealing the influence of the most relevant input parameters for SDI prediction, namely rutting (59.8%), crack width (29.9%), crack area (5.0%), patching (3.0%), pothole (1.7%) and depression (0.3%).
Diagnosis of periodontal diseases using different classification algorithms: a preliminary study.
Ozden, F O; Özgönenel, O; Özden, B; Aydogdu, A
2015-01-01
The purpose of the proposed study was to develop an identification unit for classifying periodontal diseases using support vector machine (SVM), decision tree (DT), and artificial neural networks (ANNs). A total of 150 patients was divided into two groups: training (100) and testing (50). The codes created for risk factors, periodontal data, and radiographic bone loss were formed into a matrix structure and regarded as inputs for the classification unit. Six periodontal conditions were the outputs of the classification unit. The accuracy of the suggested methods was compared according to their resolution and working time. DT and SVM were best at classifying the periodontal diseases, with high accuracy according to the clinical research based on 150 patients. The performances of SVM and DT were found to be 98%, with total computational times of 19.91 and 7.00 s, respectively. ANN had the worst correlation between input and output variables, and its performance was calculated as 46%. SVM and DT appeared to be sufficiently complex to reflect all the factors associated with periodontal status, yet simple enough to be understandable and practical as a decision-making aid for the prediction of periodontal disease.
Szaleniec, Maciej
2012-01-01
Artificial Neural Networks (ANNs) are introduced as robust and versatile tools in quantitative structure-activity relationship (QSAR) modeling. Their application to the modeling of enzyme reactivity is discussed, along with methodological issues. Methods of input variable selection, optimization of network internal structure, data set division and model validation are discussed. The application of ANNs in the modeling of enzyme activity over the last 20 years is briefly recounted. The discussed methodology is exemplified by the case of ethylbenzene dehydrogenase (EBDH). Intelligent Problem Solver and genetic algorithms are applied for input vector selection, whereas k-means clustering is used to partition the data into training and test cases. The obtained models exhibit high correlation between the predicted and experimental values (R(2) > 0.9). Sensitivity analyses and study of the response curves are used as tools for the physicochemical interpretation of the models in terms of the EBDH reaction mechanism. Neural networks are shown to be a versatile tool for the construction of robust QSAR models that can be applied to a range of aspects important in drug design and the prediction of biological activity.
Interannual Variability in Intercontinental Transport
NASA Technical Reports Server (NTRS)
Gupta, Mohan; Douglass, Anne; Kawa, S. Randy; Pawson, Steven
2003-01-01
We have investigated the importance of intercontinental transport using source-receptor relationships. A global radon-like tracer and seven regional tracers were used in three-dimensional model simulations to quantify their contributions to column burdens and vertical profiles at worldwide receptors. The sensitivity of these contributions to meteorological input was examined using different years of meteorology in two atmospheric simulations. Results show that Asian emissions influence tracer distributions in the eastern downwind regions, extending as far as Europe, with major contributions in the mid- and upper troposphere. On the western and eastern sides of the US, Asian contributions to annual average column burdens are 37% and 5%, respectively, with strong monthly variations. At an altitude of 10 km, these contributions are 75% and 25%, respectively. North American emissions contribute more than 15% to the annual average column burden and about 50% at 8 km altitude over the European region. Contributions from tropical African emissions are widespread in both hemispheres. Differences in meteorological input cause non-uniform redistribution of tracer mass throughout the troposphere at all receptors. We also show that in model-model and model-data comparisons, correlation analysis of a tracer's spatial gradients provides an added measure of model performance.
NASA Technical Reports Server (NTRS)
Orme, John S.; Gilyard, Glenn B.
1992-01-01
Integrated engine-airframe optimal control technology may significantly improve aircraft performance. This technology requires a reliable and accurate parameter estimator to predict unmeasured variables. To develop this technology base, NASA Dryden Flight Research Facility (Edwards, CA), McDonnell Aircraft Company (St. Louis, MO), and Pratt & Whitney (West Palm Beach, FL) have developed and flight-tested an adaptive performance seeking control system which optimizes the quasi-steady-state performance of the F-15 propulsion system. This paper presents flight and ground test evaluations of the propulsion system parameter estimation process used by the performance seeking control system. The estimator consists of a compact propulsion system model and an extended Kalman filter. The extended Kalman filter estimates five engine component deviation parameters from measured inputs. The compact model uses measurements and Kalman-filter estimates as inputs to predict unmeasured propulsion parameters such as net propulsive force and fan stall margin. The ability to track trends and estimate absolute values of propulsion system parameters was demonstrated. For example, thrust stand results show a good correlation, especially in trends, between the performance seeking control estimated and measured thrust.
Self-Consistent Scheme for Spike-Train Power Spectra in Heterogeneous Sparse Networks.
Pena, Rodrigo F O; Vellmer, Sebastian; Bernardi, Davide; Roque, Antonio C; Lindner, Benjamin
2018-01-01
Recurrent networks of spiking neurons can be in an asynchronous state characterized by low or absent cross-correlations and spike statistics which resemble those of cortical neurons. Although spatial correlations are negligible in this state, neurons can show pronounced temporal correlations in their spike trains that can be quantified by the autocorrelation function or the spike-train power spectrum. Depending on cellular and network parameters, correlations display diverse patterns (ranging from simple refractory-period effects and stochastic oscillations to slow fluctuations) and it is generally not well-understood how these dependencies come about. Previous work has explored how the single-cell correlations in a homogeneous network (excitatory and inhibitory integrate-and-fire neurons with nearly balanced mean recurrent input) can be determined numerically from an iterative single-neuron simulation. Such a scheme is based on the fact that every neuron is driven by the network noise (i.e., the input currents from all its presynaptic partners) but also contributes to the network noise, leading to a self-consistency condition for the input and output spectra. Here we first extend this scheme to homogeneous networks with strong recurrent inhibition and a synaptic filter, in which instabilities of the previous scheme are avoided by an averaging procedure. We then extend the scheme to heterogeneous networks in which (i) different neural subpopulations (e.g., excitatory and inhibitory neurons) have different cellular or connectivity parameters; (ii) the number and strength of the input connections are random (Erdős-Rényi topology) and thus different among neurons. In all heterogeneous cases, neurons are lumped in different classes each of which is represented by a single neuron in the iterative scheme; in addition, we make a Gaussian approximation of the input current to the neuron. 
These approximations seem to be justified over a broad range of parameters as indicated by comparison with simulation results of large recurrent networks. Our method can help to elucidate how network heterogeneity shapes the asynchronous state in recurrent neural networks.
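The self-consistency condition described above — the network noise a neuron receives must match the spiking output it contributes — can be caricatured as a fixed-point iteration. A toy sketch, with a hypothetical scalar contraction standing in for the map from input-current spectrum to output spike-train spectrum:

```python
# Fixed-point iteration in miniature: iterate "output statistic = F(input
# statistic)" until input and output agree. F here is an invented scalar
# contraction, not the actual spectrum-to-spectrum map of the scheme.
def F(s):
    return 0.5 * s + 1.0     # hypothetical contraction; fixed point s = 2

s = 0.0
for _ in range(200):
    s_next = F(s)
    if abs(s_next - s) < 1e-12:
        break                 # input and output statistics now agree
    s = s_next
```

In the actual scheme the iterated object is a whole power spectrum produced by single-neuron simulations rather than a scalar, but the convergence logic is the same.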
Cross-correlation of heartbeat and respiration rhythms
NASA Astrophysics Data System (ADS)
Capurro, A.; Malta, C. P.; Diambra, L.; Contreras, P.; Migliaro, E. R.
2005-10-01
The cross-correlation function between respiration and heart-beat interval series shows that during metronomized breathing (i.e., breathing paced by a metronome) the heart beat follows the respiration more closely than during spontaneous breathing. We reproduced the heart-beat interval modulations during metronomized breathing using a biophysical model of the sinoatrial node excited by an input signal formed by the recorded respiration. In the case of spontaneous breathing, good agreement with the experimental data was obtained only by using an input signal formed by the sum of the recorded respiration and a realization of correlated noise.
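A normalized cross-correlation at integer lags, as used to relate the two rhythms, can be sketched as follows; the series here are synthetic (a toy oscillation and a one-sample-shifted copy), and the lag of the peak reveals the delay between them:

```python
# Normalized cross-correlation of two series at integer lags, in [-1, 1].
def xcorr(x, y, max_lag):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sx = sum((v - mx) ** 2 for v in x) ** 0.5
    sy = sum((v - my) ** 2 for v in y) ** 0.5
    out = {}
    for lag in range(-max_lag, max_lag + 1):
        s = 0.0
        for i in range(n):
            j = i + lag
            if 0 <= j < n:
                s += (x[i] - mx) * (y[j] - my)
        out[lag] = s / (sx * sy)
    return out

# toy "respiration" and a "heart-beat interval" series shifted one sample
resp = [0, 1, 0, -1, 0, 1, 0, -1, 0, 1]
hbi  = [1, 0, -1, 0, 1, 0, -1, 0, 1, 0]
c = xcorr(resp, hbi, 2)      # peak at lag -1 exposes the one-sample shift
```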
Input Correlations for Irradiation Creep of FeCrAl and SiC Based on In-Pile Halden Test Results
DOE Office of Scientific and Technical Information (OSTI.GOV)
Terrani, K. A.; Karlsen, T. M.; Yamamoto, Yukinori
2016-05-01
Swelling and creep behavior of wrought FeCrAl alloys and CVD-SiC, two candidate accident-tolerant fuel cladding materials, are being examined using in-pile tests at the Halden reactor. The outcomes of these tests are material property correlations that serve as inputs to fuel performance analysis tools. The results are discussed and compared with what is available in the literature from irradiation experiments in other reactors and from out-of-pile tests. Specific recommendations on which correlations to use for swelling, thermal creep, and irradiation creep for each material are provided in this document.
Simulated lumped-parameter system reduced-order adaptive control studies
NASA Technical Reports Server (NTRS)
Johnson, C. R., Jr.; Lawrence, D. A.; Taylor, T.; Malakooti, M. V.
1981-01-01
Two methods of interpreting the misbehavior of reduced-order adaptive controllers are discussed. The first method is based on an input-output description of the system and the second on a state-variable description. The implementation of a single-input, single-output autoregressive moving-average (ARMA) system is considered.
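The input-output recursion of a single-input, single-output autoregressive moving-average system can be sketched as below; the coefficients are illustrative, not taken from the study:

```python
# SISO ARMA input-output recursion: y[t] = a1*y[t-1] + b0*u[t] + b1*u[t-1].
def arma(u, a1=0.5, b0=1.0, b1=0.3):
    y = []
    for t, ut in enumerate(u):
        yt = b0 * ut \
             + (b1 * u[t - 1] if t >= 1 else 0.0) \
             + (a1 * y[t - 1] if t >= 1 else 0.0)
        y.append(yt)
    return y

# unit-step response settles at (b0 + b1) / (1 - a1) = 2.6
step = arma([1.0] * 50)
```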
Loess as an environmental archive of atmospheric trace element deposition
NASA Astrophysics Data System (ADS)
Blazina, T.; Winkel, L. H.
2013-12-01
Environmental archives such as ice cores, lake sediment cores, and peat cores have been used extensively to reconstruct past atmospheric deposition of trace elements. These records have provided information about how anthropogenic activities such as mining and fossil fuel combustion have disturbed the natural cycles of various atmospherically transported trace elements (e.g. Pb, Hg and Se). While these records are invaluable for tracing human impacts on such trace elements, they often provide limited information about the long-term natural cycles of these elements. An assumption of these records is that the observed variations in trace element input, prior to any assumed anthropogenic perturbations, represent the full range of natural variations. However, records such as those mentioned above, which extend back at most ~400 kyr, may not capture the potentially large variations of trace element input occurring over millions of years. Windblown loess sediments, often representing atmospheric deposition over time scales >1 Ma, are the most widely distributed terrestrial sediments on Earth. These deposits have been used extensively to reconstruct continental climate variability throughout the Quaternary and late Neogene periods. In addition to being a valuable record of continental climate change, loess deposits may represent a long-term environmental archive of atmospheric trace element deposition and may be combined with paleoclimate records to elucidate how fluctuations in climate have impacted the natural cycles of such elements. Our research uses the loess-paleosol deposits on the Chinese Loess Plateau (CLP) to quantify how atmospheric deposition of trace elements has fluctuated in central China over the past 6.8 Ma. The CLP has been used extensively to reconstruct past changes in the East Asian monsoon (EAM) system. We present a suite of trace element concentration records (e.g. Pb, Hg, and Se) from the CLP which exemplify how loess deposits can be used as an environmental archive to reconstruct long-term natural variations in atmospheric trace element input. By comparing paleomonsoon proxy data with geochemical data we can directly correlate variations in atmospheric trace element input with fluctuations in the EAM. For example, we are able to link Se input into the CLP to EAM-derived precipitation. In interglacial climatic periods from 2.3-1.56 Ma and 1.50-1.29 Ma, we find very strong positive correlations between Se concentration and the summer monsoon index, a proxy for effective precipitation. In later interglacial periods from 1.26-0.83 Ma and 0.78-0.16 Ma, we find that dust input plays a greater role. Our findings demonstrate that the CLP is a valuable environmental archive of atmospheric trace element deposition and suggest that other loess deposits worldwide may serve as useful records for investigating long-term natural variations in atmospheric trace element cycling.
Correlation and agreement: overview and clarification of competing concepts and measures.
Liu, Jinyuan; Tang, Wan; Chen, Guanqin; Lu, Yin; Feng, Changyong; Tu, Xin M
2016-04-25
Agreement and correlation are widely-used concepts that assess the association between variables. Although similar and related, they represent completely different notions of association. Assessing agreement between variables assumes that the variables measure the same construct, while correlation of variables can be assessed for variables that measure completely different constructs. This conceptual difference requires the use of different statistical methods, and when assessing agreement or correlation, the statistical method may vary depending on the distribution of the data and the interest of the investigator. For example, the Pearson correlation, a popular measure of correlation between continuous variables, is only informative when applied to variables that have linear relationships; it may be non-informative or even misleading when applied to variables that are not linearly related. Likewise, the intraclass correlation, a popular measure of agreement between continuous variables, may not provide sufficient information for investigators if the nature of poor agreement is of interest. This report reviews the concepts of agreement and correlation and discusses differences in the application of several commonly used measures.
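The distinction drawn here is easy to demonstrate numerically: two measurements can be perfectly correlated yet agree poorly. A sketch with made-up data, where one instrument reads systematically twice the other:

```python
# High correlation does not imply agreement: y is perfectly linearly
# related to x (Pearson r = 1) yet systematically larger.
def pearson_r(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

x = [1, 2, 3, 4, 5]
y = [2, 4, 6, 8, 10]                              # same construct, doubled
r = pearson_r(x, y)                               # perfect correlation: 1.0
bias = sum(b - a for a, b in zip(x, y)) / len(x)  # mean difference: 3.0
```

An agreement index such as the intraclass correlation would penalize the systematic bias that Pearson's r ignores entirely.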
A new polytopic approach for the unknown input functional observer design
NASA Astrophysics Data System (ADS)
Bezzaoucha, Souad; Voos, Holger; Darouach, Mohamed
2018-03-01
In this paper, a constructive procedure to design functional unknown input observers for nonlinear continuous-time systems is proposed under the polytopic Takagi-Sugeno framework. An equivalent representation of the nonlinear model is achieved using the sector nonlinearity transformation. Applying Lyapunov theory and the ? attenuation, linear matrix inequality (LMI) conditions are deduced and solved for feasibility to obtain the observer design matrices. To cope with the effect of unknown inputs, the classical approach of decoupling the unknown input from the linear case is used. Both algebraic and solver-based solutions are proposed (relaxed conditions). Necessary and sufficient conditions for the existence of the functional polytopic observer are given. For both approaches, the general and particular cases (measurable premise variables, full state estimation in the full- and reduced-order cases) are considered, and it is shown that the proposed conditions reduce to those presented for the standard linear case. To illustrate the proposed theoretical results, detailed numerical simulations are presented for a quadrotor aerial robot landing and a wastewater treatment plant. Both systems are highly nonlinear and represented in a T-S polytopic form with unmeasurable premise variables and unknown inputs.
NASA Astrophysics Data System (ADS)
Zhuo, L.; Mekonnen, M. M.; Hoekstra, A. Y.
2014-06-01
Water Footprint Assessment is a fast-growing field of research, but as yet little attention has been paid to the uncertainties involved. This study investigates the sensitivity of and uncertainty in crop water footprint (in m3 t-1) estimates related to uncertainties in important input variables. The study focuses on the green (from rainfall) and blue (from irrigation) water footprint of producing maize, soybean, rice, and wheat at the scale of the Yellow River basin in the period 1996-2005. A grid-based daily water balance model at a 5 by 5 arcmin resolution was applied to compute green and blue water footprints of the four crops in the Yellow River basin in the period considered. The one-at-a-time method was carried out to analyse the sensitivity of the crop water footprint to fractional changes of seven individual input variables and parameters: precipitation (PR), reference evapotranspiration (ET0), crop coefficient (Kc), crop calendar (planting date with constant growing degree days), soil water content at field capacity (Smax), yield response factor (Ky) and maximum yield (Ym). Uncertainties in crop water footprint estimates related to uncertainties in four key input variables: PR, ET0, Kc, and crop calendar were quantified through Monte Carlo simulations. The results show that the sensitivities and uncertainties differ across crop types. In general, the water footprint of crops is most sensitive to ET0 and Kc, followed by the crop calendar. Blue water footprints were more sensitive to input variability than green water footprints. The smaller the annual blue water footprint is, the higher its sensitivity to changes in PR, ET0, and Kc. The uncertainties in the total water footprint of a crop due to combined uncertainties in climatic inputs (PR and ET0) were about ±20% (at the 95% confidence level). The effect of uncertainties in ET0 was dominant compared to that of PR.
The uncertainties in the total water footprint of a crop as a result of combined key input uncertainties were on average ±30% (at 95% confidence level).
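The one-at-a-time method itself is simple to sketch: perturb each input in turn by a fixed fraction and record the relative change in output. The model and values below are a hypothetical stand-in, not the study's grid-based water balance model:

```python
# One-at-a-time (OAT) sensitivity: perturb each input by +10% in turn and
# record the relative change in output. Toy model: footprint rises with
# ET0*Kc and falls with precipitation (an assumption for illustration).
def model(pr, et0, kc):
    return et0 * kc / max(pr, 1e-9)

base = {"pr": 500.0, "et0": 900.0, "kc": 1.1}
y0 = model(**base)
sens = {}
for name in base:
    pert = dict(base)
    pert[name] *= 1.10                       # +10% perturbation
    sens[name] = (model(**pert) - y0) / y0   # relative output change
```

For this multiplicative toy model, ET0 and Kc each shift the output by exactly +10%, while extra precipitation lowers it, mirroring the sign structure reported in the abstract.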
Global growth and stability of agricultural yield decrease with pollinator dependence
Garibaldi, Lucas A.; Aizen, Marcelo A.; Klein, Alexandra M.; Cunningham, Saul A.; Harder, Lawrence D.
2011-01-01
Human welfare depends on the amount and stability of agricultural production, as determined by crop yield and cultivated area. Yield increases asymptotically with the resources provided by farmers’ inputs and environmentally sensitive ecosystem services. Declining yield growth with increased inputs prompts conversion of more land to cultivation, but at the risk of eroding ecosystem services. To explore the interdependence of agricultural production and its stability on ecosystem services, we present and test a general graphical model, based on Jensen's inequality, of yield–resource relations and consider implications for land conversion. For the case of animal pollination as a resource influencing crop yield, this model predicts that incomplete and variable pollen delivery reduces yield mean and stability (inverse of variability) more for crops with greater dependence on pollinators. Data collected by the Food and Agriculture Organization of the United Nations during 1961–2008 support these predictions. Specifically, crops with greater pollinator dependence had lower mean and stability in relative yield and yield growth, despite global yield increases for most crops. Lower yield growth was compensated by increased land cultivation to enhance production of pollinator-dependent crops. Area stability also decreased with pollinator dependence, as it correlated positively with yield stability among crops. These results reveal that pollen limitation hinders yield growth of pollinator-dependent crops, decreasing temporal stability of global agricultural production, while promoting compensatory land conversion to agriculture. Although we examined crop pollination, our model applies to other ecosystem services for which the benefits to human welfare decelerate as the maximum is approached. PMID:21422295
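The graphical model's core prediction follows from Jensen's inequality: for a concave (saturating) yield-resource curve, variable resource delivery lowers mean yield below that of steady delivery at the same average. A numerical sketch with a hypothetical saturating curve:

```python
# Jensen's inequality for a concave yield-resource curve: variable pollen
# delivery lowers mean yield relative to steady delivery at the same mean.
import random
random.seed(1)

def yield_curve(resource):          # hypothetical concave, saturating curve
    return resource / (1.0 + resource)

steady = yield_curve(1.0)           # steady delivery at the mean: 0.5
# variable delivery: uniform on [0, 2], same mean resource of 1.0
variable = sum(yield_curve(random.uniform(0.0, 2.0))
               for _ in range(100000)) / 100000
```

Because the curve bends downward, the losses in poor deliveries outweigh the gains in rich ones, so `variable` falls below `steady` (analytically, (2 − ln 3)/2 ≈ 0.451 versus 0.5).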
NASA Astrophysics Data System (ADS)
Ficchì, Andrea; Perrin, Charles; Andréassian, Vazken
2016-07-01
Hydro-climatic data at short time steps are considered essential to model the rainfall-runoff relationship, especially for short-duration hydrological events, typically flash floods. Also, using fine time step information may be beneficial when using or analysing model outputs at larger aggregated time scales. However, the actual gain in prediction efficiency using short time-step data is not well understood or quantified. In this paper, we investigate the extent to which the performance of hydrological modelling is improved by short time-step data, using a large set of 240 French catchments, for which 2400 flood events were selected. Six-minute rain gauge data were available and the GR4 rainfall-runoff model was run with precipitation inputs at eight different time steps ranging from 6 min to 1 day. Then model outputs were aggregated at seven different reference time scales ranging from sub-hourly to daily for a comparative evaluation of simulations at different target time steps. Three classes of model performance behaviour were found for the 240 test catchments: (i) significant improvement of performance with shorter time steps; (ii) performance insensitivity to the modelling time step; (iii) performance degradation as the time step becomes shorter. The differences between these groups were analysed based on a number of catchment and event characteristics. A statistical test highlighted the most influential explanatory variables for model performance evolution at different time steps, including flow auto-correlation, flood and storm duration, flood hydrograph peakedness, rainfall-runoff lag time and precipitation temporal variability.
Quantitative assessment of multiple sclerosis lesion load using CAD and expert input
NASA Astrophysics Data System (ADS)
Gertych, Arkadiusz; Wong, Alexis; Sangnil, Alan; Liu, Brent J.
2008-03-01
Multiple sclerosis (MS) is a frequently encountered neurological disease with a progressive but variable course affecting the central nervous system. Outline-based lesion quantification in the assessment of lesion load (LL) performed on magnetic resonance (MR) images is clinically useful and provides information about development and change reflecting overall disease burden. Methods of LL assessment that rely on human input are tedious, have higher intra- and inter-observer variability, and are more time-consuming than computerized automatic (CAD) techniques. At present it seems that methods based on human lesion identification preceded by non-interactive outlining by CAD are the best LL quantification strategies. We have developed a CAD system that automatically quantifies MS lesions, displays a 3-D lesion map, and appends radiological findings to the original images according to the current DICOM standard. The CAD is also capable of displaying and tracking changes and comparing a patient's separate MRI studies to determine disease progression. The findings are exported to a separate imaging tool for review and final approval by an expert. Capturing and standardized archiving of manual contours is also implemented. Similarity coefficients calculated from LL quantities in the collected exams show a good correlation between CAD-derived results and the expert's readings. Combining the CAD approach with expert interaction may improve the diagnostic work-up of MS patients through better reproducibility in LL assessment and reduced reading time for single MR or comparative exams. Inclusion of CAD-generated outlines as DICOM-compliant overlays in the image data can serve as a better reference in MS progression tracking.
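Overlap between a CAD-generated outline and an expert contour is commonly scored with a similarity coefficient; the abstract does not name the one used, so the Dice coefficient is shown here as a representative choice, on invented pixel masks:

```python
# Dice similarity coefficient between two binary lesion masks, each
# represented as a set of (row, col) pixels; 1.0 means perfect overlap.
def dice(a, b):
    a, b = set(a), set(b)
    return 2 * len(a & b) / (len(a) + len(b))

cad    = {(0, 0), (0, 1), (1, 0), (1, 1)}    # hypothetical CAD outline
expert = {(0, 1), (1, 0), (1, 1), (2, 1)}    # hypothetical expert outline
d = dice(cad, expert)                         # 2*3 / (4+4) = 0.75
```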
Design and analysis of composite structures with stress concentrations
NASA Technical Reports Server (NTRS)
Garbo, S. P.
1983-01-01
An overview of an analytic procedure which can be used to provide comprehensive stress and strength analysis of composite structures with stress concentrations is given. The methodology provides designer/analysts with a user-oriented procedure which, within acceptable engineering accuracy, accounts for the effects of a wide range of application design variables. The procedure permits the strength of arbitrary laminate constructions under general bearing/bypass load conditions to be predicted with only unnotched unidirectional strength and stiffness input data required. Included is a brief discussion of the relevancy of this analysis to the design of primary aircraft structure; an overview of the analytic procedure with theory/test correlations; and an example of the use and interaction of this strength analysis relative to the design of high-load transfer bolted composite joints.
Validation of Metrics as Error Predictors
NASA Astrophysics Data System (ADS)
Mendling, Jan
In this chapter, we test the validity of metrics that were defined in the previous chapter for predicting errors in EPC business process models. In Section 5.1, we provide an overview of how the analysis data is generated. Section 5.2 describes the sample of EPCs from practice that we use for the analysis. Here we discuss a disaggregation by the EPC model group and by error as well as a correlation analysis between metrics and error. Based on this sample, we calculate a logistic regression model for predicting error probability with the metrics as input variables in Section 5.3. In Section 5.4, we then test the regression function for an independent sample of EPC models from textbooks as a cross-validation. Section 5.5 summarizes the findings.
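The logistic regression of Section 5.3 — predicting error probability from model metrics — can be sketched in miniature with a single invented metric and plain gradient descent (the EPC study fits on real metric data; everything below is illustrative):

```python
# Logistic regression of error probability on one model metric, fitted by
# gradient descent on the log-loss. Data are invented: larger metric
# values (e.g. model size) go with errors (label 1).
import math

xs = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]   # metric value per model
ys = [0,   0,   0,   1,   1,   1  ]   # 1 = model contains an error

w, b = 0.0, 0.0
for _ in range(5000):
    gw = gb = 0.0
    for x, y in zip(xs, ys):
        p = 1.0 / (1.0 + math.exp(-(w * x + b)))   # predicted probability
        gw += (p - y) * x                           # log-loss gradients
        gb += (p - y)
    w -= 0.1 * gw
    b -= 0.1 * gb

def predict(x):
    return 1.0 / (1.0 + math.exp(-(w * x + b)))
```

After fitting, models with small metric values get low error probability and large ones get high probability, which is the shape of relationship the chapter's cross-validation then tests on an independent sample.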
Scaling of Directed Dynamical Small-World Networks with Random Responses
NASA Astrophysics Data System (ADS)
Zhu, Chen-Ping; Xiong, Shi-Jie; Tian, Ying-Jie; Li, Nan; Jiang, Ke-Sheng
2004-05-01
A dynamical model of small-world networks, with directed links which describe various correlations in social and natural phenomena, is presented. Random responses of sites to the input message are introduced to simulate real systems. The interplay of these ingredients results in the collective dynamical evolution of a spinlike variable S(t) of the whole network. The global average spreading length
Vocal exploration is locally regulated during song learning
Ravbar, Primoz; Parra, Lucas C.; Lipkind, Dina; Tchernichovski, Ofer
2012-01-01
Exploratory variability is essential for sensory-motor learning, but it is not known how and at what time scales it is regulated. We manipulated song learning in zebra finches to experimentally control the requirements for vocal exploration in different parts of their song. We first trained birds to perform a one-syllable song, and once they mastered it we added a new syllable to the song model. Remarkably, when practicing the modified song, birds rapidly alternated between high and low acoustic variability to confine vocal exploration to the newly added syllable. Further, even within syllables, acoustic variability changed independently across song elements that were only milliseconds apart. Analysis of the entire vocal output during learning revealed that the variability of each song element decreased as it approached the target, correlating with momentary local distance from the target and less so with the overall distance. We conclude that vocal error is computed locally in sub-syllabic time scales and that song elements can be learned and crystalized independently. Songbirds have dedicated brain circuitry for vocal babbling in the anterior forebrain pathway (AFP), which generates exploratory song patterns that drive premotor neurons at the song nucleus RA (robust nucleus of the arcopallium). We hypothesize that either AFP adjusts the gain of vocal exploration in fine time scales, or that the sensitivity of RA premotor neurons to AFP/HVC inputs varies across song elements. PMID:22399765
Biostatistics Series Module 6: Correlation and Linear Regression.
Hazra, Avijit; Gogtay, Nithya
2016-01-01
Correlation and linear regression are the most commonly used techniques for quantifying the association between two numeric variables. Correlation quantifies the strength of the linear relationship between paired variables, expressing this as a correlation coefficient. If both variables x and y are normally distributed, we calculate Pearson's correlation coefficient ( r ). If normality assumption is not met for one or both variables in a correlation analysis, a rank correlation coefficient, such as Spearman's rho (ρ) may be calculated. A hypothesis test of correlation tests whether the linear relationship between the two variables holds in the underlying population, in which case it returns a P < 0.05. A 95% confidence interval of the correlation coefficient can also be calculated for an idea of the correlation in the population. The value r 2 denotes the proportion of the variability of the dependent variable y that can be attributed to its linear relation with the independent variable x and is called the coefficient of determination. Linear regression is a technique that attempts to link two correlated variables x and y in the form of a mathematical equation ( y = a + bx ), such that given the value of one variable the other may be predicted. In general, the method of least squares is applied to obtain the equation of the regression line. Correlation and linear regression analysis are based on certain assumptions pertaining to the data sets. If these assumptions are not met, misleading conclusions may be drawn. The first assumption is that of linear relationship between the two variables. A scatter plot is essential before embarking on any correlation-regression analysis to show that this is indeed the case. Outliers or clustering within data sets can distort the correlation coefficient value. Finally, it is vital to remember that though strong correlation can be a pointer toward causation, the two are not synonymous.
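The quantities described here — Pearson's r, the least-squares line y = a + bx, and the coefficient of determination r² — can be computed directly from paired data; the data below lie on an exact line for clarity:

```python
# Pearson's r and the least-squares regression line y = a + b*x.
def fit(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    b = sxy / sxx                  # least-squares slope
    a = my - b * mx                # intercept
    r = sxy / (sxx * syy) ** 0.5   # Pearson correlation coefficient
    return a, b, r

a, b, r = fit([1, 2, 3, 4], [3, 5, 7, 9])   # data on the line y = 1 + 2x
r2 = r * r                                   # coefficient of determination
```

On real data a scatter plot should precede this computation, as the text stresses: the formulas are only meaningful for a linear relationship.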
Multifunction Imaging and Spectroscopic Instrument
NASA Technical Reports Server (NTRS)
Mouroulis, Pantazis
2004-01-01
A proposed optoelectronic instrument would perform several different spectroscopic and imaging functions that, heretofore, have been performed by separate instruments. The functions would be reflectance, fluorescence, and Raman spectroscopies; variable-color confocal imaging at two different resolutions; and wide-field color imaging. The instrument was conceived for use in examination of minerals on remote planets. It could also be used on Earth to characterize material specimens. The conceptual design of the instrument emphasizes compactness and economy, to be achieved largely through sharing of components among subsystems that perform different imaging and spectrometric functions. The input optics for the various functions would be mounted in a single optical head. With the exception of a targeting lens, the input optics would all be aimed at the same spot on a specimen, thereby both (1) eliminating the need to reposition the specimen to perform different imaging and/or spectroscopic observations and (2) ensuring that data from such observations can be correlated with respect to known positions on the specimen. The figure schematically depicts the principal components and subsystems of the instrument. The targeting lens would collect light into a multimode optical fiber, which would guide the light through a fiber-selection switch to a reflection/ fluorescence spectrometer. The switch would have four positions, enabling selection of spectrometer input from the targeting lens, from either of one or two multimode optical fibers coming from a reflectance/fluorescence- microspectrometer optical head, or from a dark calibration position (no fiber). The switch would be the only moving part within the instrument.
Propagating waves can explain irregular neural dynamics.
Keane, Adam; Gong, Pulin
2015-01-28
Cortical neurons in vivo fire quite irregularly. Previous studies about the origin of such irregular neural dynamics have given rise to two major models: a balanced excitation and inhibition model, and a model of highly synchronized synaptic inputs. To elucidate the network mechanisms underlying synchronized synaptic inputs and account for irregular neural dynamics, we investigate a spatially extended, conductance-based spiking neural network model. We show that propagating wave patterns with complex dynamics emerge from the network model. These waves sweep past neurons, to which they provide highly synchronized synaptic inputs. On the other hand, these patterns only emerge from the network with balanced excitation and inhibition; our model therefore reconciles the two major models of irregular neural dynamics. We further demonstrate that the collective dynamics of propagating wave patterns provides a mechanistic explanation for a range of irregular neural dynamics, including the variability of spike timing, slow firing rate fluctuations, and correlated membrane potential fluctuations. In addition, in our model, the distributions of synaptic conductance and membrane potential are non-Gaussian, consistent with recent experimental data obtained using whole-cell recordings. Our work therefore relates the propagating waves that have been widely observed in the brain to irregular neural dynamics. These results demonstrate that neural firing activity, although appearing highly disordered at the single-neuron level, can form dynamical coherent structures, such as propagating waves at the population level.
NASA Astrophysics Data System (ADS)
Dai, Heng; Chen, Xingyuan; Ye, Ming; Song, Xuehang; Zachara, John M.
2017-05-01
Sensitivity analysis is an important tool for development and improvement of mathematical models, especially for complex systems with a high dimension of spatially correlated parameters. Variance-based global sensitivity analysis has gained popularity because it can quantify the relative contribution of uncertainty from different sources. However, its computational cost increases dramatically with the complexity of the considered model and the dimension of model parameters. In this study, we developed a new sensitivity analysis method that integrates the concept of variance-based method with a hierarchical uncertainty quantification framework. Different uncertain inputs are grouped and organized into a multilayer framework based on their characteristics and dependency relationships to reduce the dimensionality of the sensitivity analysis. A set of new sensitivity indices are defined for the grouped inputs using the variance decomposition method. Using this methodology, we identified the most important uncertainty source for a dynamic groundwater flow and solute transport model at the Department of Energy (DOE) Hanford site. The results indicate that boundary conditions and permeability field contribute the most uncertainty to the simulated head field and tracer plume, respectively. The relative contribution from each source varied spatially and temporally. By using a geostatistical approach to reduce the number of realizations needed for the sensitivity analysis, the computational cost of implementing the developed method was reduced to a practically manageable level. The developed sensitivity analysis method is generally applicable to a wide range of hydrologic and environmental problems that deal with high-dimensional spatially distributed input variables.
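A variance-based first-order index for a single (or grouped) input can be estimated with a pick-freeze Monte Carlo scheme; the two-input linear model below is a deliberately simple stand-in for the groundwater model, chosen so the exact indices (0.2 and 0.8) are known:

```python
# First-order Sobol index via the pick-freeze Monte Carlo estimator:
# S_i = Var(E[Y | X_i]) / Var(Y), for the toy model y = x1 + 2*x2 with
# independent standard-normal inputs (exact: S1 = 0.2, S2 = 0.8).
import random
random.seed(0)

def model(x1, x2):
    return x1 + 2.0 * x2

N = 100000
A = [(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(N)]
B = [(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(N)]

yA = [model(*p) for p in A]
mean = sum(yA) / N
var = sum((y - mean) ** 2 for y in yA) / N

def first_order(i):
    # "freeze" coordinate i from sample A, redraw the other from B
    cov = 0.0
    for (a1, a2), (b1, b2) in zip(A, B):
        y1 = model(a1, a2)
        y2 = model(a1, b2) if i == 0 else model(b1, a2)
        cov += (y1 - mean) * (y2 - mean)
    return cov / N / var

s1, s2 = first_order(0), first_order(1)
```

Grouping inputs, as in the hierarchical framework above, amounts to freezing a whole block of coordinates at once rather than a single one, which is what keeps the number of required model runs manageable.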
NASA Astrophysics Data System (ADS)
Dong, Leng; Chen, Yan; Dias, Sarah; Stone, William; Dias, Joseph; Rout, John; Gale, Alastair G.
2017-03-01
Visual search techniques and FROC analysis have been widely used in radiology to understand medical image perceptual behaviour and diagnostic performance. The potential of exploiting the advantages of both methodologies is of great interest to medical researchers. In this study, eye tracking data from eight dental practitioners were investigated, together with the associated visual search measures and their analyses. Each participant interpreted 20 dental radiographs chosen by an expert dental radiologist. Various eye movement measurements were obtained based on image area of interest (AOI) information. FROC analysis was then carried out using these eye movement measurements as a direct input source, and the performance of FROC methods using different input parameters was tested. The results showed significant differences in FROC measures, based on eye movement data, between groups with different experience levels: the area under the curve (AUC) score was higher for the experienced group for the fixation and dwell-time measurements. Positive correlations were also found between AUC scores from the eye-movement-based FROC and the rating-based FROC. FROC analysis using eye movement measurements as input variables can therefore act as a performance indicator for assessment in medical image interpretation and for evaluating training procedures. Such visual search data analyses lead to new ways of combining eye movement data and FROC methods, providing an alternative dimension for assessing performance and visual search behaviour in medical imaging perceptual tasks.
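A minimal illustration of using an eye movement measure as the rating input to an ROC-type analysis is the Wilcoxon/Mann-Whitney AUC estimate. The dwell times below are invented, and full FROC scoring over multiple AOIs per image is not reproduced:

```python
import numpy as np

def auc(pos_scores, neg_scores):
    """Wilcoxon/Mann-Whitney estimate of the area under the ROC curve:
    the fraction of (positive, negative) pairs ranked correctly."""
    pos = np.asarray(pos_scores, float)
    neg = np.asarray(neg_scores, float)
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (pos.size * neg.size)

# Hypothetical dwell times (ms) inside lesion-containing vs normal AOIs
# for one reader; dwell time serves as the rating surrogate.
dwell_lesion = [820, 640, 910, 540, 760, 880]
dwell_normal = [310, 450, 280, 520, 390, 360, 610, 240]

print(f"AUC from dwell time as rating: {auc(dwell_lesion, dwell_normal):.2f}")
```

An AUC near 1 would indicate that the reader's dwell time cleanly separates abnormal from normal regions, which is the sense in which the eye movement measure can act as a performance indicator.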
NASA Astrophysics Data System (ADS)
Dai, H.; Chen, X.; Ye, M.; Song, X.; Zachara, J. M.
2017-12-01
Sensitivity analysis is an important tool for development and improvement of mathematical models, especially for complex systems with a high dimension of spatially correlated parameters. Variance-based global sensitivity analysis has gained popularity because it can quantify the relative contribution of uncertainty from different sources. However, its computational cost increases dramatically with the complexity of the considered model and the dimension of model parameters. In this study we developed a new sensitivity analysis method that integrates the concept of variance-based method with a hierarchical uncertainty quantification framework. Different uncertain inputs are grouped and organized into a multi-layer framework based on their characteristics and dependency relationships to reduce the dimensionality of the sensitivity analysis. A set of new sensitivity indices are defined for the grouped inputs using the variance decomposition method. Using this methodology, we identified the most important uncertainty source for a dynamic groundwater flow and solute transport model at the Department of Energy (DOE) Hanford site. The results indicate that boundary conditions and permeability field contribute the most uncertainty to the simulated head field and tracer plume, respectively. The relative contribution from each source varied spatially and temporally. By using a geostatistical approach to reduce the number of realizations needed for the sensitivity analysis, the computational cost of implementing the developed method was reduced to a practically manageable level. The developed sensitivity analysis method is generally applicable to a wide range of hydrologic and environmental problems that deal with high-dimensional spatially-distributed input variables.
Stochastic empirical loading and dilution model (SELDM) version 1.0.0
Granato, Gregory E.
2013-01-01
The Stochastic Empirical Loading and Dilution Model (SELDM) is designed to transform complex scientific data into meaningful information about the risk of adverse effects of runoff on receiving waters, the potential need for mitigation measures, and the potential effectiveness of such management measures for reducing these risks. The U.S. Geological Survey developed SELDM in cooperation with the Federal Highway Administration to help develop planning-level estimates of event mean concentrations, flows, and loads in stormwater from a site of interest and from an upstream basin. Planning-level estimates are defined as the results of analyses used to evaluate alternative management measures; planning-level estimates are recognized to include substantial uncertainties (commonly orders of magnitude). SELDM uses information about a highway site, the associated receiving-water basin, precipitation events, stormflow, water quality, and the performance of mitigation measures to produce a stochastic population of runoff-quality variables. SELDM provides input statistics for precipitation, prestorm flow, runoff coefficients, and concentrations of selected water-quality constituents from national datasets. Input statistics may be selected on the basis of the latitude, longitude, and physical characteristics of the site of interest and the upstream basin. The user also may derive and input statistics for each variable that are specific to a given site of interest or a given area. SELDM is a stochastic model because it uses Monte Carlo methods to produce the random combinations of input-variable values needed to generate the stochastic population of values for each component variable. SELDM calculates the dilution of runoff in the receiving waters and the resulting downstream event mean concentrations and annual average lake concentrations.
Results are ranked, and plotting positions are calculated, to indicate the level of risk of adverse effects caused by runoff concentrations, flows, and loads on receiving waters by storm and by year. Unlike deterministic hydrologic models, SELDM is not calibrated by changing values of input variables to match a historical record of values. Instead, input values for SELDM are based on site characteristics and representative statistics for each hydrologic variable. Thus, SELDM is an empirical model based on data and statistics rather than theoretical physiochemical equations. SELDM is a lumped parameter model because the highway site, the upstream basin, and the lake basin each are represented as a single homogeneous unit. Each of these source areas is represented by average basin properties, and results from SELDM are calculated as point estimates for the site of interest. Use of the lumped parameter approach facilitates rapid specification of model parameters to develop planning-level estimates with available data. The approach allows for parsimony in the required inputs to and outputs from the model and flexibility in the use of the model. For example, SELDM can be used to model runoff from various land covers or land uses by using the highway-site definition as long as representative water quality and impervious-fraction data are available.
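The core stochastic dilution step can be sketched in a few lines. The lognormal event statistics below are hypothetical placeholders, not SELDM's national datasets, and the criterion is invented:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000  # simulated storm events

# Hypothetical lognormal event statistics (placeholders, not SELDM data).
q_run = rng.lognormal(mean=0.0, sigma=0.6, size=n)   # highway runoff flow
c_run = rng.lognormal(mean=3.0, sigma=0.8, size=n)   # runoff EMC, ug/L
q_up  = rng.lognormal(mean=1.5, sigma=0.5, size=n)   # upstream flow
c_up  = rng.lognormal(mean=1.0, sigma=0.4, size=n)   # upstream EMC, ug/L

# Mass-balance dilution gives the downstream event mean concentration.
c_down = (c_run * q_run + c_up * q_up) / (q_run + q_up)

# Rank results and assign plotting positions to express risk of exceedance.
ranked = np.sort(c_down)[::-1]
exceed_prob = np.arange(1, n + 1) / (n + 1)
criterion = 30.0  # invented water-quality criterion, ug/L
risk = (c_down > criterion).mean()
print(f"fraction of storms exceeding {criterion} ug/L: {risk:.3f}")
```

Because each downstream value is a flow-weighted average, it always lies between the runoff and upstream concentrations for that storm; the ranked population, not any single storm, is the planning-level product.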
Correlated uncertainties in Monte Carlo reaction rate calculations
NASA Astrophysics Data System (ADS)
Longland, Richard
2017-07-01
Context. Monte Carlo methods have enabled nuclear reaction rates from uncertain inputs to be presented in a statistically meaningful manner. However, these uncertainties are currently computed assuming no correlations between the physical quantities that enter the calculations. This is not always an appropriate assumption: astrophysically important reactions are often dominated by resonances whose properties are normalized to a well-known reference resonance. This insight provides a basis from which to develop a flexible framework for including correlations in Monte Carlo reaction rate calculations. Aims: The aim of this work is to develop and test a method for including correlations in Monte Carlo reaction rate calculations when the input has been normalized to a common reference. Methods: A mathematical framework is developed for including correlations between input parameters in Monte Carlo reaction rate calculations. The magnitude of those correlations is calculated from the uncertainties typically reported in experimental papers, where full correlation information is not available. The method is applied to four illustrative examples: a fictional 3-resonance reaction, 27Al(p, γ)28Si, 23Na(p, α)20Ne, and 23Na(α, p)26Mg. Results: Reaction rates at low temperatures that are dominated by a few isolated resonances are found to be minimally impacted by correlation effects. However, reaction rates determined from many overlapping resonances can be significantly affected; uncertainties in the 23Na(α, p)26Mg reaction, for example, increase by up to a factor of 5. This highlights the need to take correlation effects into account in reaction rate calculations, and provides insight into which cases are expected to be most affected by them. The impact of correlation effects on nucleosynthesis is also investigated.
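One way to realize such correlations is to give every lognormally distributed resonance strength a component drawn from a shared reference factor. This is a minimal numpy sketch under invented strengths, uncertainties, and a single correlation parameter, with the "rate" reduced to a sum of strengths:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 50_000  # Monte Carlo samples

# Hypothetical resonance strengths (arb. units) and lognormal spreads.
mu = np.log(np.array([1.0, 3.0, 0.5]))
sigma = np.array([0.10, 0.15, 0.20])
rho = 0.7  # correlation induced by the shared reference-resonance normalization

# Correlated sampling: a common reference factor plus independent parts.
z_ref = rng.normal(size=n)
z_ind = rng.normal(size=(n, mu.size))
strengths = np.exp(mu + sigma * (rho * z_ref[:, None]
                                 + np.sqrt(1.0 - rho**2) * z_ind))
rate = strengths.sum(axis=1)  # toy "reaction rate": sum of strengths

# Fully uncorrelated calculation for comparison.
strengths_u = np.exp(mu + sigma * rng.normal(size=(n, mu.size)))
rate_u = strengths_u.sum(axis=1)
print(f"rate spread, correlated / uncorrelated: {rate.std() / rate_u.std():.2f}")
```

With many contributing resonances the positive cross terms add up, so the correlated uncertainty band is wider than the uncorrelated one, mirroring the paper's finding for rates built from many overlapping resonances.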
Mutual information against correlations in binary communication channels.
Pregowska, Agnieszka; Szczepanski, Janusz; Wajnryb, Eligiusz
2015-05-19
Explaining how brain processing can be so fast remains an open problem (van Hemmen JL, Sejnowski T., 2004). Thus, the analysis of neural transmission (Shannon CE, Weaver W., 1963) focuses largely on the search for effective encoding and decoding schemes. According to Shannon's fundamental theorem, mutual information plays a crucial role in characterizing the efficiency of communication channels. It is well known that this efficiency is determined by the channel capacity, which is the maximal mutual information between input and output signals. On the other hand, intuitively speaking, when input and output signals are more correlated, the transmission should be more efficient. A natural question thus arises about the relation between mutual information and correlation. We analyze this relation using the binary representation of signals, the most common approach taken in studying the neuronal processes of the brain. We present binary communication channels for which mutual information and correlation coefficients behave differently, both quantitatively and qualitatively. Despite this difference in behavior, we show that noncorrelation of binary signals implies their independence, in contrast to the case for general types of signals. Our research shows that mutual information cannot be replaced by sheer correlations: neuronal encoding has a more complicated nature that cannot be captured by straightforward correlations between input and output signals, because mutual information takes into account the structure and patterns of the signals.
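Both quantities can be computed directly from the 2x2 joint distribution of a binary channel. The two channels below are invented examples chosen to show the divergence: the skewed channel has the higher correlation coefficient yet carries less information:

```python
import numpy as np

def mi_and_corr(p):
    """Mutual information (bits) and Pearson correlation for a binary
    joint distribution p[x, y]."""
    px, py = p.sum(axis=1), p.sum(axis=0)
    mi = sum(p[x, y] * np.log2(p[x, y] / (px[x] * py[y]))
             for x in (0, 1) for y in (0, 1) if p[x, y] > 0)
    cov = p[1, 1] - px[1] * py[1]
    corr = cov / np.sqrt(px[1] * px[0] * py[1] * py[0])
    return mi, corr

# A symmetric channel vs a heavily skewed one: higher correlation
# does not mean more transmitted information.
p_sym  = np.array([[0.45, 0.05],
                   [0.05, 0.45]])
p_skew = np.array([[0.89, 0.01],
                   [0.01, 0.09]])
mi_s, r_s = mi_and_corr(p_sym)
mi_k, r_k = mi_and_corr(p_skew)
print(f"symmetric: I = {mi_s:.2f} bits, rho = {r_s:.2f}")
print(f"skewed:    I = {mi_k:.2f} bits, rho = {r_k:.2f}")

# For binary signals, zero correlation does imply independence:
p_ind = np.outer([0.3, 0.7], [0.6, 0.4])
mi_i, r_i = mi_and_corr(p_ind)
```

The skewed channel's low marginal entropy caps its mutual information even though its correlation coefficient is larger, while the product-form distribution gives exactly zero for both quantities, illustrating the binary-signal equivalence of noncorrelation and independence.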
Tang, Yi; Sorenson, Jeff; Lanspa, Michael; Grissom, Colin K; Mathews, V J; Brown, Samuel M
2017-06-17
Severe sepsis and septic shock are often lethal syndromes, in which the autonomic nervous system may fail to maintain adequate blood pressure. Heart rate variability has been associated with outcomes in sepsis. Whether systolic blood pressure (SBP) variability is associated with clinical outcomes in septic patients is unknown. The purpose of this study is to determine whether variability in SBP correlates with vasopressor independence and mortality among septic patients. We prospectively studied patients with severe sepsis or septic shock, admitted to an intensive care unit (ICU) with an arterial catheter. We analyzed SBP variability in the first 5-min window immediately following ICU admission. We performed principal component analysis of multidimensional complexity and used the first principal component (PC1) as input for Firth logistic regression, controlling for mean SBP in the primary analyses, and for Acute Physiology and Chronic Health Evaluation (APACHE) II score or NEE dose in the ancillary analyses. Prespecified outcomes were vasopressor independence at 24 h (primary) and 28-day mortality (secondary). We studied 51 patients, 51% of whom achieved vasopressor independence at 24 h. Ten percent died at 28 days. PC1 represented 26% of the variance in complexity measures. PC1 was not associated with vasopressor independence on Firth logistic regression (OR 1.04; 95% CI: 0.93-1.16; p = 0.54), but was associated with 28-day mortality (OR 1.16, 95% CI: 1.01-1.35, p = 0.040). Early SBP variability appears to be associated with 28-day mortality in patients with severe sepsis and septic shock.
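The analysis pipeline (PCA on the complexity measures, then logistic regression of outcome on PC1) can be sketched with numpy on simulated data. Ordinary Newton-iterated logistic regression stands in for Firth's bias-reduced variant, and all numbers are synthetic, not the study's:

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulated per-patient complexity measures (51 patients x 6 measures).
n, d = 51, 6
X = rng.normal(size=(n, d)) @ rng.normal(size=(d, d))   # correlated features
y = (rng.random(n) <
     1 / (1 + np.exp(-0.8 * (X[:, 0] - X[:, 0].mean())))).astype(float)

# PCA: scores on the leading principal component.
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
pc1 = Xc @ Vt[0]
var_explained = s[0] ** 2 / (s ** 2).sum()

# Logistic regression of outcome on PC1 (Newton steps with a tiny ridge
# for numerical safety; the study used Firth's penalized variant instead).
Z = np.column_stack([np.ones(n), pc1])
w = np.zeros(2)
for _ in range(25):
    p = 1 / (1 + np.exp(-np.clip(Z @ w, -30, 30)))
    W = np.clip(p * (1 - p), 1e-6, None)
    w += np.linalg.solve((Z.T * W) @ Z + 1e-6 * np.eye(2), Z.T @ (y - p))
odds_ratio = np.exp(w[1])
print(f"PC1 explains {var_explained:.0%} of variance; "
      f"OR per unit PC1 = {odds_ratio:.2f}")
```

Firth's penalty matters precisely in small samples like n = 51, where ordinary maximum likelihood can be biased or fail under separation; the sketch above shows only the structure of the pipeline.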
Mixed Signal Learning by Spike Correlation Propagation in Feedback Inhibitory Circuits
Hiratani, Naoki; Fukai, Tomoki
2015-01-01
The brain can learn and detect mixed input signals masked by various types of noise, and spike-timing-dependent plasticity (STDP) is the candidate synaptic level mechanism. Because sensory inputs typically have spike correlation, and local circuits have dense feedback connections, input spikes cause the propagation of spike correlation in lateral circuits; however, it is largely unknown how this secondary correlation generated by lateral circuits influences learning processes through STDP, or whether it is beneficial to achieve efficient spike-based learning from uncertain stimuli. To explore the answers to these questions, we construct models of feedforward networks with lateral inhibitory circuits and study how propagated correlation influences STDP learning, and what kind of learning algorithm such circuits achieve. We derive analytical conditions at which neurons detect minor signals with STDP, and show that depending on the origin of the noise, different correlation timescales are useful for learning. In particular, we show that non-precise spike correlation is beneficial for learning in the presence of cross-talk noise. We also show that by considering excitatory and inhibitory STDP at lateral connections, the circuit can acquire a lateral structure optimal for signal detection. In addition, we demonstrate that the model performs blind source separation in a manner similar to the sequential sampling approximation of the Bayesian independent component analysis algorithm. Our results provide a basic understanding of STDP learning in feedback circuits by integrating analyses from both dynamical systems and information theory. PMID:25910189
Evaluating variable rate fungicide applications for control of Sclerotinia
USDA-ARS?s Scientific Manuscript database
Oklahoma peanut growers continue to try to increase yields and reduce input costs. Perhaps the largest input in a peanut crop is fungicide applications. This is especially true for areas in the state that have high disease pressure from Sclerotinia. On average, a single fungicide application cost...
Human encroachment on the coastal zone has led to a rise in the delivery of nitrogen (N) to estuarine and near-shore waters. Potential routes of anthropogenic N inputs include export from estuaries, atmospheric deposition, and dissolved N inputs from groundwater outflow. Stable...
Learning a Novel Pattern through Balanced and Skewed Input
ERIC Educational Resources Information Center
McDonough, Kim; Trofimovich, Pavel
2013-01-01
This study compared the effectiveness of balanced and skewed input at facilitating the acquisition of the transitive construction in Esperanto, characterized by the accusative suffix "-n" and variable word order (SVO, OVS). Thai university students (N = 98) listened to 24 sentences under skewed (one noun with high token frequency) or…
Code of Federal Regulations, 2014 CFR
2014-10-01
... must be considered as essential variables: Number of passes; thickness of plate; heat input per pass... not be used. The number of passes, thickness of plate, and heat input per pass may not vary more than... machine heat processes, provided such surfaces are remelted in the subsequent welding process. Where there...
Code of Federal Regulations, 2013 CFR
2013-10-01
... must be considered as essential variables: Number of passes; thickness of plate; heat input per pass... not be used. The number of passes, thickness of plate, and heat input per pass may not vary more than... machine heat processes, provided such surfaces are remelted in the subsequent welding process. Where there...
Code of Federal Regulations, 2012 CFR
2012-10-01
... must be considered as essential variables: Number of passes; thickness of plate; heat input per pass... not be used. The number of passes, thickness of plate, and heat input per pass may not vary more than... machine heat processes, provided such surfaces are remelted in the subsequent welding process. Where there...
Code of Federal Regulations, 2011 CFR
2011-10-01
... must be considered as essential variables: Number of passes; thickness of plate; heat input per pass... not be used. The number of passes, thickness of plate, and heat input per pass may not vary more than... machine heat processes, provided such surfaces are remelted in the subsequent welding process. Where there...
Šiljić Tomić, Aleksandra; Antanasijević, Davor; Ristić, Mirjana; Perić-Grujić, Aleksandra; Pocajt, Viktor
2018-01-01
Accurate prediction of water quality parameters (WQPs) is an important task in the management of water resources. Artificial neural networks (ANNs) are frequently applied for dissolved oxygen (DO) prediction, but often only their interpolation performance is checked. The aims of this research, besides interpolation, were to determine the extrapolation performance of an ANN model developed for the prediction of DO content in the Danube River, and to assess the relationship between the significance of the inputs and the prediction error in the presence of values outside the training range. The applied ANN is a polynomial neural network (PNN), which performs embedded selection of the most important inputs during learning and provides a model in the form of linear and non-linear polynomial functions that can then be used for a detailed analysis of the significance of the inputs. The available dataset of 1912 monitoring records for 17 water quality parameters was split into a "regular" subset containing normally distributed, low-variability data and an "extreme" subset containing monitoring records with outlier values. The results revealed that the non-linear PNN model has good interpolation performance (R² = 0.82) but is not robust in extrapolation (R² = 0.63). The analysis of the extrapolation results showed that the prediction errors are correlated with the significance of the inputs: out-of-training-range values of inputs with low importance do not significantly affect the PNN model performance, but their influence can be biased by the presence of multi-outlier monitoring records. Subsequently, linear PNN models were successfully applied to study the effect of water quality parameters on DO content.
It was observed that DO level is mostly affected by temperature, pH, biological oxygen demand (BOD) and phosphorus concentration, while in extreme conditions the importance of alkalinity and bicarbonates rises over pH and BOD. Copyright © 2017 Elsevier B.V. All rights reserved.
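The interpolation/extrapolation gap can be demonstrated with a toy polynomial model fitted on a "regular" range and scored outside it. The DO-vs-temperature relation below is invented (it changes regime beyond the training range) and stands in for the Danube data, which we do not have:

```python
import numpy as np

rng = np.random.default_rng(4)

def r2(y, yhat):
    """Coefficient of determination."""
    return 1 - np.sum((y - yhat) ** 2) / np.sum((y - y.mean()) ** 2)

# Hypothetical DO (mg/L) vs temperature (degC): the relation steepens
# beyond 25 degC, outside the "regular" training range.
def do_true(t):
    return np.where(t <= 25, 14.6 - 0.4 * t, 4.6 - 0.9 * (t - 25))

t_train = rng.uniform(5, 25, 300)                    # "regular" subset range
y_train = do_true(t_train) + rng.normal(0, 0.3, 300)
coef = np.polyfit(t_train, y_train, deg=2)           # polynomial model

t_in, t_out = rng.uniform(5, 25, 200), rng.uniform(25, 35, 200)
r2_in = r2(do_true(t_in) + rng.normal(0, 0.3, 200), np.polyval(coef, t_in))
r2_out = r2(do_true(t_out) + rng.normal(0, 0.3, 200), np.polyval(coef, t_out))
print(f"R^2 interpolation: {r2_in:.2f}, extrapolation: {r2_out:.2f}")
```

A polynomial fit that looks excellent in-range can fail badly the moment the governing regime shifts outside the training data, which is the qualitative behaviour the PNN study reports.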
Mu, Zhijian; Huang, Aiying; Ni, Jiupai; Xie, Deti
2014-01-01
Organic soils are an important source of N2O, but global estimates of these fluxes remain uncertain because measurements are sparse. We tested the hypothesis that N2O fluxes can be predicted from estimates of mineral nitrogen input, calculated from readily-available measurements of CO2 flux and soil C/N ratio. From studies of organic soils throughout the world, we compiled a data set of annual CO2 and N2O fluxes which were measured concurrently. The input of soil mineral nitrogen in these studies was estimated from applied fertilizer nitrogen and organic nitrogen mineralization. The latter was calculated by dividing the rate of soil heterotrophic respiration by soil C/N ratio. This index of mineral nitrogen input explained up to 69% of the overall variability of N2O fluxes, whereas CO2 flux or soil C/N ratio alone explained only 49% and 36% of the variability, respectively. Including water table level in the model, along with mineral nitrogen input, further improved the model with the explanatory proportion of variability in N2O flux increasing to 75%. Unlike grassland or cropland soils, forest soils were evidently nitrogen-limited, so water table level had no significant effect on N2O flux. Our proposed approach, which uses the product of soil-derived CO2 flux and the inverse of soil C/N ratio as a proxy for nitrogen mineralization, shows promise for estimating regional or global N2O fluxes from organic soils, although some further enhancements may be warranted.
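The paper's index, mineral N input estimated as heterotrophic CO2 flux divided by soil C/N ratio plus fertilizer N, can be sketched on synthetic data. All distributions and coefficients below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 120  # synthetic annual site-years

co2 = rng.lognormal(6.0, 0.5, n)      # heterotrophic CO2 flux (arb. units)
cn = rng.uniform(12, 30, n)           # soil C/N ratio
fert = rng.exponential(15, n)         # fertilizer N input (arb. units)
n_min = co2 / cn + fert               # the paper's mineral-N input index
n2o = 0.02 * n_min * rng.lognormal(0, 0.3, n)   # synthetic N2O flux

def r2(x, y):
    """Squared Pearson correlation as a single-predictor R^2."""
    return np.corrcoef(x, y)[0, 1] ** 2

print(f"R^2 of N2O vs mineral-N proxy: {r2(n_min, n2o):.2f}")
print(f"R^2 of N2O vs CO2 flux alone:  {r2(co2, n2o):.2f}")
```

Because the synthetic N2O flux is generated from the combined index, the proxy explains more variability than CO2 flux alone, which is the pattern (69% vs 49%) the compiled field data showed.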
Code of Federal Regulations, 2010 CFR
2010-07-01
..., Demand Side Variability, and Network Variability studies, including input data, processing programs, and... should include the product or product groups carried under each listed contract; (k) Spreadsheets and...
Confidence intervals in Flow Forecasting by using artificial neural networks
NASA Astrophysics Data System (ADS)
Panagoulia, Dionysia; Tsekouras, George
2014-05-01
One of the major inadequacies in the implementation of Artificial Neural Networks (ANNs) for flow forecasting is the development of confidence intervals, because the relevant estimation cannot be implemented directly, in contrast to classical forecasting methods. The variation in the ANN output is a measure of uncertainty in the model predictions based on the training data set. Different methods for uncertainty analysis, such as bootstrap, Bayesian, and Monte Carlo, have already been proposed for hydrologic and geophysical models, while methods for confidence intervals, such as error output, re-sampling, and multi-linear regression adapted to ANNs, have been used for power load forecasting [1-2]. The aim of this paper is to present the re-sampling method for ANN prediction models and to develop it for next-day flow forecasting. The re-sampling method is based on ascending sorting of the errors between real and predicted values for all input vectors. The cumulative sample distribution function of the prediction errors is calculated, and the confidence intervals are estimated by keeping the intermediate values, rejecting the extreme values according to the desired confidence levels, and holding the intervals symmetrical in probability. For the application of the confidence intervals, input vectors are used from the Mesochora catchment in western-central Greece. The ANN's training algorithm is the stochastic back-propagation process with decreasing functions of learning rate and momentum term, for which an optimization process is conducted with respect to the crucial parameter values, such as the number of neurons, the kind of activation functions, and the initial values and time parameters of the learning rate and momentum term.
Input variables are historical data of previous days, such as flows, nonlinearly weather-related temperatures and nonlinearly weather-related rainfalls, selected through correlation analysis between the flow under prediction and each candidate input variable of different ANN structures [3]. The performance of each ANN structure is evaluated by a voting analysis based on eleven criteria: the root mean square error (RMSE), the correlation index (R), the mean absolute percentage error (MAPE), the mean percentage error (MPE), the mean error (ME), the percentage volume in errors (VE), the percentage error in peak (MF), the normalized mean bias error (NMBE), the normalized root mean square error (NRMSE), the Nash-Sutcliffe model efficiency coefficient (E), and the modified Nash-Sutcliffe model efficiency coefficient (E1). The next-day flow for the test set is calculated using the best ANN structure's model. Consequently, the confidence intervals of various confidence levels for the training, evaluation and test sets are compared in order to explore how well the confidence intervals from the training and evaluation sets generalise. [1] H.S. Hippert, C.E. Pedreira, R.C. Souza, "Neural networks for short-term load forecasting: A review and evaluation," IEEE Trans. on Power Systems, vol. 16, no. 1, 2001, pp. 44-55. [2] G. J. Tsekouras, N.E. Mastorakis, F.D. Kanellos, V.T. Kontargyri, C.D. Tsirekis, I.S. Karanasiou, Ch.N. Elias, A.D. Salis, P.A. Kontaxis, A.A. Gialketsi: "Short term load forecasting in Greek interconnected power system using ANN: Confidence Interval using a novel re-sampling technique with corrective Factor", WSEAS International Conference on Circuits, Systems, Electronics, Control & Signal Processing, (CSECS '10), Vouliagmeni, Athens, Greece, December 29-31, 2010. [3] D. Panagoulia, I. Trichakis, G. J.
Tsekouras: "Flow Forecasting via Artificial Neural Networks - A Study for Input Variables conditioned on atmospheric circulation", European Geosciences Union, General Assembly 2012 (NH1.1 / AS1.16 - Extreme meteorological and hydrological events induced by severe weather and climate change), Vienna, Austria, 22-27 April 2012.
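The re-sampling step described above (sort the prediction errors, keep the intermediate values, reject the extremes symmetrically in probability) reduces to empirical error quantiles. A minimal sketch with invented flows and errors, not the Mesochora data:

```python
import numpy as np

rng = np.random.default_rng(6)

# Hypothetical observed vs ANN-predicted next-day flows (m^3/s).
observed = rng.gamma(4.0, 25.0, 500)
predicted = observed + rng.normal(0, 12, 500)     # stand-in model errors
errors = np.sort(observed - predicted)            # ascending sort of errors

def confidence_interval(forecast, errors, level=0.90):
    """Keep the intermediate errors, reject the extremes symmetrically
    in probability, and centre the band on the forecast."""
    alpha = (1 - level) / 2
    return (forecast + np.quantile(errors, alpha),
            forecast + np.quantile(errors, 1 - alpha))

lo, hi = confidence_interval(150.0, errors, level=0.90)
coverage = np.mean((150.0 + errors >= lo) & (150.0 + errors <= hi))
print(f"90% interval around a 150 m^3/s forecast: [{lo:.1f}, {hi:.1f}]")
```

Comparing coverage of such intervals on training, evaluation and test sets, as the paper proposes, then indicates how well the error distribution generalises.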
NASA Astrophysics Data System (ADS)
McEathron, K. M.; Mitchell, M. J.; Zhang, L.
2013-07-01
Grass Pond watershed is located within the southwestern Adirondack Mountain region of New York State, USA. This region receives some of the highest rates of acidic deposition in North America and is particularly sensitive to acidic inputs because many of its soils are shallow and generally base-poor. Differences in soil chemistry and tree species among seven subwatersheds were examined in relation to acid-base characteristics of the seven major streams that drain into Grass Pond. Mineral soil pH and stream water BCS (base-cation surplus) and pH exhibited a positive correlation with sugar maple basal area (p = 0.055; 0.48 and 0.39, respectively). Black cherry basal area was inversely correlated with stream water BCS, ANC (acid neutralizing capacity)c and NO3- (p = 0.23; 0.24 and 0.20, respectively). Sugar maple basal areas were positively related to watershed characteristics associated with the neutralization of atmospheric acidic inputs, while black cherry basal areas showed the opposite relationships to these same watershed characteristics. Canonical correspondence analysis indicated that black cherry had a distinctive relationship with forest floor chemistry apart from the other tree species, specifically a strong positive association with forest floor NH4, while sugar maple had a distinctive relationship with stream chemistry variables, specifically a strong positive association with stream water ANCc, BCS and pH. Our results provide evidence that sugar maple is an acid-intolerant or calciphilic tree species and also demonstrate that black cherry is likely an acid-tolerant tree species.
Amplitude and dynamics of polarization-plane signaling in the central complex of the locust brain
Bockhorst, Tobias
2015-01-01
The polarization pattern of skylight provides a compass cue that various insect species use for allocentric orientation. In the desert locust, Schistocerca gregaria, a network of neurons tuned to the electric field vector (E-vector) angle of polarized light is present in the central complex of the brain. Preferred E-vector angles vary along slices of neuropils in a compasslike fashion (polarotopy). We studied how the activity in this polarotopic population is modulated in ways suited to control compass-guided locomotion. To this end, we analyzed tuning profiles using measures of correlation between spike rate and E-vector angle and, furthermore, tested for adaptation to stationary angles. The results suggest that the polarotopy is stabilized by antagonistic integration across neurons with opponent tuning. Downstream to the input stage of the network, responses to stationary E-vector angles adapted quickly, which may correlate with a tendency to steer a steady course previously observed in tethered flying locusts. By contrast, rotating E-vectors corresponding to changes in heading direction under a natural sky elicited nonadapting responses. However, response amplitudes were particularly variable at the output stage, covarying with the level of ongoing activity. Moreover, the responses to rotating E-vector angles depended on the direction of rotation in an anticipatory manner. Our observations support a view of the central complex as a substrate of higher-stage processing that could assign contextual meaning to sensory input for motor control in goal-driven behaviors. Parallels to higher-stage processing of sensory information in vertebrates are discussed. PMID:25609107