NASA Astrophysics Data System (ADS)
Creaco, E.; Berardi, L.; Sun, Siao; Giustolisi, O.; Savic, D.
2016-04-01
The growing availability of field data, from information and communication technologies (ICTs) in "smart" urban infrastructures, allows data modeling to understand complex phenomena and to support management decisions. Among the analyzed phenomena, those related to storm water quality modeling have recently been gaining interest in the scientific literature. Nonetheless, the large amount of available data poses the problem of selecting relevant variables to describe a phenomenon and enable robust data modeling. This paper presents a procedure for the selection of relevant input variables using the multiobjective evolutionary polynomial regression (EPR-MOGA) paradigm. The procedure is based on scrutinizing the explanatory variables that appear inside the set of EPR-MOGA symbolic model expressions of increasing complexity and goodness of fit to the target output. The strategy also enables the selection to be validated by engineering judgement. In this context, the multiple case study extension of EPR-MOGA, called MCS-EPR-MOGA, is adopted. The application of the proposed procedure to modeling storm water quality parameters in two French catchments shows that it was able to significantly reduce the number of explanatory variables for successive analyses. Finally, the EPR-MOGA models obtained after the input selection are compared with those obtained by using the same technique without benefitting from input selection and with those obtained in previous works where other data-modeling techniques were used on the same data. The comparison highlights the effectiveness of both EPR-MOGA and the input selection procedure.
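As a schematic of the scrutiny step described above, the sketch below counts how often each candidate input appears across symbolic models along a Pareto front of complexity versus fit; the models, variable names, and retention threshold are hypothetical stand-ins, not the authors' implementation.

```python
from collections import Counter

# Hypothetical Pareto-front models of increasing complexity, each listed
# with the explanatory variables appearing in its symbolic expression.
# In EPR-MOGA these would come from the multiobjective search itself.
pareto_models = [
    {"complexity": 1, "fit": 0.61, "variables": {"rain_intensity"}},
    {"complexity": 2, "fit": 0.72, "variables": {"rain_intensity", "dry_days"}},
    {"complexity": 3, "fit": 0.78, "variables": {"rain_intensity", "dry_days", "runoff_vol"}},
    {"complexity": 4, "fit": 0.79, "variables": {"rain_intensity", "dry_days", "runoff_vol", "wind"}},
]

# Variables that recur across models of increasing fit are retained as
# relevant inputs for successive analyses (and checked by judgement).
counts = Counter(v for m in pareto_models for v in m["variables"])
relevant = [v for v, c in counts.most_common() if c >= len(pareto_models) // 2]
print(relevant)  # e.g. ['rain_intensity', 'dry_days', 'runoff_vol']
```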
Multiple-input multiple-output causal strategies for gene selection.
Bontempi, Gianluca; Haibe-Kains, Benjamin; Desmedt, Christine; Sotiriou, Christos; Quackenbush, John
2011-11-25
Traditional strategies for selecting variables in high dimensional classification problems aim to find sets of maximally relevant variables able to explain the target variations. While these techniques may be effective in terms of generalization accuracy, they often do not reveal direct causes. The latter is essentially related to the fact that high correlation (or relevance) does not imply causation. In this study, we show how to efficiently incorporate causal information into gene selection by moving from a single-input single-output to a multiple-input multiple-output setting. We show in a synthetic case study that a better prioritization of causal variables can be obtained by considering a relevance score which incorporates a causal term. In addition we show, in a meta-analysis study of six publicly available breast cancer microarray datasets, that the improvement also occurs in terms of accuracy. The biological interpretation of the results confirms the potential of a causal approach to gene selection. Integrating causal information into gene selection algorithms is effective both in terms of prediction accuracy and biological interpretation.
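A loose sketch of the idea of augmenting relevance with a causal term follows; the linear partialling-out proxy and the weight `lam` are illustrative assumptions, not the scoring function used in the study.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 200, 10
X = rng.normal(size=(n, p))
y = X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=n)  # genes 0, 1 causal

def relevance(xi, y):
    """Plain relevance: squared correlation with the target."""
    return np.corrcoef(xi, y)[0, 1] ** 2

def causal_term(i, X, y):
    """Crude causal proxy: how relevant gene i remains after each other
    gene is partialled out (direct causes survive conditioning)."""
    scores = []
    for j in range(X.shape[1]):
        if j == i:
            continue
        beta = np.polyfit(X[:, j], X[:, i], 1)
        resid = X[:, i] - np.polyval(beta, X[:, j])
        scores.append(relevance(resid, y))
    return min(scores)

lam = 1.0  # illustrative trade-off between relevance and causal evidence
score = [relevance(X[:, i], y) + lam * causal_term(i, X, y) for i in range(p)]
print(np.argsort(score)[::-1][:3])  # causal genes should rank first
```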
Sensitivity and uncertainty of input sensor accuracy for grass-based reference evapotranspiration
USDA-ARS's Scientific Manuscript database
Quantification of evapotranspiration (ET) in agricultural environments is becoming of increasing importance throughout the world, thus understanding input variability of relevant sensors is of paramount importance as well. The Colorado Agricultural and Meteorological Network (CoAgMet) and the Florid...
Spike Triggered Covariance in Strongly Correlated Gaussian Stimuli
Aljadeff, Johnatan; Segev, Ronen; Berry, Michael J.; Sharpee, Tatyana O.
2013-01-01
Many biological systems perform computations on inputs that have very large dimensionality. Determining the relevant input combinations for a particular computation is often key to understanding its function. A common way to find the relevant input dimensions is to examine the difference in variance between the input distribution and the distribution of inputs associated with certain outputs. In systems neuroscience, the corresponding method is known as spike-triggered covariance (STC). This method has been highly successful in characterizing relevant input dimensions for neurons in a variety of sensory systems. So far, most studies used the STC method with weakly correlated Gaussian inputs. However, it is also important to use this method with inputs that have long range correlations typical of the natural sensory environment. In such cases, the stimulus covariance matrix has one (or more) outstanding eigenvalues that cannot be easily equalized because of sampling variability. Such outstanding modes interfere with analyses of statistical significance of candidate input dimensions that modulate neuronal outputs. In many cases, these modes obscure the significant dimensions. We show that the sensitivity of the STC method in the regime of strongly correlated inputs can be improved by an order of magnitude or more. This can be done by evaluating the significance of dimensions in the subspace orthogonal to the outstanding mode(s). Analyzing the responses of retinal ganglion cells probed with Gaussian noise, we find that taking into account outstanding modes is crucial for recovering relevant input dimensions for these neurons.
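The orthogonal-subspace correction can be illustrated with a minimal sketch, assuming a single outstanding covariance mode and a toy variance-driven neuron; this is not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 50_000, 20

# Strongly correlated Gaussian stimuli: one outstanding covariance mode.
u = np.ones(d) / np.sqrt(d)
C = np.eye(d) + 9.0 * np.outer(u, u)
S = rng.multivariate_normal(np.zeros(d), C, size=n)

# Toy neuron: spikes driven by variance along a hidden relevant dimension.
w = np.zeros(d); w[3] = 1.0
spikes = (S @ w) ** 2 > 2.0

# Outstanding mode of the full stimulus covariance.
evals, evecs = np.linalg.eigh(np.cov(S.T))
v = evecs[:, -1]

# Project stimuli onto the subspace orthogonal to the outstanding mode,
# then compute the spike-triggered covariance difference there.
P = np.eye(d) - np.outer(v, v)
Sp = S @ P
dC = np.cov(Sp[spikes].T) - np.cov(Sp.T)
stc_evals = np.linalg.eigvalsh(dC)
print(stc_evals[-1], stc_evals[0])  # significant dimensions now stand out
```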
Role of Updraft Velocity in Temporal Variability of Global Cloud Hydrometeor Number
NASA Technical Reports Server (NTRS)
Sullivan, Sylvia C.; Lee, Dong Min; Oreopoulos, Lazaros; Nenes, Athanasios
2016-01-01
Understanding how dynamical and aerosol inputs affect the temporal variability of hydrometeor formation in climate models will help to explain sources of model diversity in cloud forcing, to provide robust comparisons with data, and, ultimately, to reduce the uncertainty in estimates of the aerosol indirect effect. This variability attribution can be done at various spatial and temporal resolutions with metrics derived from online adjoint sensitivities of droplet and crystal number to relevant inputs. Such metrics are defined and calculated from simulations using the NASA Goddard Earth Observing System Model, Version 5 (GEOS-5) and the National Center for Atmospheric Research Community Atmosphere Model Version 5.1 (CAM5.1). Input updraft velocity fluctuations can explain as much as 48% of temporal variability in output ice crystal number and 61% in droplet number in GEOS-5 and up to 89% of temporal variability in output ice crystal number in CAM5.1. In both models, this vertical velocity attribution depends strongly on altitude. Despite its importance for hydrometeor formation, simulated vertical velocity distributions are rarely evaluated against observations due to the sparsity of relevant data. Coordinated effort by the atmospheric community to develop more consistent, observationally based updraft treatments will help to close this knowledge gap.
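The attribution idea can be sketched to first order: the share of output temporal variance explained by an input is approximated from its sensitivity and its fluctuation variance. The synthetic series and the power-law diagnostic below are assumptions for illustration; in GEOS-5/CAM5.1 the sensitivities come from the online adjoint.

```python
import numpy as np

rng = np.random.default_rng(2)
T = 1000

# Synthetic time series standing in for model inputs at one grid cell.
updraft = rng.lognormal(mean=0.0, sigma=0.4, size=T)   # vertical velocity
aerosol = rng.lognormal(mean=4.0, sigma=0.2, size=T)   # aerosol number

# Stand-in droplet-number diagnostic and its adjoint-style sensitivities.
N = aerosol ** 0.5 * updraft ** 0.8
dN_dw = 0.8 * N / updraft
dN_da = 0.5 * N / aerosol

# First-order attribution: share of output temporal variance explained
# by fluctuations of each input about its mean.
def share(dN_dx, x):
    return (dN_dx.mean() ** 2) * x.var() / N.var()

print(f"updraft share: {share(dN_dw, updraft):.2f}")
print(f"aerosol share: {share(dN_da, aerosol):.2f}")
```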
Statistics of optimal information flow in ensembles of regulatory motifs
NASA Astrophysics Data System (ADS)
Crisanti, Andrea; De Martino, Andrea; Fiorentino, Jonathan
2018-02-01
Genetic regulatory circuits universally cope with different sources of noise that limit their ability to coordinate input and output signals. In many cases, optimal regulatory performance can be thought to correspond to configurations of variables and parameters that maximize the mutual information between inputs and outputs. Since the mid-2000s, such optima have been well characterized in several biologically relevant cases. Here we use methods of statistical field theory to calculate the statistics of the maximal mutual information (the "capacity") achievable by tuning the input variable only in an ensemble of regulatory motifs, such that a single controller regulates N targets. Assuming (i) sufficiently large N , (ii) quenched random kinetic parameters, and (iii) small noise affecting the input-output channels, we can accurately reproduce numerical simulations both for the mean capacity and for the whole distribution. Our results provide insight into the inherent variability in effectiveness occurring in regulatory systems with heterogeneous kinetic parameters.
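For orientation, the quantity characterized here is the standard channel capacity, maximized over the input distribution only; the small-noise expression is the textbook Gaussian-channel approximation that assumption (iii) justifies (notation assumed: input c, output g).

```latex
% Capacity: mutual information maximized over the input distribution only
C = \max_{p(c)} I(c;g), \qquad
I(c;g) = \int \mathrm{d}c\,\mathrm{d}g\; p(c)\,p(g\mid c)\,
         \log_2\!\frac{p(g\mid c)}{p(g)}

% Small-noise (Gaussian channel) approximation, with mean response
% \bar g(c) and conditional noise \sigma_{g\mid c}(c):
C \approx \log_2 \int \mathrm{d}c\;
    \frac{1}{\sqrt{2\pi e}}\,
    \frac{\left|\partial \bar g(c)/\partial c\right|}{\sigma_{g\mid c}(c)}
```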
Kantún-Manzano, C A; Herrera-Silveira, J A; Arcega-Cabrera, F
2018-01-01
The influence of coastal submarine groundwater discharges (SGD) on the distribution and abundance of seagrass meadows was investigated. In 2012, hydrological variability, nutrient variability in sediments and the biotic characteristics of two seagrass beds, one with SGD present and one without, were studied. Findings showed that SGD inputs were related to one dominant seagrass species. To further understand this, a generalized additive model (GAM) was used to explore the relationship between seagrass biomass and environmental conditions (water and sediment variables). Salinity range (21-35.5 PSU) was the most influential variable (85%), explaining why H. wrightii was the sole plant species present at the SGD site. At the site without SGD, GAM could not be performed since environmental variables could not explain a total variance of > 60%. This research shows the relevance of monitoring SGD inputs in coastal karstic areas since they significantly affect biotic characteristics of seagrass beds.
Gupta, Himanshu; Schiros, Chun G; Sharifov, Oleg F; Jain, Apurva; Denney, Thomas S
2016-08-31
The recently released American College of Cardiology/American Heart Association (ACC/AHA) guideline recommends the Pooled Cohort equations for evaluating atherosclerotic cardiovascular risk of individuals. The impact of the clinical input variable uncertainties on the estimates of ten-year cardiovascular risk based on ACC/AHA guidelines is not known. Using the publicly available National Health and Nutrition Examination Survey dataset (2005-2010), we computed maximum and minimum ten-year cardiovascular risks by assuming clinically relevant variations/uncertainties in input age (0-1 year) and ±10 % variation in total cholesterol, high-density lipoprotein cholesterol, and systolic blood pressure, and by assuming uniform distribution of the variance of each variable. We analyzed the changes in risk category compared to the actual inputs at 5 % and 7.5 % risk limits, as these limits define the thresholds for consideration of drug therapy in the new guidelines. The new Pooled Cohort equations for risk estimation were implemented in a custom software package. Based on our input variances, changes in risk category were possible in up to 24 % of the population cohort at both 5 % and 7.5 % risk boundary limits. This trend was consistently noted across all subgroups except in African American males, where most of the cohort had ≥7.5 % baseline risk regardless of the variation in the variables. The uncertainties in the input variables can alter the risk categorization. The impact of these variances on the ten-year risk needs to be incorporated into the patient/clinician discussion and clinical decision making. Incorporating good clinical practices for the measurement of critical clinical variables and robust standardization of laboratory parameters to more stringent reference standards is extremely important for successful implementation of the new guidelines. Furthermore, the ability to customize the risk calculator inputs to better represent unique clinical circumstances specific to individual needs would be highly desirable in future versions of the risk calculator.
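A sketch of the bounding procedure, assuming corner enumeration of the stated uncertainty ranges; the `ten_year_risk` function below is a logistic placeholder, not the published Pooled Cohort equations, whose coefficients would be used in practice.

```python
import itertools
import numpy as np

def ten_year_risk(age, tc, hdl, sbp):
    """Placeholder risk function; NOT the actual Pooled Cohort equations."""
    z = -12.0 + 0.08 * age + 0.01 * tc - 0.03 * hdl + 0.015 * sbp
    return 1.0 / (1.0 + np.exp(-z))

def risk_bounds(age, tc, hdl, sbp):
    """Max/min risk over corner combinations of the stated uncertainties:
    age +0..1 year, +/-10% on TC, HDL and SBP."""
    risks = [ten_year_risk(age + da, tc * ftc, hdl * fh, sbp * fs)
             for da, ftc, fh, fs in itertools.product(
                 (0.0, 1.0), (0.9, 1.1), (0.9, 1.1), (0.9, 1.1))]
    return min(risks), max(risks)

lo, hi = risk_bounds(age=55, tc=210, hdl=45, sbp=130)
# A category change is possible if the bounds straddle a treatment threshold.
for thr in (0.05, 0.075):
    print(thr, lo < thr <= hi)
```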
He, Yan-Lin; Xu, Yuan; Geng, Zhi-Qiang; Zhu, Qun-Xiong
2016-03-01
In this paper, a hybrid robust model based on an improved functional link neural network integrating with partial least square (IFLNN-PLS) is proposed. Firstly, an improved functional link neural network with small norm of expanded weights and high input-output correlation (SNEWHIOC-FLNN) was proposed for enhancing the generalization performance of FLNN. Unlike the traditional FLNN, the expanded variables of the original inputs are not directly used as the inputs in the proposed SNEWHIOC-FLNN model. The original inputs are attached to some small norm of expanded weights. As a result, the correlation coefficient between some of the expanded variables and the outputs is enhanced. The larger the correlation coefficient is, the more relevant the expanded variables tend to be. In the end, the expanded variables with larger correlation coefficient are selected as the inputs to improve the performance of the traditional FLNN. In order to test the proposed SNEWHIOC-FLNN model, three UCI (University of California, Irvine) regression datasets named Housing, Concrete Compressive Strength (CCS), and Yacht Hydro Dynamics (YHD) are selected. Then a hybrid model based on the improved FLNN integrating with partial least square (IFLNN-PLS) was built. In the IFLNN-PLS model, the connection weights are calculated using the partial least square method rather than the error back propagation algorithm. Lastly, IFLNN-PLS was developed as an intelligent measurement model for accurately predicting the key variables in the Purified Terephthalic Acid (PTA) process and the High Density Polyethylene (HDPE) process. Simulation results illustrated that the IFLNN-PLS could significantly improve the prediction performance.
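Only the correlation-based selection of expanded variables is sketched below; the trigonometric functional-link expansion, the data, and the retention threshold are illustrative assumptions, and the small-weight attachment of the full SNEWHIOC scheme is omitted.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 300
X = rng.uniform(-1, 1, size=(n, 4))
y = np.sin(np.pi * X[:, 0]) + 0.3 * X[:, 1] + 0.1 * rng.normal(size=n)

# Trigonometric functional-link expansion of the original inputs.
expanded = np.hstack([X] + [f(k * np.pi * X)
                            for k in (1, 2) for f in (np.sin, np.cos)])

# Rank expanded variables by correlation with the output and keep the
# most relevant ones as the network inputs.
corr = np.abs([np.corrcoef(expanded[:, j], y)[0, 1]
               for j in range(expanded.shape[1])])
keep = corr > 0.3
print(f"{expanded.shape[1]} expanded variables -> {keep.sum()} selected")
```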
Tigges, P; Kathmann, N; Engel, R R
1997-07-01
Though artificial neural networks (ANN) are excellent tools for pattern recognition problems when the signal-to-noise ratio is low, the identification of decision-relevant features for ANN input data is still a crucial issue. The experience of the ANN designer and the existing knowledge and understanding of the problem seem to be the only guides for a specific construction. In the present study a backpropagation ANN based on modified raw data inputs showed encouraging results. Investigating the specific influences of prototypical input patterns on a specially designed ANN led to a new sparse and efficient input data presentation. This data coding, obtained by a semiautomatic procedure combining existing expert knowledge and the internal representation structures of the raw-data-based ANN, yielded a list of feature vectors, each representing the relevant information for saccade identification. The feature-based ANN reduced the error rate by nearly 40% compared with the raw data ANN. An overall correct classification of 92% on previously unseen data was achieved. The proposed method of extracting internal ANN knowledge for the production of a better input data representation is not restricted to EOG recordings, and could be used in various fields of signal analysis.
Electrical Advantages of Dendritic Spines
Gulledge, Allan T.; Carnevale, Nicholas T.; Stuart, Greg J.
2012-01-01
Many neurons receive excitatory glutamatergic input almost exclusively onto dendritic spines. In the absence of spines, the amplitudes and kinetics of excitatory postsynaptic potentials (EPSPs) at the site of synaptic input are highly variable and depend on dendritic location. We hypothesized that dendritic spines standardize the local geometry at the site of synaptic input, thereby reducing location-dependent variability of local EPSP properties. We tested this hypothesis using computational models of simplified and morphologically realistic spiny neurons that allow direct comparison of EPSPs generated on spine heads with EPSPs generated on dendritic shafts at the same dendritic locations. In all morphologies tested, spines greatly reduced location-dependent variability of local EPSP amplitude and kinetics, while having minimal impact on EPSPs measured at the soma. Spine-dependent standardization of local EPSP properties persisted across a range of physiologically relevant spine neck resistances, and in models with variable neck resistances. By reducing the variability of local EPSPs, spines standardized synaptic activation of NMDA receptors and voltage-gated calcium channels. Furthermore, spines enhanced activation of NMDA receptors and facilitated the generation of NMDA spikes and axonal action potentials in response to synaptic input. Finally, we show that dynamic regulation of spine neck geometry can preserve local EPSP properties following plasticity-driven changes in synaptic strength, but is inefficient in modifying the amplitude of EPSPs in other cellular compartments. These observations suggest that one function of dendritic spines is to standardize local EPSP properties throughout the dendritic tree, thereby allowing neurons to use similar voltage-sensitive postsynaptic mechanisms at all dendritic locations.
Waldbusser, George G; Salisbury, Joseph E
2014-01-01
Multiple natural and anthropogenic processes alter the carbonate chemistry of the coastal zone in ways that either exacerbate or mitigate ocean acidification effects. Freshwater inputs and multiple acid-base reactions change carbonate chemistry conditions, sometimes synergistically. The shallow nature of these systems results in strong benthic-pelagic coupling, and marine invertebrates at different life history stages rely on both benthic and pelagic habitats. Carbonate chemistry in coastal systems can be highly variable, responding to processes with temporal modes ranging from seconds to centuries. Identifying scales of variability relevant to levels of biological organization requires a fuller characterization of both the frequency and magnitude domains of processes contributing to or reducing acidification in pelagic and benthic habitats. We review the processes that contribute to coastal acidification with attention to timescales of variability and habitats relevant to marine bivalves.
Computing the structural influence matrix for biological systems.
Giordano, Giulia; Cuba Samaniego, Christian; Franco, Elisa; Blanchini, Franco
2016-06-01
We consider the problem of identifying structural influences of external inputs on steady-state outputs in a biological network model. We speak of a structural influence if, upon a perturbation due to a constant input, the ensuing variation of the steady-state output value has the same sign as the input (positive influence), the opposite sign (negative influence), or is zero (perfect adaptation), for any feasible choice of the model parameters. All these signs and zeros can constitute a structural influence matrix, whose (i, j) entry indicates the sign of steady-state influence of the jth system variable on the ith variable (the output caused by an external persistent input applied to the jth variable). Each entry is structurally determinate if the sign does not depend on the choice of the parameters, but is indeterminate otherwise. In principle, determining the influence matrix requires exhaustive testing of the system steady-state behaviour in the widest range of parameter values. Here we show that, in a broad class of biological networks, the influence matrix can be evaluated with an algorithm that tests the system steady-state behaviour only at a finite number of points. This algorithm also allows us to assess the structural effect of any perturbation, such as variations of relevant parameters. Our method is applied to nontrivial models of biochemical reaction networks and population dynamics drawn from the literature, providing a parameter-free insight into the system dynamics.
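The exhaustive sampling approach that the paper's finite-test algorithm improves upon can be sketched for a linear toy network; the 3-species model, parameter ranges, and trial count below are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)

def influence_matrix(build_A, trials=1000):
    """Brute-force structural influence matrix for x' = A(p)x + b: the
    steady state is x* = -A^{-1} b, so entry (i, j) of -A^{-1} is the
    influence of a persistent input at variable j on variable i. An entry
    is structurally determinate if its sign agrees across all sampled
    parameter sets; otherwise it is marked NaN (indeterminate)."""
    mats = np.array([np.sign(np.round(-np.linalg.inv(build_A(rng)), 12))
                     for _ in range(trials)])
    consistent = (mats == mats[0]).all(axis=0)
    return np.where(consistent, mats[0], np.nan)

# Toy 3-species chain (x1 inhibits x2, x2 activates x3), random rates.
def build_A(rng):
    k = rng.uniform(0.5, 2.0, size=5)
    return np.array([[-k[0],  0.0,   0.0],
                     [-k[1], -k[2],  0.0],
                     [  0.0,  k[3], -k[4]]])

print(influence_matrix(build_A))
```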
Properties of Zero-Free Transfer Function Matrices
NASA Astrophysics Data System (ADS)
Anderson, Brian D. O.; Deistler, Manfred
Transfer functions of linear, time-invariant finite-dimensional systems with more outputs than inputs, as arise in factor analysis (for example in econometrics), have, for state-variable descriptions with generic entries in the relevant matrices, no finite zeros. This paper gives a number of characterizations of such systems (and indeed square discrete-time systems with no zeros), using state-variable, impulse response, and matrix-fraction descriptions. Key properties include the ability to recover the input values at any time from a bounded interval of output values, without any knowledge of an initial state, and an ability to verify the no-zero property in terms of a property of the impulse response coefficient matrices. Results are particularized to cases where the transfer function matrix in question may or may not have a zero at infinity or a zero at zero.
NASA Astrophysics Data System (ADS)
Milovančević, Miloš; Nikolić, Vlastimir; Anđelković, Boban
2017-01-01
Vibration-based structural health monitoring is widely recognized as an attractive strategy for early damage detection in civil structures. Vibration monitoring and prediction are important for any system, since they can prevent many unpredictable behaviors of the system. If vibration monitoring is properly managed, it can ensure economic and safe operations. Potential for further improvement of vibration monitoring lies in the improvement of current control strategies. One of the options is the introduction of model predictive control. Multistep-ahead predictive models of vibration are a starting point for creating a successful model predictive strategy. For the purpose of this article, predictive models are created for vibration monitoring of planetary power transmissions in pellet mills. The models were developed using a novel method based on ANFIS (adaptive neuro fuzzy inference system). The aim of this study is to investigate the potential of ANFIS for selecting the most relevant variables for predictive models of vibration monitoring of pellet mill power transmission. The vibration data are collected by PIC (Programmable Interface Controller) microcontrollers. The goal of the predictive vibration monitoring of planetary power transmissions in pellet mills is to indicate deterioration in the vibration of the power transmissions before the actual failure occurs. The ANFIS process for variable selection was implemented in order to detect the predominant variables affecting the prediction of vibration monitoring. It was also used to select the minimal input subset of variables from the initial set of input variables - current and lagged variables (up to 11 steps) of vibration. The obtained results could be used for simplification of predictive methods so as to avoid multiple input variables. Models with fewer inputs are preferable because they reduce overfitting between training and testing data. While the obtained results are promising, further work is required in order to get results that could be directly applied in practice.
The method of belief scales as a means for dealing with uncertainty in tough regulatory decisions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pilch, Martin M.
Modeling and simulation is playing an increasing role in supporting tough regulatory decisions, which are typically characterized by variabilities and uncertainties in the scenarios, input conditions, failure criteria, model parameters, and even model form. Variability exists when there is a statistically significant database that is fully relevant to the application. Uncertainty, on the other hand, is characterized by some degree of ignorance. A simple algebraic problem was used to illustrate how various risk methodologies address variability and uncertainty in a regulatory context. These traditional risk methodologies include probabilistic methods (including frequentist and Bayesian perspectives) and second-order methods where variabilities and uncertainties are treated separately. Representing uncertainties with (subjective) probability distributions and using probabilistic methods to propagate subjective distributions can lead to results that are not logically consistent with available knowledge and that may not be conservative. The Method of Belief Scales (MBS) is developed as a means to logically aggregate uncertain input information and to propagate that information through the model to a set of results that are scrutable, easily interpretable by the nonexpert, and logically consistent with the available input information. The MBS, particularly in conjunction with sensitivity analyses, has the potential to be more computationally efficient than other risk methodologies. The regulatory language must be tailored to the specific risk methodology if ambiguity and conflict are to be avoided.
Using Natural Language to Enhance Mission Effectiveness
NASA Technical Reports Server (NTRS)
Trujillo, Anna C.; Meszaros, Erica
2016-01-01
The availability of highly capable, yet relatively cheap, unmanned aerial vehicles (UAVs) is opening up new areas of use for hobbyists and for professional-related activities. The driving function of this research is allowing a non-UAV pilot, an operator, to define and manage a mission. This paper describes the preliminary usability measures of an interface that allows an operator to define the mission using speech to make inputs. An experiment was conducted to begin to enumerate the efficacy and user acceptance of using voice commands to define a multi-UAV mission and to provide high-level vehicle control commands such as "takeoff." The primary independent variable was input type - voice or mouse. The primary dependent variables consisted of the correctness of the mission parameter inputs and the time needed to make all inputs. Other dependent variables included NASA-TLX workload ratings and subjective ratings on a final questionnaire. The experiment required each subject to fill in an online form that contained comparable required information that would be needed for a package dispatcher to deliver packages. For each run, subjects typed in a simple numeric code for the package code. They then defined the initial starting position, the delivery location, and the return location using either pull-down menus or voice input. Voice input was accomplished using CMU Sphinx4-5prealpha for speech recognition. They then inputted the length of the package. These were the option fields. The subject had the system "Calculate Trajectory" and then "Takeoff" once the trajectory was calculated. Later, the subject used "Land" to finish the run. After the voice and mouse input blocked runs, subjects completed a NASA-TLX. At the conclusion of all runs, subjects completed a questionnaire asking them about their experience in inputting the mission parameters, and starting and stopping the mission using mouse and voice input. In general, the usability of voice commands is acceptable. With a relatively well-defined and simple vocabulary, the operator can input the vast majority of the mission parameters using simple, intuitive voice commands. However, voice input may be more applicable to initial mission specification rather than for critical commands such as the need to land immediately due to time and feedback constraints. It would also be convenient to retrieve relevant mission information using voice input. Therefore, further on-going research is looking at using intent from operator utterances to provide the relevant mission information to the operator. The information displayed will be inferred from the operator's utterances just before key phrases are spoken. Linguistic analysis of the context of verbal communication provides insight into the intended meaning of commonly heard phrases such as "What's it doing now?" Analyzing the semantic sphere surrounding these common phrases enables us to predict the operator's intent and supply the operator's desired information to the interface. This paper also describes preliminary investigations into the generation of the semantic space of UAV operation and the success at providing information to the interface based on the operator's utterances.
Nassios, Jason; Giesecke, James A
2018-04-01
Economic consequence analysis is one of many inputs to terrorism contingency planning. Computable general equilibrium (CGE) models are being used more frequently in these analyses, in part because of their capacity to accommodate high levels of event-specific detail. In modeling the potential economic effects of a hypothetical terrorist event, two broad sets of shocks are required: (1) physical impacts on observable variables (e.g., asset damage); (2) behavioral impacts on unobservable variables (e.g., investor uncertainty). Assembling shocks describing the physical impacts of a terrorist incident is relatively straightforward, since estimates are either readily available or plausibly inferred. However, assembling shocks describing behavioral impacts is more difficult. Values for behavioral variables (e.g., required rates of return) are typically inferred or estimated by indirect means. Generally, this has been achieved via reference to extraneous literature or ex ante surveys. This article explores a new method. We elucidate the magnitude of CGE-relevant structural shifts implicit in econometric evidence on terrorist incidents, with a view to informing future ex ante event assessments. Ex post econometric studies of terrorism by Blomberg et al. yield macroeconometric equations that describe the response of observable economic variables (e.g., GDP growth) to terrorist incidents. We use these equations to determine estimates for relevant (unobservable) structural and policy variables impacted by terrorist incidents, using a CGE model of the United States. This allows us to: (i) compare values for these shifts with input assumptions in earlier ex ante CGE studies; and (ii) discuss how future ex ante studies can be informed by our analysis.
Biological control of appetite: A daunting complexity.
MacLean, Paul S; Blundell, John E; Mennella, Julie A; Batterham, Rachel L
2017-03-01
This review summarizes a portion of the discussions of an NIH Workshop (Bethesda, MD, 2015) titled "Self-Regulation of Appetite-It's Complicated," which focused on the biological aspects of appetite regulation. This review summarizes the key biological inputs of appetite regulation and their implications for body weight regulation. These discussions offer an update of the long-held, rigid perspective of an "adipocentric" biological control, taking a broader view that also includes important inputs from the digestive tract, from lean mass, and from the chemical sensory systems underlying taste and smell. It is only beginning to be understood how these biological systems are integrated and how this integrated input influences appetite and food eating behaviors. The relevance of these biological inputs was discussed primarily in the context of obesity and the problem of weight regain, touching on topics related to the biological predisposition for obesity and the impact that obesity treatments (dieting, exercise, bariatric surgery, etc.) might have on appetite and weight loss maintenance. Finally considered is a common theme that pervaded the workshop discussions, which was individual variability. It is this individual variability in the predisposition for obesity and in the biological response to weight loss that makes the biological component of appetite regulation so complicated. When this individual biological variability is placed in the context of the diverse environmental and behavioral pressures that also influence food eating behaviors, it is easy to appreciate the daunting complexities that arise with the self-regulation of appetite.
NASA Astrophysics Data System (ADS)
Baroni, Gabriele; Zink, Matthias; Kumar, Rohini; Samaniego, Luis; Attinger, Sabine
2017-04-01
The advances in computer science and the availability of new detailed datasets have led to a growing number of distributed hydrological models applied to finer and finer grid resolutions for larger and larger catchment areas. It has been argued, however, that this trend does not necessarily guarantee better understanding of the hydrological processes and may even be unnecessary for specific modelling applications. In the present study, this topic is further discussed in relation to soil spatial heterogeneity and its effect on simulated hydrological states and fluxes. To this end, three methods are developed and used for the characterization of the soil heterogeneity at different spatial scales. The methods are applied to the soil map of the upper Neckar catchment (Germany) as an example. The different soil realizations are assessed regarding their impact on simulated states and fluxes using the distributed hydrological model mHM. The results are analysed by aggregating the model outputs at different spatial scales based on the Representative Elementary Scale concept (RES) proposed by Refsgaard et al. (2016). The analysis is further extended in the present study by aggregating the model output also at different temporal scales. The results show that small-scale soil variabilities are not relevant when integrated hydrological responses are considered, e.g., simulated streamflow or average soil moisture over sub-catchments. On the contrary, these small-scale soil variabilities strongly affect locally simulated states and fluxes, i.e., soil moisture and evapotranspiration simulated at the grid resolution. A clear trade-off is also detected by aggregating the model output by spatial and temporal scales. Although the scale at which the soil variabilities are (or are not) relevant is not universal, the RES concept provides a simple and effective framework to quantify the predictive capability of distributed models and to identify the need for further model improvements, e.g., finer resolution input. For this reason, the integration in this analysis of all the relevant input factors (e.g., precipitation, vegetation, geology) could provide strong support for the definition of the right scale for each specific model application. In this context, however, the main challenge for a proper model assessment will be the correct characterization of the spatio-temporal variability of each input factor. Refsgaard, J.C., Højberg, A.L., He, X., Hansen, A.L., Rasmussen, S.H., Stisen, S., 2016. Where are the limits of model predictive capabilities?: Representative Elementary Scale - RES. Hydrol. Process. doi:10.1002/hyp.11029
Soil moisture monitoring for crop management
NASA Astrophysics Data System (ADS)
Boyd, Dale
2015-07-01
The 'Risk management through soil moisture monitoring' project has demonstrated the capability of current technology to remotely monitor and communicate real-time soil moisture data. The project investigated whether capacitance probes would assist in making informed pre- and in-crop decisions. Crop potential and cropping inputs are increasingly subject to greater instability and uncertainty due to seasonal variability. In a targeted survey of those who received regular correspondence from the Department of Primary Industries it was found that i) 50% of the audience found the information generated relevant for them and less than 10% indicated it was not relevant; ii) 85% have improved their knowledge/ability to assess soil moisture compared to prior to the project, with the most used indicator of soil moisture still being rainfall records; and iii) 100% have indicated they will continue to use some form of the technology to monitor soil moisture levels in the future. It is hoped that continued access to this information will assist informed input decisions. This will minimise inputs in low decile years with a low soil moisture base and maximise yield potential in more favourable conditions based on soil moisture and positive seasonal forecasts.
Urdahl, Hege; Manca, Andrea; Sculpher, Mark J
2008-01-01
Background To support decision making many countries have now introduced some formal assessment process to evaluate whether health technologies represent good 'value for money'. These often take the form of decision models which can be used to explore elements of importance to generalisability of study results across clinical settings and jurisdictions. The objectives of the present review were to assess: (i) whether the published studies clearly defined the decision-making audience for the model; (ii) the transparency of the reporting in terms of study question, structure and data inputs; (iii) the relevance of the data inputs used in the model to the stated decision-maker or jurisdiction; and (iv) how fully the robustness of the model's results to variation in data inputs between locations was assessed. Methods Articles reporting decision-analytic models in the area of osteoporosis were assessed to establish the extent to which the information provided enabled decision makers in different countries/jurisdictions to fully appreciate the variability of results according to location, and the relevance to their own. Results Of the 18 articles included in the review, only three explicitly stated the decision-making audience. It was not possible to infer a decision-making audience in eight studies. Target population was well reported, as was resource and cost data, and clinical data used for estimates of relative risk reduction. However, baseline risk was rarely adapted to the relevant jurisdiction, and when no decision-maker was explicit it was difficult to assess whether the reported cost and resource use data was in fact relevant. A few studies used sensitivity analysis to explore elements of generalisability, such as compliance rates and baseline fracture risk rates, although such analyses were generally restricted to evaluating parameter uncertainty. Conclusion This review found that variability in cost-effectiveness across locations is addressed to a varying extent in modelling studies in the field of osteoporosis, limiting their use for decision-makers across different locations. Transparency of reporting is expected to increase as methodology develops, and decision-makers publish "reference case" type guidance.
Hippert, Henrique S; Taylor, James W
2010-04-01
Artificial neural networks have frequently been proposed for electricity load forecasting because of their capabilities for the nonlinear modelling of large multivariate data sets. Modelling with neural networks is not an easy task though; two of the main challenges are defining the appropriate level of model complexity, and choosing the input variables. This paper evaluates techniques for automatic neural network modelling within a Bayesian framework, as applied to six samples containing daily load and weather data for four different countries. We analyse input selection as carried out by the Bayesian 'automatic relevance determination', and the usefulness of the Bayesian 'evidence' for the selection of the best structure (in terms of number of neurones), as compared to methods based on cross-validation.
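Automatic relevance determination is available for linear models in scikit-learn; the sketch below (synthetic load data, illustrative variable names) shows how the learned per-weight precisions rank candidate inputs.

```python
import numpy as np
from sklearn.linear_model import ARDRegression

rng = np.random.default_rng(5)
n = 500
temp = rng.normal(15, 8, n)          # relevant weather input
lag_load = rng.normal(100, 20, n)    # relevant lagged load
noise_var = rng.normal(size=n)       # irrelevant candidate input
load = 0.8 * lag_load - 1.5 * temp + rng.normal(scale=5, size=n)

X = np.column_stack([temp, lag_load, noise_var])
ard = ARDRegression().fit(X, load)

# ARD drives the precision (lambda_) of irrelevant weights to large
# values, shrinking their coefficients towards zero.
for name, lam, w in zip(["temp", "lag_load", "noise"], ard.lambda_, ard.coef_):
    print(f"{name:9s} precision={lam:10.2f} weight={w:7.3f}")
```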
NASA Astrophysics Data System (ADS)
Wang, Lijuan; Yan, Yong; Wang, Xue; Wang, Tao
2017-03-01
Input variable selection is an essential step in the development of data-driven models for environmental, biological and industrial applications. Through input variable selection to eliminate the irrelevant or redundant variables, a suitable subset of variables is identified as the input of a model. Meanwhile, through input variable selection the complexity of the model structure is simplified and the computational efficiency is improved. This paper describes the procedures of the input variable selection for the data-driven models for the measurement of liquid mass flowrate and gas volume fraction under two-phase flow conditions using Coriolis flowmeters. Three advanced input variable selection methods, including partial mutual information (PMI), genetic algorithm-artificial neural network (GA-ANN) and tree-based iterative input selection (IIS) are applied in this study. Typical data-driven models incorporating support vector machine (SVM) are established individually based on the input candidates resulting from the selection methods. The validity of the selection outcomes is assessed through an output performance comparison of the SVM based data-driven models and sensitivity analysis. The validation and analysis results suggest that the input variables selected from the PMI algorithm provide more effective information for the models to measure liquid mass flowrate while the IIS algorithm provides a fewer but more effective variables for the models to predict gas volume fraction.
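A simplified greedy stand-in for the PMI procedure is sketched below, using scikit-learn's mutual information estimator on residuals; the published algorithm instead uses kernel estimates of partial mutual information and a formal stopping criterion.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_regression
from sklearn.linear_model import LinearRegression

def greedy_mi_selection(X, y, k):
    """Greedy input selection in the spirit of PMI: at each step pick the
    candidate carrying the most information about what remains unexplained
    (here, the residual of a linear fit on the already-selected inputs)."""
    selected, remaining = [], list(range(X.shape[1]))
    resid = y.copy()
    for _ in range(k):
        mi = mutual_info_regression(X[:, remaining], resid, random_state=0)
        best = remaining[int(np.argmax(mi))]
        selected.append(best)
        remaining.remove(best)
        fit = LinearRegression().fit(X[:, selected], y)
        resid = y - fit.predict(X[:, selected])
    return selected

rng = np.random.default_rng(6)
X = rng.normal(size=(400, 8))
y = X[:, 2] ** 2 + 0.5 * X[:, 5] + 0.1 * rng.normal(size=400)
print(greedy_mi_selection(X, y, k=2))  # expect inputs 2 and 5
```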
Do humans make good decisions?
Summerfield, Christopher; Tsetsos, Konstantinos
2014-01-01
Human performance on perceptual classification tasks approaches that of an ideal observer, but economic decisions are often inconsistent and intransitive, with preferences reversing according to the local context. We discuss the view that suboptimal choices may result from the efficient coding of decision-relevant information, a strategy that allows expected inputs to be processed with higher gain than unexpected inputs. Efficient coding leads to 'robust' decisions that depart from optimality but maximise the information transmitted by a limited-capacity system in a rapidly-changing world. We review recent work showing that when perceptual environments are variable or volatile, perceptual decisions exhibit the same suboptimal context-dependence as economic choices, and propose a general computational framework that accounts for findings across the two domains.
Tainio, Marko; Tuomisto, Jouni T; Hänninen, Otto; Ruuskanen, Juhani; Jantunen, Matti J; Pekkanen, Juha
2007-01-01
Background The estimation of health impacts often involves uncertain input variables and assumptions which have to be incorporated into the model structure. These uncertainties may have significant effects on the results obtained with the model, and, thus, on decision making. Fine particles (PM2.5) are believed to cause major health impacts, and, consequently, uncertainties in their health impact assessment have clear relevance to policy-making. We studied the effects of various uncertain input variables by building a life-table model for fine particles. Methods Life-expectancy of the Helsinki metropolitan area population and the change in life-expectancy due to fine particle exposures were predicted using a life-table model. A number of parameter and model uncertainties were estimated. Sensitivity analysis for input variables was performed by calculating rank-order correlations between input and output variables. The studied model uncertainties were (i) plausibility of mortality outcomes and (ii) lag, and parameter uncertainties (iii) exposure-response coefficients for different mortality outcomes, and (iv) exposure estimates for different age groups. The monetary value of the years-of-life-lost and the relative importance of the uncertainties related to monetary valuation were predicted to compare the relative importance of the monetary valuation on the health effect uncertainties. Results The magnitude of the health effects costs depended mostly on discount rate, exposure-response coefficient, and plausibility of the cardiopulmonary mortality. Other mortality outcomes (lung cancer, other non-accidental and infant mortality) and lag had only minor impact on the output. The results highlight the importance of the uncertainties associated with cardiopulmonary mortality in the fine particle impact assessment when compared with other uncertainties. Conclusion When estimating life-expectancy, the estimates used for cardiopulmonary exposure-response coefficient, discount rate, and plausibility require careful assessment, while complicated lag estimates can be omitted without this having any major effect on the results.
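The rank-order sensitivity analysis can be sketched with scipy; the toy life-table output and the input distributions below are illustrative placeholders, not the study's model.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(7)
n = 10_000

# Uncertain inputs of a toy life-table calculation (illustrative ranges).
er_coef = rng.normal(0.0009, 0.0003, n)    # exposure-response per ug/m3
exposure = rng.normal(8.0, 2.0, n)         # PM2.5 exposure, ug/m3
discount = rng.uniform(0.0, 0.05, n)       # discount rate
plausible = rng.uniform(0.5, 1.0, n)       # plausibility weight

# Stand-in output: discounted, plausibility-weighted life-expectancy loss.
loss = plausible * er_coef * exposure / (1.0 + 30.0 * discount)

for name, x in [("ER coef", er_coef), ("exposure", exposure),
                ("discount", discount), ("plausibility", plausible)]:
    rho, _ = spearmanr(x, loss)
    print(f"{name:12s} rank correlation = {rho:+.2f}")
```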
When Can Information from Ordinal Scale Variables Be Integrated?
ERIC Educational Resources Information Center
Kemp, Simon; Grace, Randolph C.
2010-01-01
Many theoretical constructs of interest to psychologists are multidimensional and derive from the integration of several input variables. We show that input variables that are measured on ordinal scales cannot be combined to produce a stable weakly ordered output variable that allows trading off the input variables. Instead a partial order is…
Marmarelis, Vasilis Z.; Zanos, Theodoros P.; Berger, Theodore W.
2010-01-01
This paper presents a new modeling approach for neural systems with point-process (spike) inputs and outputs that utilizes Boolean operators (i.e. modulo 2 multiplication and addition that correspond to the logical AND and OR operations respectively, as well as the AND_NOT logical operation representing inhibitory effects). The form of the employed mathematical models is akin to a "Boolean-Volterra" model that contains the product terms of all relevant input lags in a hierarchical order, where terms of order higher than first represent nonlinear interactions among the various lagged values of each input point-process or among lagged values of various inputs (if multiple inputs exist) as they reflect on the output. The coefficients of this Boolean-Volterra model are also binary variables that indicate the presence or absence of the respective term in each specific model/system. Simulations are used to explore the properties of such models and the feasibility of their accurate estimation from short data-records in the presence of noise (i.e. spurious spikes). The results demonstrate the feasibility of obtaining reliable estimates of such models, with excitatory and inhibitory terms, in the presence of considerable noise (spurious spikes) in the outputs and/or the inputs in a computationally efficient manner. A pilot application of this approach to an actual neural system is presented in the companion paper (Part II).
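A minimal sketch of such a model follows, with hypothetical excitatory and inhibitory terms over three input lags; the term structure is assumed for illustration, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(8)
T = 2000
x = rng.random(T) < 0.3          # input spike train (point process)

def lag(s, k):
    """Shift a spike train by k bins (zero-padded at the start)."""
    out = np.zeros_like(s)
    out[k:] = s[:-k]
    return out

# Boolean-Volterra-style output: fires when an excitatory product of
# lagged inputs is present AND NOT an inhibitory one. The binary
# "coefficients" are simply which terms are included below.
excite = lag(x, 1) | (lag(x, 2) & lag(x, 3))   # OR of AND terms
inhibit = lag(x, 1) & lag(x, 2)                # AND_NOT (inhibitory) term
y = excite & ~inhibit

# Estimating the binary coefficients then reduces to checking which
# products of lags co-occur with output spikes above chance.
print(y.mean(), (y & lag(x, 1)).mean())
```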
Van Looy, Stijn; Verplancke, Thierry; Benoit, Dominique; Hoste, Eric; Van Maele, Georges; De Turck, Filip; Decruyenaere, Johan
2007-01-01
Tacrolimus is an important immunosuppressive drug for organ transplantation patients. It has a narrow therapeutic range, toxic side effects, and a blood concentration with wide intra- and interindividual variability. Hence, it is of the utmost importance to monitor tacrolimus blood concentration, thereby ensuring clinical effect and avoiding toxic side effects. Prediction models for tacrolimus blood concentration can improve clinical care by optimizing monitoring of these concentrations, especially in the initial phase after transplantation during intensive care unit (ICU) stay. This is the first study in the ICU in which support vector machines, as a new data modeling technique, are investigated and tested in their prediction capabilities of tacrolimus blood concentration. Linear support vector regression (SVR) and nonlinear radial basis function (RBF) SVR are compared with multiple linear regression (MLR). Tacrolimus blood concentrations, together with 35 other relevant variables from 50 liver transplantation patients, were extracted from our ICU database. This resulted in a dataset of 457 blood samples, on average between 9 and 10 samples per patient, finally resulting in a database of more than 16,000 data values. Nonlinear RBF SVR, linear SVR, and MLR were performed after selection of clinically relevant input variables and model parameters. Differences between observed and predicted tacrolimus blood concentrations were calculated. Prediction accuracy of the three methods was compared after fivefold cross-validation (Friedman test and Wilcoxon signed rank analysis). Linear SVR and nonlinear RBF SVR had mean absolute differences between observed and predicted tacrolimus blood concentrations of 2.31 ng/ml (standard deviation [SD] 2.47) and 2.38 ng/ml (SD 2.49), respectively. MLR had a mean absolute difference of 2.73 ng/ml (SD 3.79). The difference between linear SVR and MLR was statistically significant (p < 0.001). RBF SVR had the advantage of requiring only 2 input variables to perform this prediction in comparison to 15 and 16 variables needed by linear SVR and MLR, respectively. This is an indication of the superior prediction capability of nonlinear SVR. Prediction of tacrolimus blood concentration with linear and nonlinear SVR was excellent, and accuracy was superior in comparison with an MLR model.
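The three-way comparison can be reproduced schematically with scikit-learn; the synthetic data below only mimics the study's sample size (457 samples), and the variable content is assumed.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(9)
n, p = 457, 15                       # mimics the dataset size in the study
X = rng.normal(size=(n, p))
y = 8 + 2 * X[:, 0] + np.abs(X[:, 1]) + rng.normal(scale=1.5, size=n)

models = {
    "MLR": LinearRegression(),
    "linear SVR": make_pipeline(StandardScaler(), SVR(kernel="linear")),
    "RBF SVR": make_pipeline(StandardScaler(), SVR(kernel="rbf")),
}
# Fivefold cross-validated mean absolute error, as in the study's setup.
for name, model in models.items():
    mae = -cross_val_score(model, X, y, cv=5,
                           scoring="neg_mean_absolute_error").mean()
    print(f"{name:10s} mean absolute error = {mae:.2f}")
```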
JETSPIN: A specific-purpose open-source software for simulations of nanofiber electrospinning
NASA Astrophysics Data System (ADS)
Lauricella, Marco; Pontrelli, Giuseppe; Coluzza, Ivan; Pisignano, Dario; Succi, Sauro
2015-12-01
We present the open-source computer program JETSPIN, specifically designed to simulate the electrospinning process of nanofibers. Its capabilities are shown with proper reference to the underlying model, as well as a description of the relevant input variables and associated test-case simulations. The various interactions included in the electrospinning model implemented in JETSPIN are discussed in detail. The code is designed to exploit different computational architectures, from single to parallel processor workstations. This paper provides an overview of JETSPIN, focusing primarily on its structure, parallel implementations, functionality, performance, and availability.
NASA Astrophysics Data System (ADS)
Shoaib, Syed Abu; Marshall, Lucy; Sharma, Ashish
2018-06-01
Every model to characterise a real world process is affected by uncertainty. Selecting a suitable model is a vital aspect of engineering planning and design. Observation or input errors make the prediction of modelled responses more uncertain. By way of a recently developed attribution metric, this study is aimed at developing a method for analysing variability in model inputs together with model structure variability to quantify their relative contributions in typical hydrological modelling applications. The Quantile Flow Deviation (QFD) metric is used to assess these alternate sources of uncertainty. The Australian Water Availability Project (AWAP) precipitation data for four different Australian catchments is used to analyse the impact of spatial rainfall variability on simulated streamflow variability via the QFD. The QFD metric attributes the variability in flow ensembles to uncertainty associated with the selection of a model structure and input time series. For the case study catchments, the relative contribution of input uncertainty due to rainfall is higher than that due to potential evapotranspiration, and overall input uncertainty is significant compared to model structure and parameter uncertainty. Overall, this study investigates the propagation of input uncertainty in a daily streamflow modelling scenario and demonstrates how input errors manifest across different streamflow magnitudes.
NASA Astrophysics Data System (ADS)
Whitehead, James Joshua
The analysis documented herein provides an integrated approach for the conduct of optimization under uncertainty (OUU) using Monte Carlo Simulation (MCS) techniques coupled with response surface-based methods for characterization of mixture-dependent variables. This novel methodology provides an innovative means of conducting optimization studies under uncertainty in propulsion system design. Analytic inputs are based upon empirical regression rate information obtained from design of experiments (DOE) mixture studies utilizing a mixed oxidizer hybrid rocket concept. Hybrid fuel regression rate was selected as the target response variable for optimization under uncertainty, with maximization of regression rate chosen as the driving objective. Characteristic operational conditions and propellant mixture compositions from experimental efforts conducted during previous foundational work were combined with elemental uncertainty estimates as input variables. Response surfaces for mixture-dependent variables and their associated uncertainty levels were developed using quadratic response equations incorporating single and two-factor interactions. These analysis inputs, response surface equations and associated uncertainty contributions were applied to a probabilistic MCS to develop dispersed regression rates as a function of operational and mixture input conditions within design space. Illustrative case scenarios were developed and assessed using this analytic approach including fully and partially constrained operational condition sets over all of design mixture space. In addition, optimization sets were performed across an operationally representative region in operational space and across all investigated mixture combinations. These scenarios were selected as representative examples relevant to propulsion system optimization, particularly for hybrid and solid rocket platforms. Ternary diagrams, including contour and surface plots, were developed and utilized to aid in visualization. The concept of Expanded-Durov diagrams was also adopted and adapted to this study to aid in visualization of uncertainty bounds. Regions of maximum regression rate and associated uncertainties were determined for each set of case scenarios. Application of response surface methodology coupled with probabilistic-based MCS allowed for flexible and comprehensive interrogation of mixture and operating design space during optimization cases. Analyses were also conducted to assess sensitivity of uncertainty to variations in key elemental uncertainty estimates. The methodology developed during this research provides an innovative optimization tool for future propulsion design efforts.
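A minimal sketch of the workflow described above: a quadratic response surface with single and two-factor interaction terms is propagated through a Monte Carlo simulation under Gaussian input uncertainty. All coefficients, nominal values, and uncertainty levels are placeholders, not the study's fitted values.

```python
import numpy as np

rng = np.random.default_rng(2)

def regression_rate(x):
    """Illustrative quadratic response surface with single and two-factor
    interaction terms; coefficients are placeholders, not the study's fit."""
    b0, b = 1.2, np.array([0.30, -0.15, 0.20])
    B = np.array([[-0.050, 0.020, 0.000],
                  [ 0.020, -0.030, 0.010],
                  [ 0.000,  0.010, -0.040]])
    return b0 + x @ b + np.einsum("ni,ij,nj->n", x, B, x)

# Nominal operational/mixture inputs and elemental uncertainty estimates (assumed)
nominal = np.array([1.0, 0.5, 0.8])
sigma = np.array([0.05, 0.02, 0.04])

samples = nominal + sigma * rng.standard_normal((100_000, 3))
r = regression_rate(samples)
print(f"mean = {r.mean():.4f}, std = {r.std():.4f}, "
      f"95% interval = [{np.percentile(r, 2.5):.4f}, {np.percentile(r, 97.5):.4f}]")
```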
Solis-Paredes, Mario; Estrada-Gutierrez, Guadalupe; Perichart-Perera, Otilia; Montoya-Estrada, Araceli; Guzmán-Huerta, Mario; Borboa-Olivares, Héctor; Bravo-Flores, Eyerahi; Cardona-Pérez, Arturo; Zaga-Clavellina, Veronica; Garcia-Latorre, Ethel; Gonzalez-Perez, Gabriela; Hernández-Pérez, José Alfredo; Irles, Claudine
2017-12-28
Maternal obesity has been related to adverse neonatal outcomes and fetal programming. Oxidative stress and adipokines are potential biomarkers in such pregnancies; thus, the measurement of these molecules has been considered critical. Therefore, we developed artificial neural network (ANN) models based on maternal weight status and clinical data to predict reliable maternal blood concentrations of these biomarkers at the end of pregnancy. Adipokines (adiponectin, leptin, and resistin), and DNA, lipid and protein oxidative markers (8-oxo-2'-deoxyguanosine, malondialdehyde and carbonylated proteins, respectively) were assessed in the blood of normal-weight, overweight and obese women in the third trimester of pregnancy. A back-propagation algorithm was used to train ANN models with four input variables (age, pre-gestational body mass index (p-BMI), weight status and gestational age). ANN models were able to accurately predict all biomarkers, with regression coefficients greater than R² = 0.945. P-BMI was the most significant variable for estimating adiponectin and carbonylated protein concentrations (37%), while gestational age was the most relevant variable for predicting resistin and malondialdehyde (34%). Age, gestational age and p-BMI had the same significance for leptin values. Finally, for 8-oxo-2'-deoxyguanosine prediction, the most significant variable was age (37%). These models are relevant for improving clinical and nutritional interventions in prenatal care.
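A small back-propagation regression sketch in the spirit of the models above, using scikit-learn's MLPRegressor; the four input columns mirror the abstract's variables, but the data, network size, and target relationship are invented placeholders.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(3)
# Placeholder rows: age, pre-gestational BMI, weight status (0/1/2), gestational age
X = np.column_stack([rng.uniform(18, 45, 200), rng.uniform(18, 40, 200),
                     rng.integers(0, 3, 200), rng.uniform(28, 40, 200)])
y = 0.5 * X[:, 1] + 0.2 * X[:, 3] + rng.normal(0, 1, 200)  # placeholder biomarker level

# Scaling plus a small MLP trained by back-propagation
model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0))
model.fit(X, y)
print("R^2 on training data:", round(model.score(X, y), 3))
```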
Acuña, Gonzalo; Ramirez, Cristian; Curilem, Millaray
2014-01-01
The lack of sensors for some relevant state variables in fermentation processes can be addressed by developing appropriate software sensors. In this work, NARX-ANN, NARMAX-ANN, NARX-SVM and NARMAX-SVM models are compared when acting as software sensors of biomass concentration for a solid substrate cultivation (SSC) process. Results show that NARMAX-SVM outperforms the other models, with an SMAPE index below 9 for 20% amplitude noise. In addition, NARMAX models perform better than NARX models under the same noise conditions because of their better predictive capabilities, as they include prediction errors as inputs. In the case of perturbation of the initial conditions of the autoregressive variable, NARX models exhibited better convergence capabilities. This work also confirms that a difficult-to-measure variable, such as biomass concentration, can be estimated on-line from easy-to-measure variables such as CO₂ and O₂ using an adequate software sensor based on computational intelligence techniques.
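A minimal NARX-SVM style sketch, assuming synthetic data: past outputs and past exogenous inputs are stacked into a regressor matrix and an SVR predicts the next output, as a software sensor would. The lag orders, dynamics, and hyperparameters are placeholders.

```python
import numpy as np
from sklearn.svm import SVR

def narx_design(y, u, ny=2, nu=2):
    """Build a NARX regressor matrix: past outputs y[t-1..t-ny] and
    past exogenous inputs u[t-1..t-nu] predict y[t]."""
    start = max(ny, nu)
    rows = [np.r_[y[t - ny:t][::-1], u[t - nu:t][::-1]] for t in range(start, len(y))]
    return np.array(rows), y[start:]

rng = np.random.default_rng(4)
u = rng.normal(size=500)               # placeholder easy-to-measure signal (CO2/O2-like)
y = np.zeros(500)
for t in range(1, 500):                # placeholder biomass-like first-order dynamics
    y[t] = 0.9 * y[t - 1] + 0.3 * u[t - 1] + 0.01 * rng.normal()

X, target = narx_design(y, u)
sensor = SVR(kernel="rbf", C=10.0).fit(X, target)  # NARX-SVM style software sensor
print("one-step-ahead fit R^2:", round(sensor.score(X, target), 3))
```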
Context effects on second-language learning of tonal contrasts.
Chang, Charles B; Bowles, Anita R
2015-12-01
Studies of lexical tone learning generally focus on monosyllabic contexts, while reports of phonetic learning benefits associated with input variability are based largely on experienced learners. This study trained inexperienced learners on Mandarin tonal contrasts to test two hypotheses regarding the influence of context and variability on tone learning. The first hypothesis was that increased phonetic variability of tones in disyllabic contexts makes initial tone learning more challenging in disyllabic than monosyllabic words. The second hypothesis was that the learnability of a given tone varies across contexts due to differences in tonal variability. Results of a word learning experiment supported both hypotheses: tones were acquired less successfully in disyllables than in monosyllables, and the relative difficulty of disyllables was closely related to contextual tonal variability. These results indicate limited relevance of monosyllable-based data on Mandarin learning for the disyllabic majority of the Mandarin lexicon. Furthermore, in the short term, variability can diminish learning; its effects are not necessarily beneficial but dependent on acquisition stage and other learner characteristics. These findings thus highlight the importance of considering contextual variability and the interaction between variability and type of learner in the design, interpretation, and application of research on phonetic learning.
Valenza, Gaetano; Citi, Luca; Gentili, Claudio; Lanata, Antonio; Scilingo, Enzo Pasquale; Barbieri, Riccardo
2015-01-01
The analysis of cognitive and autonomic responses to emotionally relevant stimuli could provide a viable solution for the automatic recognition of different mood states, both in normal and pathological conditions. In this study, we present a methodological application describing a novel system based on wearable textile technology and instantaneous nonlinear heart rate variability assessment, able to characterize the autonomic status of bipolar patients by considering only electrocardiogram recordings. As a proof of concept, our study presents results obtained from eight bipolar patients during their normal daily activities and while being elicited according to a specific emotional protocol through the presentation of emotionally relevant pictures. Linear and nonlinear features were computed using a novel point-process-based nonlinear autoregressive integrative model and compared with traditional algorithmic methods. The estimated indices were used as the input of a multilayer perceptron to discriminate the depressive from the euthymic status. Results show that our system achieves much higher accuracy than the traditional techniques. Moreover, the inclusion of instantaneous higher-order spectra features significantly improves the accuracy in successfully recognizing depression from euthymia.
A neural circuit mechanism for regulating vocal variability during song learning in zebra finches.
Garst-Orozco, Jonathan; Babadi, Baktash; Ölveczky, Bence P
2014-12-15
Motor skill learning is characterized by improved performance and reduced motor variability. The neural mechanisms that couple skill level and variability, however, are not known. The zebra finch, a songbird, presents a unique opportunity to address this question because production of learned song and induction of vocal variability are instantiated in distinct circuits that converge on a motor cortex analogue controlling vocal output. To probe the interplay between learning and variability, we made intracellular recordings from neurons in this area, characterizing how their inputs from the functionally distinct pathways change throughout song development. We found that inputs that drive stereotyped song-patterns are strengthened and pruned, while inputs that induce variability remain unchanged. A simple network model showed that strengthening and pruning of action-specific connections reduces the sensitivity of motor control circuits to variable input and neural 'noise'. This identifies a simple and general mechanism for learning-related regulation of motor variability.
ENSO detection and use to inform the operation of large scale water systems
NASA Astrophysics Data System (ADS)
Pham, Vuong; Giuliani, Matteo; Castelletti, Andrea
2016-04-01
El Nino Southern Oscillation (ENSO) is a large-scale, coupled ocean-atmosphere phenomenon occurring in the tropical Pacific Ocean, and is considered one of the most significant factors causing hydro-climatic anomalies throughout the world. Water systems operations could benefit from a better understanding of this global phenomenon, which has the potential for enhancing the accuracy and lead time of long-range streamflow predictions. In turn, these are key to designing interannual water transfers in large-scale water systems to counteract the increasingly frequent extremes induced by a changing climate. Although the ENSO teleconnection is well defined in some locations, such as the Western USA and Australia, there is no consensus on how it can be detected and used in other river basins, particularly in Europe, Africa, and Asia. In this work, we contribute a general framework relying on Input Variable Selection techniques for detecting the ENSO teleconnection and using this information to improve water reservoir operations. The core of our procedure is the Iterative Input variable Selection (IIS) algorithm, which is employed to find the most relevant determinants of streamflow variability for deriving predictive models based on the selected inputs, as well as to find the most valuable information for conditioning operating decisions. Our framework is applied to the multipurpose operations of the Hoa Binh reservoir in the Red River basin (Vietnam), taking into account hydropower production, water supply for irrigation, and flood mitigation during the monsoon season. Numerical results show that our framework is able to quantify the relationship between the ENSO fluctuations and the Red River basin hydrology. Moreover, we demonstrate that this ENSO teleconnection represents valuable information for improving the operations of the Hoa Binh reservoir.
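The IIS algorithm itself involves model-based ranking and iteration on residuals; the sketch below is only a simplified forward-selection stand-in for the same idea, adding at each step the candidate input (e.g., an ENSO index or lagged flow) that most improves cross-validated skill. Data, model choice, and stopping tolerance are assumptions.

```python
import numpy as np
from sklearn.ensemble import ExtraTreesRegressor
from sklearn.model_selection import cross_val_score

def greedy_input_selection(X, y, max_inputs=5, tol=0.01):
    """Simplified forward selection in the spirit of IIS: at each iteration,
    add the candidate input that most improves cross-validated skill."""
    selected, best_score = [], -np.inf
    for _ in range(max_inputs):
        scores = {}
        for j in range(X.shape[1]):
            if j in selected:
                continue
            model = ExtraTreesRegressor(n_estimators=100, random_state=0)
            scores[j] = cross_val_score(model, X[:, selected + [j]], y, cv=3).mean()
        j_best = max(scores, key=scores.get)
        if scores[j_best] - best_score < tol:  # stop when the gain is negligible
            break
        selected.append(j_best)
        best_score = scores[j_best]
    return selected, best_score

rng = np.random.default_rng(5)
X = rng.normal(size=(300, 8))  # placeholder candidate drivers (ENSO indices, lagged flows, ...)
y = X[:, 2] - 0.5 * X[:, 5] + 0.1 * rng.normal(size=300)
print(greedy_input_selection(X, y))
```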
Bottom-up and Top-down Input Augment the Variability of Cortical Neurons
Nassi, Jonathan J.; Kreiman, Gabriel; Born, Richard T.
2016-01-01
Neurons in the cerebral cortex respond inconsistently to a repeated sensory stimulus, yet they underlie our stable sensory experiences. Although the nature of this variability is unknown, its ubiquity has encouraged the general view that each cell produces random spike patterns that noisily represent its response rate. In contrast, here we show that reversibly inactivating distant sources of either bottom-up or top-down input to cortical visual areas in the alert primate reduces both the spike train irregularity and the trial-to-trial variability of single neurons. A simple model in which a fraction of the pre-synaptic input is silenced can reproduce this reduction in variability, provided that there exist temporal correlations primarily within, but not between, excitatory and inhibitory input pools. A large component of the variability of cortical neurons may therefore arise from synchronous input produced by signals arriving from multiple sources. PMID:27427459
Troutman, Brent M.
1982-01-01
Errors in runoff prediction caused by input data errors are analyzed by treating precipitation-runoff models as regression (conditional expectation) models. Independent variables of the regression consist of precipitation and other input measurements; the dependent variable is runoff. In models using erroneous input data, prediction errors are inflated and estimates of expected storm runoff for given observed input variables are biased. This bias in expected runoff estimation results in biased parameter estimates if these parameter estimates are obtained by a least squares fit of predicted to observed runoff values. The problems of error inflation and bias are examined in detail for a simple linear regression of runoff on rainfall and for a nonlinear U.S. Geological Survey precipitation-runoff model. Some implications for flood frequency analysis are considered. A case study using a set of data from Turtle Creek near Dallas, Texas illustrates the problems of model input errors.
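The bias described above, in which a least-squares fit to erroneous input data yields biased estimates, can be demonstrated in a few lines as classical attenuation bias; the linear rainfall-runoff relation and all numbers below are illustrative, not the USGS model.

```python
import numpy as np

rng = np.random.default_rng(6)
n, beta = 10_000, 2.0
rain_true = rng.gamma(2.0, 10.0, size=n)              # true rainfall input
runoff = beta * rain_true + rng.normal(0, 5, size=n)  # simple linear runoff "model"

for noise_sd in (0.0, 5.0, 10.0):
    rain_obs = rain_true + rng.normal(0, noise_sd, size=n)  # erroneous input data
    # least-squares slope of runoff on the *observed* rainfall shrinks toward zero
    slope = np.polyfit(rain_obs, runoff, 1)[0]
    print(f"input error SD = {noise_sd:4.1f} -> estimated slope = {slope:.3f} (true {beta})")
```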
Quantum teleportation of nonclassical wave packets: An effective multimode theory
DOE Office of Scientific and Technical Information (OSTI.GOV)
Benichi, Hugo; Takeda, Shuntaro; Lee, Noriyuki
2011-07-15
We develop a simple and efficient theoretical model to understand the quantum properties of broadband continuous variable quantum teleportation. We show that, if stated properly, the problem of multimode teleportation can be simplified to teleportation of a single effective mode that describes the input state temporal characteristic. Using that model, we show how the finite bandwidth of squeezing and external noise in the classical channel affect the output teleported quantum field. We choose an approach that is especially relevant for the case of non-Gaussian nonclassical quantum states and we finally back-test our model with recent experimental results.
Measurements of acoustic impedance at the input to the occluded ear canal.
Larson, V D; Nelson, J A; Cooper, W A; Egolf, D P
1993-01-01
Multi-frequency (multi-component) acoustic impedance measurements may evolve into a sensitive technique for the remote detection of aural pathologies. Such data are also relevant to models used in hearing aid design and could be an asset to the hearing aid prescription and fitting process. This report describes the development and use of a broad-band procedure which acquires impedance data in 20 Hz intervals and describes a comparison of data collected at two sites by different investigators. Mean data were in excellent agreement, and an explanation for a single case of extreme normal variability is presented.
Low-dimensional chaos in magnetospheric activity from AE time series
NASA Technical Reports Server (NTRS)
Vassiliadis, D. V.; Sharma, A. S.; Eastman, T. E.; Papadopoulos, K.
1990-01-01
The magnetospheric response to the solar-wind input, as represented by the time-series measurements of the auroral electrojet (AE) index, has been examined using phase-space reconstruction techniques. The system was found to behave as a low-dimensional chaotic system with a fractal dimension of 3.6 and has Kolmogorov entropy less than 0.2/min. These indicate that the dynamics of the system can be adequately described by four independent variables, and that the corresponding intrinsic time scale is of the order of 5 min. The relevance of the results to magnetospheric modeling is discussed.
NASA Astrophysics Data System (ADS)
Peleg, Nadav; Blumensaat, Frank; Molnar, Peter; Fatichi, Simone; Burlando, Paolo
2017-03-01
The performance of urban drainage systems is typically examined using hydrological and hydrodynamic models where rainfall input is uniformly distributed, i.e., derived from a single or very few rain gauges. When models are fed with a single uniformly distributed rainfall realization, the response of the urban drainage system to the rainfall variability remains unexplored. The goal of this study was to understand how climate variability and spatial rainfall variability, jointly or individually considered, affect the response of a calibrated hydrodynamic urban drainage model. A stochastic spatially distributed rainfall generator (STREAP - Space-Time Realizations of Areal Precipitation) was used to simulate many realizations of rainfall for a 30-year period, accounting for both climate variability and spatial rainfall variability. The generated rainfall ensemble was used as input into a calibrated hydrodynamic model (EPA SWMM - the US EPA's Storm Water Management Model) to simulate surface runoff and channel flow in a small urban catchment in the city of Lucerne, Switzerland. The variability of peak flows in response to rainfall of different return periods was evaluated at three different locations in the urban drainage network and partitioned among its sources. The main contribution to the total flow variability was found to originate from the natural climate variability (on average over 74 %). In addition, the relative contribution of the spatial rainfall variability to the total flow variability was found to increase with longer return periods. This suggests that while the use of spatially distributed rainfall data can supply valuable information for sewer network design (typically based on rainfall with return periods from 5 to 15 years), it becomes even more relevant when conducting flood risk assessments for larger return periods. The results show the importance of using multiple distributed rainfall realizations in urban hydrology studies to capture the total flow variability in the response of the urban drainage systems to heavy rainfall events.
Variance-based interaction index measuring heteroscedasticity
NASA Astrophysics Data System (ADS)
Ito, Keiichi; Couckuyt, Ivo; Poles, Silvia; Dhaene, Tom
2016-06-01
This work is motivated by the need to deal with models with high-dimensional input spaces of real variables. One way to tackle high-dimensional problems is to identify interaction or non-interaction among input parameters. We propose a new variance-based sensitivity interaction index that can detect and quantify interactions among the input variables of mathematical functions and computer simulations. The computation is very similar to that of Sobol' first-order sensitivity indices. The proposed interaction index can quantify the relative importance of input variables in interaction. Furthermore, detection of non-interaction for screening can be done with as few as 4n + 2 function evaluations, where n is the number of input variables. Using the interaction indices based on heteroscedasticity, the original function may be decomposed into a set of lower-dimensional functions which may then be analyzed separately.
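For context, here is a compact sketch of the closely related Sobol' first-order indices using the standard Saltelli A/B/AB estimator; the authors' heteroscedasticity-based interaction index is not reproduced here, and the test function and sample size are arbitrary.

```python
import numpy as np

def sobol_first_order(f, d, n=2**14, rng=np.random.default_rng(7)):
    """First-order Sobol' indices via the Saltelli A/B/AB estimator:
    S_i = E[f(B) * (f(AB_i) - f(A))] / Var(f)."""
    A, B = rng.random((n, d)), rng.random((n, d))
    fA, fB = f(A), f(B)
    var = np.var(np.r_[fA, fB])
    s1 = np.empty(d)
    for i in range(d):
        ABi = A.copy()
        ABi[:, i] = B[:, i]         # A with column i swapped in from B
        s1[i] = np.mean(fB * (f(ABi) - fA)) / var
    return s1

# Ishigami-like test function with a strong x0-x2 interaction (illustrative)
f = lambda x: (np.sin(2 * np.pi * x[:, 0]) + 5 * np.sin(2 * np.pi * x[:, 1]) ** 2
               + 2 * x[:, 2] ** 4 * np.sin(2 * np.pi * x[:, 0]))
print(sobol_first_order(f, d=3).round(3))
```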
Input-variable sensitivity assessment for sediment transport relations
NASA Astrophysics Data System (ADS)
Fernández, Roberto; Garcia, Marcelo H.
2017-09-01
A methodology to assess input-variable sensitivity for sediment transport relations is presented. The Mean Value First Order Second Moment Method (MVFOSM) is applied to two bed load transport equations showing that it may be used to rank all input variables in terms of how their specific variance affects the overall variance of the sediment transport estimation. In sites where data are scarce or nonexistent, the results obtained may be used to (i) determine what variables would have the largest impact when estimating sediment loads in the absence of field observations and (ii) design field campaigns to specifically measure those variables for which a given transport equation is most sensitive; in sites where data are readily available, the results would allow quantifying the effect that the variance associated with each input variable has on the variance of the sediment transport estimates. An application of the method to two transport relations using data from a tropical mountain river in Costa Rica is implemented to exemplify the potential of the method in places where input data are limited. Results are compared against Monte Carlo simulations to assess the reliability of the method and validate its results. For both of the sediment transport relations used in the sensitivity analysis, accurate knowledge of sediment size was found to have more impact on sediment transport predictions than precise knowledge of other input variables such as channel slope and flow discharge.
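A generic MVFOSM sketch of the idea above: linearize the transport relation about the input means with finite-difference gradients and apportion output variance among the inputs. The power-law relation and all numbers are placeholders, not the bed load equations used in the paper.

```python
import numpy as np

def mvfosm(f, mean, std, h=1e-6):
    """Mean Value First Order Second Moment: linearize f about the input means
    and attribute output variance to each input via (gradient * std)^2."""
    mean = np.asarray(mean, float)
    grad = np.empty_like(mean)
    for i in range(mean.size):
        dx = np.zeros_like(mean)
        dx[i] = h * max(abs(mean[i]), 1.0)
        grad[i] = (f(mean + dx) - f(mean - dx)) / (2 * dx[i])  # central difference
    contrib = (grad * np.asarray(std)) ** 2
    return contrib / contrib.sum()  # fractional contribution to output variance

# Illustrative power-law transport relation q_s ~ a * S^1.5 * D^-0.5 * Q (not a published formula)
f = lambda x: 0.05 * x[0] ** 1.5 * x[1] ** -0.5 * x[2]
share = mvfosm(f, mean=[0.02, 0.01, 15.0], std=[0.004, 0.003, 2.0])
print(dict(zip(["slope", "grain size", "discharge"], share.round(3))))
```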
DOE Office of Scientific and Technical Information (OSTI.GOV)
McGraw, David; Hershey, Ronald L.
Methods were developed to quantify uncertainty and sensitivity for NETPATH inverse water-rock reaction models and to calculate dissolved inorganic carbon, carbon-14 groundwater travel times. The NETPATH models calculate upgradient groundwater mixing fractions that produce the downgradient target water chemistry along with amounts of mineral phases that are either precipitated or dissolved. Carbon-14 groundwater travel times are calculated based on the upgradient source-water fractions, carbonate mineral phase changes, and isotopic fractionation. Custom scripts and statistical code were developed for this study to facilitate modifying input parameters, running the NETPATH simulations, extracting relevant output, postprocessing the results, and producing graphs and summaries. The scripts read user-specified values for each constituent's coefficient of variation, distribution, sensitivity parameter, maximum dissolution or precipitation amounts, and number of Monte Carlo simulations. Monte Carlo methods for analysis of parametric uncertainty assign a distribution to each uncertain variable, sample from those distributions, and evaluate the ensemble output. The uncertainty in input affected the variability of outputs, namely source-water mixing, phase dissolution and precipitation amounts, and carbon-14 travel time. Although NETPATH may provide models that satisfy the constraints, it is up to the geochemist to determine whether the results are geochemically reasonable. Two example water-rock reaction models from previous geochemical reports were considered in this study. Sensitivity analysis was also conducted to evaluate the change in output caused by a small change in input, one constituent at a time. Results were standardized to allow for sensitivity comparisons across all inputs, which results in a representative value for each scenario. The approach yielded insight into the uncertainty in water-rock reactions and travel times. For example, there was little variation in source-water fraction between the deterministic and Monte Carlo approaches, and therefore, little variation in travel times between approaches. Sensitivity analysis proved very useful for identifying the most important input constraints (dissolved-ion concentrations), which can reveal the variables that have the most influence on source-water fractions and carbon-14 travel times. Once these variables are determined, more focused effort can be applied to determining the proper distribution for each constraint. In addition, Monte Carlo results for water-rock reaction modeling showed discrete and nonunique results. The NETPATH models provide the solutions that satisfy the constraints of upgradient and downgradient water chemistry. There can exist multiple, discrete solutions for any scenario, and these discrete solutions cause grouping of results. As a result, the variability in output may not easily be represented by a single distribution or a mean and variance, and care should be taken in the interpretation and reporting of results.
A Multifactor Approach to Research in Instructional Technology.
ERIC Educational Resources Information Center
Ragan, Tillman J.
In a field such as instructional design, explanations of educational outcomes must necessarily consider multiple input variables. To adequately understand the contribution made by the independent variables, it is helpful to have a visual conception of how the input variables interrelate. Two variable models are adequately represented by a two…
Characteristics and Innovations in American Education of Relevance for Indian Education.
ERIC Educational Resources Information Center
Stambler, Moses
American responses to educational problems faced around the globe can serve as models for developing nations. The following characteristics of American education with particular relevance for education in developing nations have been organized as inputs, structures and strategies, and outputs. Inputs to the system of American education, defined in…
Heat input and accumulation for ultrashort pulse processing with high average power
NASA Astrophysics Data System (ADS)
Finger, Johannes; Bornschlegel, Benedikt; Reininghaus, Martin; Dohrn, Andreas; Nießen, Markus; Gillner, Arnold; Poprawe, Reinhart
2018-05-01
Materials processing using ultrashort pulsed laser radiation with pulse durations <10 ps is known to enable very precise processing with negligible thermal load. However, even with picosecond and femtosecond laser radiation, not all of the absorbed energy is converted into ablation products; a distinct fraction remains as residual heat in the processed workpiece. For low average powers and power densities, this heat is usually not relevant for the processing results and dissipates into the workpiece. In contrast, when higher average powers and repetition rates are applied to increase throughput and upscale ultrashort pulse processing, this heat input becomes relevant and significantly affects the achieved processing results. In this paper, we outline the relevance of heat input for ultrashort pulse processing, starting with the heat input of a single ultrashort laser pulse. Heat accumulation during ultrashort pulse processing with high repetition rate is discussed, as well as heat accumulation for materials processing using pulse bursts. In addition, the relevance of heat accumulation with multiple scanning passes and processing with multiple laser spots is shown.
Heart Rate Variability Dynamics for the Prognosis of Cardiovascular Risk
Ramirez-Villegas, Juan F.; Lam-Espinosa, Eric; Ramirez-Moreno, David F.; Calvo-Echeverry, Paulo C.; Agredo-Rodriguez, Wilfredo
2011-01-01
Statistical, spectral, multi-resolution and non-linear methods were applied to heart rate variability (HRV) series linked with classification schemes for the prognosis of cardiovascular risk. A total of 90 HRV records were analyzed: 45 from healthy subjects and 45 from cardiovascular risk patients. A total of 52 features from all the analysis methods were evaluated using the standard two-sample Kolmogorov-Smirnov test (KS-test). The results of the statistical procedure provided input to multi-layer perceptron (MLP) neural networks, radial basis function (RBF) neural networks and support vector machines (SVM) for data classification. These schemes showed high performance with both training and test sets and many combinations of features (with a maximum accuracy of 96.67%). Additionally, breathing frequency emerged as a particularly relevant feature in the HRV analysis. PMID:21386966
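A condensed sketch of the pipeline above, assuming synthetic HRV features: features whose healthy/risk distributions differ under a two-sample KS test are kept and passed to an SVM. Group sizes and feature count follow the abstract; the data and significance threshold are placeholders.

```python
import numpy as np
from scipy.stats import ks_2samp
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(8)
n_healthy, n_risk, n_feat = 45, 45, 52  # sizes taken from the abstract
healthy = rng.normal(0.0, 1.0, (n_healthy, n_feat))   # placeholder HRV features
risk = rng.normal(0.3, 1.2, (n_risk, n_feat))         # placeholder, slightly shifted

# Keep features whose healthy/risk distributions differ (two-sample KS test)
keep = [j for j in range(n_feat)
        if ks_2samp(healthy[:, j], risk[:, j]).pvalue < 0.05]

X = np.vstack([healthy, risk])[:, keep]
y = np.r_[np.zeros(n_healthy), np.ones(n_risk)]
print(f"{len(keep)} features kept; SVM accuracy:",
      cross_val_score(SVC(kernel='rbf'), X, y, cv=5).mean().round(3))
```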
Linking 1D coastal ocean modelling to environmental management: an ensemble approach
NASA Astrophysics Data System (ADS)
Mussap, Giulia; Zavatarelli, Marco; Pinardi, Nadia
2017-12-01
The use of a one-dimensional interdisciplinary numerical model of the coastal ocean as a tool contributing to the formulation of ecosystem-based management (EBM) is explored. The focus is on the definition of an experimental design based on ensemble simulations, integrating variability linked to scenarios (characterised by changes in the system forcing) and to the concurrent variation of selected, and poorly constrained, model parameters. The modelling system used had previously been designed specifically for use in "data-rich" areas, so that horizontal dynamics can be resolved by a diagnostic approach and external inputs can be parameterised by properly calibrated nudging schemes. Ensembles determined by changes in the simulated environmental (physical and biogeochemical) dynamics, under joint forcing and parameterisation variations, highlight the uncertainties associated with the application of specific scenarios that are relevant to EBM, providing an assessment of the reliability of the predicted changes. The work has been carried out by implementing the coupled modelling system BFM-POM1D in an area of the Gulf of Trieste (northern Adriatic Sea), considered homogeneous from the point of view of hydrological properties, and forcing it by changing climatic (warming) and anthropogenic (reduction of the land-based nutrient input) pressure. Model parameters affected by considerable uncertainties (due to the lack of relevant observations) were varied jointly with the scenarios of change. The resulting large set of ensemble simulations provided a general estimation of the model uncertainties related to the joint variation of pressures and model parameters. The variability of the model results was summarised so as to convey the uncertainty and reliability of the predictions efficiently and comprehensibly to non-technical EBM planners and stakeholders, so that the model-based information can contribute effectively to EBM.
Intensive Input in Language Acquisition.
ERIC Educational Resources Information Center
Trimino, Andy; Ferguson, Nancy
This paper discusses the role of input as one of the universals in second language acquisition theory. Considerations include how language instructors can best organize and present input and when certain kinds of input are more important. A self-administered program evaluation exercise using relevant theoretical and methodological contributions…
Flight dynamics analysis and simulation of heavy lift airships, volume 4. User's guide: Appendices
NASA Technical Reports Server (NTRS)
Emmen, R. D.; Tischler, M. B.
1982-01-01
This table contains all of the input variables to the three programs. The variables are arranged according to the name list groups in which they appear in the data files. The program name, subroutine name, definition and, where appropriate, a default input value and any restrictions are listed with each variable. The default input values are user-supplied, not generated by the computer. These values remove a specific effect from the calculations, as explained in the table. The phrase "not used" indicates that a variable is not used in the calculations and is retained for identification purposes only. The engineering symbol, where it exists, is listed to assist the user in correlating these inputs with the discussion in the Technical Manual.
Production Function Geometry with "Knightian" Total Product
ERIC Educational Resources Information Center
Truett, Dale B.; Truett, Lila J.
2007-01-01
Authors of principles and price theory textbooks generally illustrate short-run production using a total product curve that displays first increasing and then diminishing marginal returns to employment of the variable input(s). Although it seems reasonable that a temporary range of increasing returns to variable inputs will likely occur as…
Nájera, S; Gil-Martínez, M; Zambrano, J A
2015-01-01
The aim of this paper is to establish and quantify different operational goals and control strategies in autothermal thermophilic aerobic digestion (ATAD). This technology appears as an alternative to conventional sludge digestion systems. During the batch-mode reaction, high temperatures promote sludge stabilization and pasteurization. The digester temperature is usually the only online, robust, measurable variable. The average temperature can be regulated by manipulating both the air injection and the sludge retention time. An improved performance of diverse biochemical variables can be achieved through proper manipulation of these inputs. However, a better quality of treated sludge usually implies major operating costs or a lower production rate. Thus, quality, production and cost indices are defined to quantify the outcomes of the treatment. Based on these, tradeoff control strategies are proposed and illustrated through some examples. This paper's results are relevant to guide plant operators, to design automatic control systems and to compare or evaluate the control performance on ATAD systems.
Assessing risk based on uncertain avalanche activity patterns
NASA Astrophysics Data System (ADS)
Zeidler, Antonia; Fromm, Reinhard
2015-04-01
Avalanches may affect critical infrastructure and may cause great economic losses. The planning horizon of infrastructures, e.g. hydropower generation facilities, reaches well into the future. Based on the results of previous studies on the effect of changing meteorological parameters (precipitation, temperature) on avalanche activity, we assume that the risk pattern will change in the future. Decision makers need to understand what the future might bring to best formulate their mitigation strategies. Therefore, we explore commercial risk software to calculate risk for the coming years, which might help in decision processes. The software @risk is known to many larger companies, and we therefore explore its capabilities to include avalanche risk simulations in order to guarantee comparability of different risks. In a first step, we develop a model for a hydropower generation facility that reflects the problem of changing avalanche activity patterns in the future by selecting relevant input parameters and assigning likely probability distributions. The uncertain input variables include the probability of avalanches affecting an object, the vulnerability of an object, the expected costs for repairing the object and the expected cost due to interruption. The crux is to find the distribution that best represents the input variables under changing meteorological conditions. Our focus is on including the uncertain probability of avalanches based on the analysis of past avalanche data and expert knowledge. In order to explore different likely outcomes, we base the analysis on three different climate scenarios (likely, worst case, baseline). For some variables, it is possible to fit a distribution to historical data, whereas in cases where the past dataset is insufficient or unavailable the software allows selection from over 30 different distribution types. The Monte Carlo simulation draws from the probability distributions of the uncertain variables, using all valid combinations of input values to simulate all possible outcomes. In our case the output is the expected risk (Euro/year) for each object (e.g. water intake) considered and for the entire hydropower generation system. The output is again a distribution to be interpreted by the decision makers; the final strategy depends on the needs and requirements of the end-user, which may be driven by personal preferences. In this presentation, we show how we used uncertain information on future avalanche activity in commercial risk software, thereby bringing the knowledge of natural hazard experts to decision makers.
Applications of information theory, genetic algorithms, and neural models to predict oil flow
NASA Astrophysics Data System (ADS)
Ludwig, Oswaldo; Nunes, Urbano; Araújo, Rui; Schnitman, Leizer; Lepikson, Herman Augusto
2009-07-01
This work introduces a new information-theoretic methodology for choosing variables and their time lags in a prediction setting, particularly when neural networks are used in non-linear modeling. The first contribution of this work is the Cross Entropy Function (XEF), proposed to select input variables and their lags in order to compose the input vector of black-box prediction models. The proposed XEF method is more appropriate than the usually applied Cross Correlation Function (XCF) when the relationship among the input and output signals comes from a non-linear dynamic system. The second contribution is a method that minimizes the Joint Conditional Entropy (JCE) between the input and output variables by means of a Genetic Algorithm (GA). The aim is to take into account the dependence among the input variables when selecting the most appropriate set of inputs for a prediction problem. In short, these methods can be used to assist the selection of input training data that carry the information necessary to predict the target data. The proposed methods are applied to a petroleum engineering problem: predicting oil production. Experimental results obtained with a real-world dataset are presented, demonstrating the feasibility and effectiveness of the method.
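The XEF itself is the authors' proposal and is not reproduced here; as an illustration of the same idea, the sketch below ranks candidate input lags by estimated mutual information with the target (via scikit-learn) instead of by cross-correlation, so that nonlinear dependence at a given lag is detected. The signals and lag range are invented.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_regression

def rank_lags(u, y, max_lag=10):
    """Score candidate input lags by mutual information with the target,
    an information-theoretic alternative to the cross-correlation function."""
    lags = range(1, max_lag + 1)
    # Column k holds u[t-k], aligned with target y[t] for t >= max_lag
    X = np.column_stack([u[max_lag - k:len(u) - k] for k in lags])
    mi = mutual_info_regression(X, y[max_lag:], random_state=0)
    return sorted(zip(lags, mi.round(3)), key=lambda t: -t[1])

rng = np.random.default_rng(9)
u = rng.normal(size=2000)  # placeholder input signal (e.g., an injection rate)
# Target depends nonlinearly on the input at lag 3, plus noise
y = np.r_[np.zeros(3), np.tanh(2 * u[:-3])] + 0.1 * rng.normal(size=2000)
print(rank_lags(u, y)[:3])  # lag 3 should rank highest
```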
NASA Astrophysics Data System (ADS)
Virgen, Matthew Miguel
Two significant goals in solar plant operation are lower cost and higher efficiency. To achieve those goals, a combined cycle gas turbine (CCGT) system, which uses the hot gas turbine exhaust to produce superheated steam for a bottoming Rankine cycle by way of a heat recovery steam generator (HRSG), is investigated in this work. Building on a previous gas turbine model created at the Combustion and Solar Energy Laboratory at SDSU, this work adds the HRSG and steam turbine models, which must handle significant changes in the mass flow and temperature of air exiting the gas turbine due to varying solar input. A wide range of cases was run to explore options for maximizing both power and efficiency from the proposed CSP CCGT plant. Variable guide vanes (VGVs) were found in the earlier model to be an effective tool in providing operational flexibility to address the variable nature of solar input. Combined cycle efficiencies in the range of 50% were found to result from this plant configuration. However, a combustor inlet temperature (CIT) limit leads to two distinct modes of operation, with a sharp drop in both plant efficiency and power occurring when the air flow through the receiver exceeds the CIT limit. This drawback can be partially addressed through strategic use of the VGVs. Since the system response is fully established for the relevant range of solar input and variable guide vane angles, the System Advisor Model (SAM) from NREL can be used to find the actual expected solar input over the course of the day, and to plan accordingly. While the SAM software is not yet equipped to model a Brayton cycle cavity receiver, appropriate approximations were made in order to produce a suitable heliostat field to fit this system. Since the SPHER uses carbon nano-particles as the solar absorbers, questions of particle longevity and how the particles might affect the flame behavior in the combustor were addressed using the chemical kinetics software ChemkinPro, by modeling the combustion characteristics both with and without the particles. This work is presented in the Appendix.
The Construct of Attention in Schizophrenia
Luck, Steven J.; Gold, James M.
2008-01-01
Schizophrenia is widely thought to involve deficits of attention. However, the term attention can be defined so broadly that impaired performance on virtually any task could be construed as evidence for a deficit in attention, and this has slowed cumulative progress in understanding attention deficits in schizophrenia. To address this problem, we divide the general concept of attention into two distinct constructs: input selection, the selection of task-relevant inputs for further processing; and rule selection, the selective activation of task-appropriate rules. These constructs are closely tied to working memory, because input selection mechanisms are used to control the transfer of information into working memory and because working memory stores the rules used by rule selection mechanisms. These constructs are also closely tied to executive function, because executive systems are used to guide input selection and because rule selection is itself a key aspect of executive function. Within the domain of input selection, it is important to distinguish between the control of selection—the processes that guide attention to task-relevant inputs—and the implementation of selection—the processes that enhance the processing of the relevant inputs and suppress the irrelevant inputs. Current evidence suggests that schizophrenia involves a significant impairment in the control of selection but little or no impairment in the implementation of selection. Consequently, the CNTRICS participants agreed by consensus that attentional control should be a priority target for measurement and treatment research in schizophrenia. PMID:18374901
Nonequilibrium air radiation (Nequair) program: User's manual
NASA Technical Reports Server (NTRS)
Park, C.
1985-01-01
A supplement to the data relating to the calculation of nonequilibrium radiation in flight regimes of aeroassisted orbital transfer vehicles contains the listings of the computer code NEQAIR (Nonequilibrium Air Radiation), its primary input data, and explanation of the user-supplied input variables. The user-supplied input variables are the thermodynamic variables of air at a given point, i.e., number densities of various chemical species, translational temperatures of heavy particles and electrons, and vibrational temperature. These thermodynamic variables do not necessarily have to be in thermodynamic equilibrium. The code calculates emission and absorption characteristics of air under these given conditions.
On framing the research question and choosing the appropriate research design.
Parfrey, Patrick S; Ravani, Pietro
2015-01-01
Clinical epidemiology is the science of human disease investigation with a focus on diagnosis, prognosis, and treatment. The generation of a reasonable question requires definition of patients, interventions, controls, and outcomes. The goal of research design is to minimize error, to ensure adequate samples, to measure input and output variables appropriately, to consider external and internal validities, to limit bias, and to address clinical as well as statistical relevance. The hierarchy of evidence for clinical decision-making places randomized controlled trials (RCT) or systematic review of good quality RCTs at the top of the evidence pyramid. Prognostic and etiologic questions are best addressed with longitudinal cohort studies.
On framing the research question and choosing the appropriate research design.
Parfrey, Patrick; Ravani, Pietro
2009-01-01
Clinical epidemiology is the science of human disease investigation with a focus on diagnosis, prognosis, and treatment. The generation of a reasonable question requires the definition of patients, interventions, controls, and outcomes. The goal of research design is to minimize error, ensure adequate samples, measure input and output variables appropriately, consider external and internal validities, limit bias, and address clinical as well as statistical relevance. The hierarchy of evidence for clinical decision making places randomized controlled trials (RCT) or systematic review of good quality RCTs at the top of the evidence pyramid. Prognostic and etiologic questions are best addressed with longitudinal cohort studies.
Analytic uncertainty and sensitivity analysis of models with input correlations
NASA Astrophysics Data System (ADS)
Zhu, Yueying; Wang, Qiuping A.; Li, Wei; Cai, Xu
2018-03-01
Probabilistic uncertainty analysis is a common means of evaluating mathematical models. In mathematical modeling, the uncertainty in input variables is specified through distribution laws. Its contribution to the uncertainty in the model response is usually analyzed by assuming that input variables are independent of each other. However, correlated parameters often arise in practical applications. In the present paper, an analytic method is built for the uncertainty and sensitivity analysis of models in the presence of input correlations. With the method, it is straightforward to identify the importance of the independence and correlations of input variables in determining the model response. This allows one to decide whether or not the input correlations should be considered in practice. Numerical examples demonstrate the effectiveness and validity of our analytic method in the analysis of general models. The method is also applied to the uncertainty and sensitivity analysis of a deterministic HIV model.
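A numerical counterpart to the point above, assuming a toy two-input model: correlated Gaussian inputs are generated via a Cholesky factor of the covariance matrix, and the output variance is compared across correlation levels. The model and numbers are placeholders for illustration only.

```python
import numpy as np

rng = np.random.default_rng(10)

def model(x):
    # Toy response with an interaction term between the two inputs
    return 2.0 * x[:, 0] + 1.0 * x[:, 1] + 0.5 * x[:, 0] * x[:, 1]

sd = np.array([1.0, 1.0])
for rho in (0.0, 0.5, 0.9):
    cov = np.array([[sd[0] ** 2, rho * sd[0] * sd[1]],
                    [rho * sd[0] * sd[1], sd[1] ** 2]])
    L = np.linalg.cholesky(cov)                       # impose the input correlation
    x = rng.standard_normal((200_000, 2)) @ L.T
    print(f"rho = {rho}: output variance = {model(x).var():.3f}")
```

Ignoring the correlation (rho = 0) visibly misstates the output variance here, which is the kind of error the analytic method above is designed to expose.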
Harmonize input selection for sediment transport prediction
NASA Astrophysics Data System (ADS)
Afan, Haitham Abdulmohsin; Keshtegar, Behrooz; Mohtar, Wan Hanna Melini Wan; El-Shafie, Ahmed
2017-09-01
In this paper, three modeling approaches, namely a Neural Network (NN), the Response Surface Method (RSM) and a response surface method based on a Global Harmony Search (GHS), are applied to predict the daily suspended sediment load time series. Generally, in the NN- and RSM-based approaches, the input variables for forecasting the suspended sediment load are selected manually, based on their maximum correlations. The RSM is improved to select the input variables using the error terms of the training data via the GHS, yielding the response surface method with global harmony search (RSM-GHS) modeling method. A second-order polynomial function with cross terms is applied in the proposed RSM-GHS to calibrate the suspended sediment load time series with three, four and five input variables. The linear, squared and cross terms of twenty input variables comprising antecedent values of suspended sediment load and water discharge are investigated to achieve the best predictions of the GHS-based RSM. The performances of the NN, RSM and proposed RSM-GHS, including both accuracy and simplicity, are compared through several comparative predicted and error statistics. The results illustrate that the proposed RSM-GHS is as uncomplicated as the RSM but performs better, with smaller errors and better correlation (R = 0.95, MAE = 18.09 (ton/day), RMSE = 25.16 (ton/day)) compared to the ANN (R = 0.91, MAE = 20.17 (ton/day), RMSE = 33.09 (ton/day)) and RSM (R = 0.91, MAE = 20.06 (ton/day), RMSE = 31.92 (ton/day)) for all types of input variables.
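A minimal response-surface sketch of the formulation above: a second-order polynomial with cross terms fitted by least squares to antecedent predictors. The harmony-search input selection is not reproduced; data, units, and the target relationship are invented placeholders.

```python
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(11)
n = 1000
# Placeholder antecedent predictors: lagged sediment loads and lagged discharge
X = np.column_stack([rng.gamma(2, 50, n), rng.gamma(2, 40, n), rng.gamma(3, 30, n)])
y = 0.4 * X[:, 0] + 0.002 * X[:, 1] * X[:, 2] + rng.normal(0, 20, n)

# Second-order polynomial with cross terms, as in the RSM formulation
rsm = make_pipeline(PolynomialFeatures(degree=2, include_bias=False), LinearRegression())
rsm.fit(X, y)
pred = rsm.predict(X)
mae = np.mean(np.abs(y - pred))
print(f"R = {np.corrcoef(y, pred)[0, 1]:.3f}, MAE = {mae:.1f} (placeholder ton/day)")
```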
Akimoto, Yuki; Yugi, Katsuyuki; Uda, Shinsuke; Kudo, Takamasa; Komori, Yasunori; Kubota, Hiroyuki; Kuroda, Shinya
2013-01-01
Cells use common signaling molecules for the selective control of downstream gene expression and cell-fate decisions. The relationship between signaling molecules and downstream gene expression and cellular phenotypes is a multiple-input and multiple-output (MIMO) system and is difficult to understand due to its complexity. For example, it has been reported that, in PC12 cells, different types of growth factors activate MAP kinases (MAPKs) including ERK, JNK, and p38, and CREB, for selective protein expression of immediate early genes (IEGs) such as c-FOS, c-JUN, EGR1, JUNB, and FOSB, leading to cell differentiation, proliferation and cell death; however, how multiple-inputs such as MAPKs and CREB regulate multiple-outputs such as expression of the IEGs and cellular phenotypes remains unclear. To address this issue, we employed a statistical method called partial least squares (PLS) regression, which involves a reduction of the dimensionality of the inputs and outputs into latent variables and a linear regression between these latent variables. We measured 1,200 data points for MAPKs and CREB as the inputs and 1,900 data points for IEGs and cellular phenotypes as the outputs, and we constructed the PLS model from these data. The PLS model highlighted the complexity of the MIMO system and growth factor-specific input-output relationships of cell-fate decisions in PC12 cells. Furthermore, to reduce the complexity, we applied a backward elimination method to the PLS regression, in which 60 input variables were reduced to 5 variables, including the phosphorylation of ERK at 10 min, CREB at 5 min and 60 min, AKT at 5 min and JNK at 30 min. The simple PLS model with only 5 input variables demonstrated a predictive ability comparable to that of the full PLS model. The 5 input variables effectively extracted the growth factor-specific simple relationships within the MIMO system in cell-fate decisions in PC12 cells.
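A minimal PLS regression sketch in the spirit of the analysis above, with synthetic stand-ins for the signaling inputs and gene-expression outputs; the importance score (summed absolute input weights across latent components) is a simple proxy, not the authors' backward-elimination procedure.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(12)
n = 120
X = rng.normal(size=(n, 60))  # placeholder: MAPK/CREB phosphorylation at time points
W = rng.normal(size=(60, 5)) * (rng.random((60, 5)) < 0.1)  # sparse true input-output map
Y = X @ W + 0.2 * rng.normal(size=(n, 5))  # placeholder: IEG expression outputs

# Reduce inputs and outputs to latent variables, then regress between them
pls = PLSRegression(n_components=3).fit(X, Y)
importance = np.abs(pls.x_weights_).sum(axis=1)  # crude per-input importance
print("top 5 input variables:", np.argsort(importance)[::-1][:5])
print("R^2:", round(pls.score(X, Y), 3))
```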
Impact of regulation on English and Welsh water-only companies: an input-distance function approach.
Molinos-Senante, María; Porcher, Simon; Maziotis, Alexandros
2017-07-01
The assessment of productivity change over time and its drivers is of great significance for water companies and regulators when setting urban water tariffs. This issue is even more relevant in privatized water industries, such as those in England and Wales, where the price-cap regulation is adopted. In this paper, an input-distance function is used to estimate productivity change and its determinants for the English and Welsh water-only companies (WoCs) over the period of 1993-2009. The impacts of several exogenous variables on companies' efficiencies are also explored. From a policy perspective, this study describes how regulators can use this type of modeling and results to calculate illustrative X factors for the WoCs. The results indicate that the 1994 and 1999 price reviews stimulated technical change, and there were small efficiency gains. However, the 2004 price review did not accelerate efficiency change or improve technical change. The results also indicated that during the whole period of study, the excessive scale of the WoCs contributed negatively to productivity growth. On average, WoCs reported relatively high efficiency levels, which suggests that they had already been investing in technologies that reduce long-term input requirements with respect to exogenous and service-quality variables. Finally, an average WoC needs to improve its productivity toward that of the best company by 1.58%. The methodology and results of this study are of great interest to both regulators and water-company managers for evaluating the effectiveness of regulation and making informed decisions.
Jackson, B Scott
2004-10-01
Many different types of integrate-and-fire models have been designed in order to explain how it is possible for a cortical neuron to integrate over many independent inputs while still producing highly variable spike trains. Within this context, the variability of spike trains has been almost exclusively measured using the coefficient of variation of interspike intervals. However, another important statistical property that has been found in cortical spike trains and is closely associated with their high firing variability is long-range dependence. We investigate the conditions, if any, under which such models produce output spike trains with both interspike-interval variability and long-range dependence similar to those that have previously been measured from actual cortical neurons. We first show analytically that a large class of high-variability integrate-and-fire models is incapable of producing such outputs based on the fact that their output spike trains are always mathematically equivalent to renewal processes. This class of models subsumes a majority of previously published models, including those that use excitation-inhibition balance, correlated inputs, partial reset, or nonlinear leakage to produce outputs with high variability. Next, we study integrate-and-fire models that have (non-Poissonian) renewal point process inputs instead of the Poisson point process inputs used in the preceding class of models. The confluence of our analytical and simulation results implies that the renewal-input model is capable of producing high variability and long-range dependence comparable to that seen in spike trains recorded from cortical neurons, but only if the interspike intervals of the inputs have infinite variance, a physiologically unrealistic condition. Finally, we suggest a new integrate-and-fire model that does not suffer any of the previously mentioned shortcomings. By analyzing simulation results for this model, we show that it is capable of producing output spike trains with interspike-interval variability and long-range dependence that match empirical data from cortical spike trains. This model is similar to the other models in this study, except that its inputs are fractional-Gaussian-noise-driven Poisson processes rather than renewal point processes. In addition to this model's success in producing realistic output spike trains, its inputs have long-range dependence similar to that found in most subcortical neurons in sensory pathways, including the inputs to cortex. Analysis of output spike trains from simulations of this model also shows that a tight balance between the amounts of excitation and inhibition at the inputs to cortical neurons is not necessary for high interspike-interval variability at their outputs. Furthermore, in our analysis of this model, we show that the superposition of many fractional-Gaussian-noise-driven Poisson processes does not approximate a Poisson process, which challenges the common assumption that the total effect of a large number of inputs on a neuron is well represented by a Poisson process.
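To illustrate the renewal-input case discussed above, here is a toy sketch of a perfect (non-leaky) integrate-and-fire neuron driven by a single renewal input stream; all rates and thresholds are arbitrary and the model is a simplification, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(13)

def output_isi_cv(input_isis, n_to_threshold=20):
    """Perfect integrate-and-fire with a renewal point-process input:
    the neuron fires after every `n_to_threshold` input events, so each
    output interspike interval is the sum of that many input intervals."""
    n_out = len(input_isis) // n_to_threshold
    out_isi = input_isis[:n_out * n_to_threshold].reshape(n_out, -1).sum(axis=1)
    return out_isi.std() / out_isi.mean()  # coefficient of variation

n, rate = 200_000, 100.0
poisson_isis = rng.exponential(1 / rate, n)        # CV = 1 (Poisson input)
bursty_isis = rng.gamma(0.1, 1 / (rate * 0.1), n)  # CV ~ 3.2, same mean rate
print(f"output CV, Poisson input: {output_isi_cv(poisson_isis):.3f}")
print(f"output CV, bursty renewal input: {output_isi_cv(bursty_isis):.3f}")
```

Because each output interval is a sum of independent input intervals, the output CV is roughly the input CV divided by the square root of the threshold count, and the output remains a renewal process, consistent with the abstract's point that long-range dependence cannot arise this way unless the input intervals have infinite variance.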
OpenDrift - an open source framework for ocean trajectory modeling
NASA Astrophysics Data System (ADS)
Dagestad, Knut-Frode; Breivik, Øyvind; Ådlandsvik, Bjørn
2016-04-01
We will present a new, open source tool for modeling the trajectories and fate of particles or substances (Lagrangian Elements) drifting in the ocean, or even in the atmosphere. The software is named OpenDrift, and has been developed at Norwegian Meteorological Institute in cooperation with Institute of Marine Research. OpenDrift is a generic framework written in Python, and is openly available at https://github.com/knutfrode/opendrift/. The framework is modular with respect to three aspects: (1) obtaining input data, (2) the transport/morphological processes, and (3) exporting of results to file. Modularity is achieved through well defined interfaces between components, and use of a consistent vocabulary (CF conventions) for naming of variables. Modular input implies that it is not necessary to preprocess input data (e.g. currents, wind and waves from Eulerian models) to a particular file format. Instead "reader modules" can be written/used to obtain data directly from any original source, including files or through web based protocols (e.g. OPeNDAP/Thredds). Modularity of processes implies that a model developer may focus on the geophysical processes relevant for the application of interest, without needing to consider technical tasks such as reading, reprojecting, and colocating input data, rotation and scaling of vectors and model output. We will show a few example applications of using OpenDrift for predicting drifters, oil spills, and search and rescue objects.
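As an illustration of the modular design just described, here is a usage sketch assembled from the OpenDrift project's documented examples; the module paths, reader class, and Thredds URL are assumptions from the documentation at the time of writing and may differ across versions.

```python
from datetime import timedelta

from opendrift.models.oceandrift import OceanDrift
from opendrift.readers import reader_netCDF_CF_generic

o = OceanDrift(loglevel=20)

# A "reader" module obtains forcing data directly from an OPeNDAP/Thredds URL,
# relying on CF-convention variable names instead of preprocessed files
reader = reader_netCDF_CF_generic.Reader(
    'https://thredds.met.no/thredds/dodsC/sea/norkyst800m/1h/aggregate_be')
o.add_reader(reader)

# Seed Lagrangian elements and integrate their trajectories forward in time
o.seed_elements(lon=4.5, lat=60.0, radius=1000, number=500, time=reader.start_time)
o.run(duration=timedelta(hours=48), time_step=900, outfile='drift.nc')
o.plot()
```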
Estimating pesticide runoff in small streams.
Schriever, Carola A; von der Ohe, Peter C; Liess, Matthias
2007-08-01
Surface runoff is one of the most important pathways for pesticides to enter surface waters. Mathematical models are employed to characterize its spatio-temporal variability within landscapes, but they must be simple owing to the limited availability and low resolution of data at this scale. This study aimed to validate a simplified spatially-explicit model that is developed for the regional scale to calculate the runoff potential (RP). The RP is a generic indicator of the magnitude of pesticide inputs into streams via runoff. The underlying runoff model considers key environmental factors affecting runoff (precipitation, topography, land use, and soil characteristics), but predicts losses of a generic substance instead of any one pesticide. We predicted and evaluated RP for 20 small streams. RP input data were extracted from governmental databases. Pesticide measurements from a triennial study were used for validation. Measured pesticide concentrations were standardized by the applied mass per catchment and the water solubility of the relevant compounds. The maximum standardized concentration per site and year (runoff loss, R(Loss)) provided a generalized measure of observed pesticide inputs into the streams. Average RP explained 75% (p<0.001) of the variance in R(Loss). Our results imply that the generic indicator can give an adequate estimate of runoff inputs into small streams, wherever data of similar resolution are available. Therefore, we suggest RP for a first quick and cost-effective location of potential runoff hot spots at the landscape level.
Laminar Organization of Attentional Modulation in Macaque Visual Area V4.
Nandy, Anirvan S; Nassi, Jonathan J; Reynolds, John H
2017-01-04
Attention is critical to perception, serving to select behaviorally relevant information for privileged processing. To understand the neural mechanisms of attention, we must discern how attentional modulation varies by cell type and across cortical layers. Here, we test whether attention acts non-selectively across cortical layers or whether it engages the laminar circuit in specific and selective ways. We find layer- and cell-class-specific differences in several different forms of attentional modulation in area V4. Broad-spiking neurons in the superficial layers exhibit attention-mediated increases in firing rate and decreases in variability. Spike count correlations are highest in the input layer and attention serves to reduce these correlations. Superficial and input layer neurons exhibit attention-dependent decreases in low-frequency (<10 Hz) coherence, but deep layer neurons exhibit increases in coherence in the beta and gamma frequency ranges. Our study provides a template for attention-mediated laminar information processing that might be applicable across sensory modalities. Copyright © 2017 Elsevier Inc. All rights reserved.
Massot, Corentin; Chacron, Maurice J.
2011-01-01
Understanding how sensory neurons transmit information about relevant stimuli remains a major goal in neuroscience. Of particular relevance are the roles of neural variability and spike timing in neural coding. Peripheral vestibular afferents display differential variability that is correlated with the importance of spike timing; regular afferents display little variability and use a timing code to transmit information about sensory input. Irregular afferents, conversely, display greater variability and instead use a rate code. We studied how central neurons within the vestibular nuclei integrate information from both afferent classes by recording from a group of neurons termed vestibular only (VO) that are known to make contributions to vestibulospinal reflexes and project to higher-order centers. We found that, although individual central neurons had sensitivities that were greater than or equal to those of individual afferents, they transmitted less information. In addition, their velocity detection thresholds were significantly greater than those of individual afferents. This is because VO neurons display greater variability, which is detrimental to information transmission and signal detection. Combining activities from multiple VO neurons increased information transmission. However, the information rates were still much lower than those of equivalent afferent populations. Furthermore, combining responses from multiple VO neurons led to lower velocity detection threshold values approaching those measured from behavior (∼2.5 vs. 0.5–1°/s). Our results suggest that the detailed time course of vestibular stimuli encoded by afferents is not transmitted by VO neurons. Instead, they suggest that higher vestibular pathways must integrate information from central vestibular neuron populations to give rise to behaviorally observed detection thresholds. PMID:21307329
Influential input classification in probabilistic multimedia models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Maddalena, Randy L.; McKone, Thomas E.; Hsieh, Dennis P.H.
1999-05-01
Monte Carlo analysis is a statistical simulation method that is often used to assess and quantify the outcome variance in complex environmental fate and effects models. Total outcome variance of these models is a function of (1) the uncertainty and/or variability associated with each model input and (2) the sensitivity of the model outcome to changes in the inputs. To propagate variance through a model using Monte Carlo techniques, each variable must be assigned a probability distribution. The validity of these distributions directly influences the accuracy and reliability of the model outcome. To efficiently allocate resources for constructing distributions, one should first identify the most influential set of variables in the model. Although existing sensitivity and uncertainty analysis methods can provide a relative ranking of the importance of model inputs, they fail to identify the minimum set of stochastic inputs necessary to sufficiently characterize the outcome variance. In this paper, we describe and demonstrate a novel sensitivity/uncertainty analysis method for assessing the importance of each variable in a multimedia environmental fate model. Our analyses show that for a given scenario, a relatively small number of input variables influence the central tendency of the model and an even smaller set determines the shape of the outcome distribution. For each input, the level of influence depends on the scenario under consideration. This information is useful for developing site-specific models and improving our understanding of the processes that have the greatest influence on the variance in outcomes from multimedia models.
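A common way to obtain the kind of influence ranking discussed above is to correlate sampled inputs with the Monte Carlo outcome. The sketch below uses Spearman rank correlations on a made-up stand-in for a fate model; the input names, distributions, and model function are all hypothetical, and this is not the authors' exact classification method.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n = 5000

# Hypothetical fate-model inputs, each assigned a probability distribution
inputs = {
    'partition_coeff': rng.lognormal(mean=2.0, sigma=0.8, size=n),
    'degradation_rate': rng.lognormal(mean=-3.0, sigma=0.5, size=n),
    'soil_depth_m': rng.uniform(0.05, 0.30, size=n),
}

def fate_model(kd, k_deg, depth):
    # Stand-in for a multimedia fate calculation returning an exposure metric
    return kd / (k_deg * depth * 1e3 + kd * 1e-3)

y = fate_model(inputs['partition_coeff'], inputs['degradation_rate'], inputs['soil_depth_m'])

# Rank inputs by the magnitude of their rank correlation with the outcome
for name, x in inputs.items():
    rho, _ = stats.spearmanr(x, y)
    print(f'{name}: rho = {rho:+.2f}')
```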
Methodological development for selection of significant predictors explaining fatal road accidents.
Dadashova, Bahar; Arenas-Ramírez, Blanca; Mira-McWilliams, José; Aparicio-Izquierdo, Francisco
2016-05-01
Identification of the most relevant factors for explaining road accident occurrence is an important issue in road safety research, particularly for future decision-making processes in transport policy. However, model selection for this particular purpose remains an open research question. In this paper we propose a methodological development for model selection that addresses both explanatory variable selection and adequate model selection. A variable selection procedure, the TIM (two-input model) method, is carried out by combining neural network design and statistical approaches. The error structure of the fitted model is assumed to follow an autoregressive process. All models are estimated using the Markov chain Monte Carlo method, with the model parameters assigned non-informative prior distributions. The final model is built using the results of the variable selection. The proposed methodology was applied to the number of fatal road accidents in Spain during 2000-2011. This indicator experienced the largest reduction internationally during those years, making it an interesting time series from a road safety policy perspective. Hence the identification of the variables that affected this reduction is of particular interest for future decision making. The results of the variable selection process show that the selected variables are the main targets of road safety policy measures. Published by Elsevier Ltd.
Peer Educators and Close Friends as Predictors of Male College Students' Willingness to Prevent Rape
ERIC Educational Resources Information Center
Stein, Jerrold L.
2007-01-01
Astin's (1977, 1991, 1993) input-environment-outcome (I-E-O) model provided a conceptual framework for this study which measured 156 male college students' willingness to prevent rape (outcome variable). Predictor variables included personal attitudes (input variable), perceptions of close friends' attitudes toward rape and rape prevention…
The Effects of a Change in the Variability of Irrigation Water
NASA Astrophysics Data System (ADS)
Lyon, Kenneth S.
1983-10-01
This paper examines the short-run effects upon several variables of an increase in the variability of an input. The measure of an increase in the variability is the "mean preserving spread" suggested by Rothschild and Stiglitz (1970). The variables examined are real income (utility), expected profits, expected output, the quantity used of the controllable input, and the shadow price of the stochastic input. Four striking features of the results follow: (1) The concepts that have been useful in summarizing deterministic comparative static results are nearly absent when an input is stochastic. (2) Most of the signs of the partial derivatives depend upon more than concavity of the utility and production functions. (3) If the utility function is not "too" risk averse, then the risk-neutral results hold for the risk-aversion case. (4) If the production function is Cobb-Douglas, then definite results are achieved if the utility function is linear or if the "degree of risk-aversion" is "small."
Optimal allocation of testing resources for statistical simulations
NASA Astrophysics Data System (ADS)
Quintana, Carolina; Millwater, Harry R.; Singh, Gulshan; Golden, Patrick
2015-07-01
Statistical estimates from simulation involve uncertainty caused by the variability in the input random variables due to limited data. Allocating resources to obtain more experimental data on the input variables, to better characterize their probability distributions, can reduce the variance of statistical estimates. The proposed methodology determines the optimal number of additional experiments required to minimize the variance of the output moments given single or multiple constraints. The method uses the multivariate t-distribution and the Wishart distribution to generate realizations of the population mean and covariance of the input variables, respectively, given an amount of available data. The method handles both independent and correlated random variables. A particle swarm method is used for the optimization. The optimal number of additional experiments per variable depends on the number and variance of the initial data, the influence of the variable on the output function, and the cost of each additional experiment. The methodology is demonstrated using a fretting fatigue example.
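Generating realizations of a population mean and covariance from limited data, as described above, can be sketched as follows. The paper samples via the multivariate t and Wishart distributions; this snippet uses the equivalent hierarchical normal/inverse-Wishart form with SciPy, and the 12-run, 2-variable data set is invented for illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
data = rng.normal(size=(12, 2))        # hypothetical initial experiments: 12 runs, 2 inputs
n, d = data.shape
xbar = data.mean(axis=0)
S = (data - xbar).T @ (data - xbar)    # scatter matrix of the available data

# One realization of the population covariance, then of the mean given it;
# repeating this many times propagates "limited data" uncertainty downstream
Sigma = stats.invwishart.rvs(df=n - 1, scale=S, random_state=rng)
mu = stats.multivariate_normal.rvs(mean=xbar, cov=Sigma / n, random_state=rng)
print(mu, Sigma, sep='\n')
```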
Input variable selection and calibration data selection for storm water quality regression models.
Sun, Siao; Bertrand-Krajewski, Jean-Luc
2013-01-01
Storm water quality models are useful tools in storm water management. Interest has been growing in analyzing existing data to develop models for urban storm water quality evaluation. It is important to select appropriate model inputs when many candidate explanatory variables are available. Model calibration and verification are essential steps in any storm water quality modeling. This study investigates input variable selection and calibration data selection in storm water quality regression models. The two selection problems are mutually dependent, and a procedure is developed to address them in sequence. The procedure first selects model input variables using a cross-validation method. An appropriate number of variables is identified as model inputs to ensure that a model is neither overfitted nor underfitted. Based on the input selection results, calibration data selection is studied. The uncertainty of model performance due to calibration data selection is investigated with a random selection method. A cluster-based approach is applied to enhance model calibration, based on the principle of selecting representative data for calibration. The comparison between results from the cluster selection method and random selection shows that the former can significantly improve the performance of calibrated models. It is found that the information content of the calibration data is important, in addition to the size of the calibration data set.
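Cross-validated input selection of the kind described above is often implemented as a greedy forward search; the sketch below does this with scikit-learn on synthetic data. The data, the linear model, and the stopping rule are illustrative assumptions, not the authors' exact procedure.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X = rng.normal(size=(80, 6))          # hypothetical candidate explanatory variables
y = 2.0 * X[:, 0] - 1.5 * X[:, 3] + rng.normal(scale=0.5, size=80)

selected, remaining, best = [], list(range(X.shape[1])), -np.inf
while remaining:
    # Score each candidate addition by 5-fold cross-validated R^2
    scores = {j: cross_val_score(LinearRegression(), X[:, selected + [j]], y, cv=5).mean()
              for j in remaining}
    j_best = max(scores, key=scores.get)
    if scores[j_best] <= best:        # stop when adding variables no longer helps
        break                         # (guards against overfitting)
    best = scores[j_best]
    selected.append(j_best)
    remaining.remove(j_best)

print('selected inputs:', selected, 'CV R^2:', round(best, 3))
```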
Influence of the UV Environment on the Synthesis of Prebiotic Molecules.
Ranjan, Sukrit; Sasselov, Dimitar D
2016-01-01
Ultraviolet radiation is common to most planetary environments and could play a key role in the chemistry of molecules relevant to abiogenesis (prebiotic chemistry). In this work, we explore the impact of UV light on prebiotic chemistry that might occur in liquid water on the surface of a planet with an atmosphere. We consider effects including atmospheric absorption, attenuation by water, and stellar variability to constrain the UV input as a function of wavelength. We conclude that the UV environment would be characterized by broadband input, with wavelengths below 204 nm and 168 nm shielded out by atmospheric CO2 and water, respectively. We compare this broadband prebiotic UV input to the narrowband UV sources (e.g., mercury lamps) often used in laboratory studies of prebiotic chemistry and explore the implications for the conclusions drawn from these experiments. We consider as case studies the ribonucleotide synthesis pathway of Powner et al. (2009) and the sugar synthesis pathway of Ritson and Sutherland (2012). Irradiation by narrowband UV light from a mercury lamp formed an integral component of these studies; we quantitatively explore the impact of more realistic UV input on the conclusions that can be drawn from these experiments. Finally, we explore the constraints solar UV input places on the buildup of prebiotically important feedstock gases such as CH4 and HCN. Our results demonstrate the importance of characterizing the wavelength dependence (action spectra) of prebiotic synthesis pathways to determine how pathways derived under laboratory irradiation conditions will function under planetary prebiotic conditions.
Artificial neural network model for ozone concentration estimation and Monte Carlo analysis
NASA Astrophysics Data System (ADS)
Gao, Meng; Yin, Liting; Ning, Jicai
2018-07-01
Air pollution in the urban atmosphere directly affects public health; it is therefore essential to predict air pollutant concentrations. Air quality is a complex function of emissions, meteorology and topography, and artificial neural networks (ANNs) provide a sound framework for relating these variables. In this study, we investigated the feasibility of using an ANN model with meteorological parameters as input variables to predict ozone concentration in the urban area of Jinan, a metropolis in Northern China. We first found that the architecture of the network of neurons had little effect on the predictive capability of the ANN model. A parsimonious ANN model with 6 routinely monitored meteorological parameters and one temporal covariate (the category of day, i.e. working day, legal holiday or regular weekend) as input variables was identified, where the 7 input variables were selected following a forward selection procedure. Compared with the benchmarking ANN model with 9 meteorological and photochemical parameters as input variables, the predictive capability of the parsimonious ANN model was acceptable. Its predictive capability was also verified in terms of the warning success ratio during pollution episodes. Finally, uncertainty and sensitivity analyses were performed based on Monte Carlo simulations (MCS). It was concluded that the ANN could properly predict the ambient ozone level. Maximum temperature, atmospheric pressure, sunshine duration, and maximum wind speed were identified as the predominant input variables significantly influencing the prediction of ambient ozone concentrations.
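A parsimonious feed-forward network of the sort described can be prototyped quickly with scikit-learn; the snippet below trains a small MLP on synthetic stand-ins for the seven meteorological/temporal inputs. All data, the single 8-unit hidden layer, and the CV setup are assumptions for illustration.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(7)
n = 365
X = rng.normal(size=(n, 7))    # stand-ins for temperature, pressure, sunshine, wind, etc.
ozone = 40 + 12 * X[:, 0] - 5 * X[:, 3] + rng.normal(scale=5, size=n)

# Scale inputs, then fit a small single-hidden-layer network
model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0))
print('CV R^2:', cross_val_score(model, X, ozone, cv=5).mean().round(3))
```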
Bonnet, V; Dumas, R; Cappozzo, A; Joukov, V; Daune, G; Kulić, D; Fraisse, P; Andary, S; Venture, G
2017-09-06
This paper presents a method for real-time estimation of the kinematics and kinetics of a human body performing a sagittally symmetric motor task, designed to minimize the impact of stereophotogrammetric soft tissue artefacts (STA). The method is based on a two-dimensional mechanical model of the locomotor apparatus, whose state variables (joint angles, velocities and accelerations, and the segment lengths and inertial parameters) are estimated by a constrained extended Kalman filter (CEKF) that fuses stereophotogrammetric and dynamometric measurement data. Filter gains are made to saturate in order to obtain plausible state variables, and the measurement covariance matrix of the filter accounts for the expected maximal STA amplitudes. We hypothesised that the ensemble of constraints and redundant input information would allow the method to attenuate STA propagation to the end results. The method was evaluated in ten human subjects performing a squat exercise. The CEKF-estimated and measured skin marker trajectories exhibited an RMS difference lower than 4 mm, thus within the range of STAs. The RMS differences between the measured ground reaction force and moment and those estimated using the proposed method (9 N and 10 N·m) were much lower than those obtained using a classical inverse dynamics approach (22 N and 30 N·m). From the latter results it may be inferred that the presented method significantly improves the accuracy with which kinematic variables and their relevant time derivatives, model parameters and, therefore, intersegmental moments are estimated. Copyright © 2016 Elsevier Ltd. All rights reserved.
Somatosensory Anticipatory Alpha Activity Increases to Suppress Distracting Input
ERIC Educational Resources Information Center
Haegens, Saskia; Luther, Lisa; Jensen, Ole
2012-01-01
Effective processing of sensory input in daily life requires attentional selection and amplification of relevant input and, just as importantly, attenuation of irrelevant information. It has been proposed that top-down modulation of oscillatory alpha band activity (8-14 Hz) serves to allocate resources to various regions, depending on task…
Using transfer functions to quantify El Niño Southern Oscillation dynamics in data and models.
MacMartin, Douglas G; Tziperman, Eli
2014-09-08
Transfer function tools commonly used in engineering control analysis can be used to better understand the dynamics of El Niño Southern Oscillation (ENSO), compare data with models and identify systematic model errors. The transfer function describes the frequency-dependent input-output relationship between any pair of causally related variables, and can be estimated from time series. This can be used first to assess whether the underlying relationship is or is not frequency dependent, and if so, to diagnose the underlying differential equations that relate the variables, and hence describe the dynamics of individual subsystem processes relevant to ENSO. Estimating process parameters allows the identification of compensating model errors that may lead to a seemingly realistic simulation in spite of incorrect model physics. This tool is applied here to the TAO array ocean data, the GFDL-CM2.1 and CCSM4 general circulation models, and to the Cane-Zebiak ENSO model. The delayed oscillator description is used to motivate a few relevant processes involved in the dynamics, although any other ENSO mechanism could be used instead. We identify several differences in the processes between the models and data that may be useful for model improvement. The transfer function methodology is also useful in understanding the dynamics and evaluating models of other climate processes.
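Estimating a transfer function from a pair of time series, as described above, is commonly done with cross- and auto-spectral densities, H(f) = Pxy(f)/Pxx(f). The sketch below applies this to synthetic monthly series; the low-pass filter standing in for the ENSO subsystem and all parameters are invented for illustration.

```python
import numpy as np
from scipy import signal

rng = np.random.default_rng(3)
fs = 12.0                        # samples per year (monthly data)
x = rng.normal(size=2048)        # stand-in input, e.g. a wind-stress anomaly index
# Stand-in output: a lagged, low-pass response to x plus noise
b, a = signal.butter(2, 0.2)
y = signal.lfilter(b, a, x) + 0.1 * rng.normal(size=x.size)

f, Pxy = signal.csd(x, y, fs=fs, nperseg=256)    # cross-spectral density
_, Pxx = signal.welch(x, fs=fs, nperseg=256)     # input auto-spectrum
H = Pxy / Pxx                                    # empirical transfer function estimate
gain, phase = np.abs(H), np.angle(H)             # frequency-dependent gain and phase lag
```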
Correction of I/Q channel errors without calibration
Doerry, Armin W.; Tise, Bertice L.
2002-01-01
A method of providing a balanced demodulator output for a signal, such as a Doppler radar signal having an analog pulsed input, includes adding a variable phase shift as a function of time to the input signal, applying the phase-shifted input signal to a demodulator, and generating a baseband signal from the input signal. The baseband signal is low-pass filtered and converted to a digital output signal. By removing the variable phase shift from the digital output signal, a complex data output is formed that is representative of the output of a balanced demodulator.
Hydrological characterization of Guadalquivir River Basin for the period 1980-2010 using VIC model
NASA Astrophysics Data System (ADS)
García-Valdecasas-Ojeda, Matilde; de Franciscis, Sebastiano; Gámiz-Fortis, Sonia Raquel; Castro-Díez, Yolanda; Esteban-Parra, María Jesús
2017-04-01
This study analyzes changes in soil moisture and actual evapotranspiration (ETR) over the last 30 years in the Guadalquivir River Basin, located in the south of the Iberian Peninsula. Soil moisture content is related to the different components of actual evaporation; it is a relevant factor when analyzing the intensity of droughts and heat waves and, particularly, for studying the impacts of climate change. The soil moisture and actual evapotranspiration data consist of simulations obtained using the Variable Infiltration Capacity (VIC) hydrological model. This is a large-scale hydrologic model that allows the estimation of different variables in the hydrological system of a basin. The land surface is modeled as a grid of large, uniform cells with sub-grid heterogeneity (e.g. land cover), while the water influx is local, depending only on the interaction between the grid cell and the local atmospheric environment. Observational data of temperature and precipitation from the Spain02 dataset have been used as input variables for the VIC model. Additionally, estimates of actual evapotranspiration and soil moisture are also analyzed using temperature, precipitation, wind, humidity and radiation as input variables for VIC, with these variables obtained from a dynamical downscaling of ERA-Interim data with the Weather Research and Forecasting (WRF) model. The simulations have a spatial resolution of about 9 km and the analysis is done on a seasonal time-scale. Preliminary results show that ETR presents very low values for autumn in the WRF-driven simulations compared with the VIC simulations. Significant positive trends are found only during autumn in the western part of the basin for the ETR obtained with the VIC model, whereas no significant trends are found for the WRF-driven ETR simulations. Keywords: Soil moisture, Actual evapotranspiration, Guadalquivir Basin, trends, VIC, WRF. Acknowledgements: This work has been financed by the projects P11-RNM-7941 (Junta de Andalucía-Spain) and CGL2013-48539-R (MINECO-Spain, FEDER).
Delpierre, Nicolas; Berveiller, Daniel; Granda, Elena; Dufrêne, Eric
2016-04-01
Although the analysis of flux data has increased our understanding of the interannual variability of carbon inputs into forest ecosystems, we still know little about the determinants of wood growth. Here, we aimed to identify which drivers control the interannual variability of wood growth in a mesic temperate deciduous forest. We analysed a 9-yr time series of carbon fluxes and aboveground wood growth (AWG), reconstructed at a weekly time-scale through the combination of dendrometer and wood density data. Carbon inputs and AWG anomalies appeared to be uncorrelated from the seasonal to interannual scales. More than 90% of the interannual variability of AWG was explained by a combination of the growth intensity during a first 'critical period' of the wood growing season, occurring close to the seasonal maximum, and the timing of the first summer growth halt. Both atmospheric and soil water stress exerted a strong control on the interannual variability of AWG at the study site, despite its mesic conditions, whilst not affecting carbon inputs. Carbon sink activity, not carbon inputs, determined the interannual variations in wood growth at the study site. Our results provide a functional understanding of the dependence of radial growth on precipitation observed in dendrological studies. © 2015 The Authors. New Phytologist © 2015 New Phytologist Trust.
Input Variability Facilitates Unguided Subcategory Learning in Adults
Eidsvåg, Sunniva Sørhus; Austad, Margit; Asbjørnsen, Arve E.
2015-01-01
Purpose This experiment investigated whether input variability would affect initial learning of noun gender subcategories in an unfamiliar, natural language (Russian), as it is known to assist learning of other grammatical forms. Method Forty adults (20 men, 20 women) were familiarized with examples of masculine and feminine Russian words. Half of the participants were familiarized with 32 different root words in a high-variability condition. The other half were familiarized with 16 different root words, each repeated twice for a total of 32 presentations in a high-repetition condition. Participants were tested on untrained members of the category to assess generalization. Familiarization and testing were completed 2 additional times. Results Only participants in the high-variability group showed evidence of learning after an initial period of familiarization. Participants in the high-repetition group were able to learn after additional input. Both groups benefited when words included 2 cues to gender compared with a single cue. Conclusions The results demonstrate that the degree of input variability can influence learners' ability to generalize a grammatical subcategory (noun gender) from a natural language. In addition, the presence of multiple cues to linguistic subcategory facilitated learning independent of variability condition. PMID:25680081
Input Variability Facilitates Unguided Subcategory Learning in Adults.
Eidsvåg, Sunniva Sørhus; Austad, Margit; Plante, Elena; Asbjørnsen, Arve E
2015-06-01
This experiment investigated whether input variability would affect initial learning of noun gender subcategories in an unfamiliar, natural language (Russian), as it is known to assist learning of other grammatical forms. Forty adults (20 men, 20 women) were familiarized with examples of masculine and feminine Russian words. Half of the participants were familiarized with 32 different root words in a high-variability condition. The other half were familiarized with 16 different root words, each repeated twice for a total of 32 presentations in a high-repetition condition. Participants were tested on untrained members of the category to assess generalization. Familiarization and testing were completed 2 additional times. Only participants in the high-variability group showed evidence of learning after an initial period of familiarization. Participants in the high-repetition group were able to learn after additional input. Both groups benefited when words included 2 cues to gender compared with a single cue. The results demonstrate that the degree of input variability can influence learners' ability to generalize a grammatical subcategory (noun gender) from a natural language. In addition, the presence of multiple cues to linguistic subcategory facilitated learning independent of variability condition.
Latin Hypercube Sampling (LHS) UNIX Library/Standalone
DOE Office of Scientific and Technical Information (OSTI.GOV)
2004-05-13
The LHS UNIX Library/Standalone software provides the capability to draw random samples from over 30 distribution types. It performs the sampling by a stratified sampling method called Latin Hypercube Sampling (LHS). Multiple distributions can be sampled simultaneously, with user-specified correlations amongst the input distributions; LHS UNIX Library/Standalone thus provides a way to generate multivariate samples. The LHS samples can be generated either via a callable library (e.g., from within the DAKOTA software framework) or as a standalone capability. LHS is a constrained Monte Carlo sampling scheme. In LHS, the range of each variable is divided into non-overlapping intervals on the basis of equal probability. A sample is selected at random with respect to the probability density in each interval. If multiple variables are sampled simultaneously, the values obtained for each are paired in a random manner with the n values of the other variables. In some cases, the pairing is restricted to obtain specified correlations amongst the input variables. Many simulation codes have input parameters that are uncertain and can be specified by a distribution. To perform uncertainty analysis and sensitivity analysis, random values are drawn from the input parameter distributions, and the simulation is run with these values to obtain output values. If this is done repeatedly, with many input samples drawn, one can build up a distribution of the output as well as examine correlations between input and output variables.
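Modern SciPy exposes the same stratified scheme directly, which makes the interval-division-plus-random-pairing idea easy to demonstrate. The distribution choices and bounds below are arbitrary examples, and correlation control via restricted pairing (as in the library) is not shown.

```python
from scipy.stats import norm, qmc

sampler = qmc.LatinHypercube(d=2, seed=0)
u = sampler.random(n=100)     # each column stratified into 100 equal-probability bins

# Map the uniform strata through each variable's target distribution
x0 = norm.ppf(u[:, 0], loc=10.0, scale=2.0)                        # a normal input
x1 = qmc.scale(u[:, 1:2], l_bounds=[0.0], u_bounds=[5.0]).ravel()  # a uniform input
print(x0[:3], x1[:3])
```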
Samad, Manar D; Ulloa, Alvaro; Wehner, Gregory J; Jing, Linyuan; Hartzel, Dustin; Good, Christopher W; Williams, Brent A; Haggerty, Christopher M; Fornwalt, Brandon K
2018-06-09
The goal of this study was to use machine learning to more accurately predict survival after echocardiography. Predicting patient outcomes (e.g., survival) following echocardiography is primarily based on ejection fraction (EF) and comorbidities. However, there may be significant predictive information within additional echocardiography-derived measurements combined with clinical electronic health record data. Mortality was studied in 171,510 unselected patients who underwent 331,317 echocardiograms in a large regional health system. We investigated the predictive performance of nonlinear machine learning models compared with that of linear logistic regression models using 3 different inputs: 1) clinical variables, including 90 cardiovascular-relevant International Classification of Diseases, Tenth Revision, codes, and age, sex, height, weight, heart rate, blood pressures, low-density lipoprotein, high-density lipoprotein, and smoking; 2) clinical variables plus physician-reported EF; and 3) clinical variables and EF, plus 57 additional echocardiographic measurements. Missing data were imputed using the multivariate imputation by chained equations (MICE) algorithm. We compared models against each other and against baseline clinical scoring systems using the mean area under the curve (AUC) over 10 cross-validation folds and across 10 survival durations (6 to 60 months). Machine learning models achieved significantly higher prediction accuracy (all AUC >0.82) than common clinical risk scores (AUC = 0.61 to 0.79), with the nonlinear random forest models outperforming logistic regression (p < 0.01). The random forest model including all echocardiographic measurements yielded the highest prediction accuracy (p < 0.01 across all models and survival durations). Only 10 variables were needed to achieve 96% of the maximum prediction accuracy, and 6 of these were derived from echocardiography. Tricuspid regurgitation velocity was more predictive of survival than left ventricular EF. In a subset of studies with complete data for the top 10 variables, MICE imputation yielded slightly reduced predictive accuracy (AUC difference of 0.003) compared with the original data. Machine learning can fully utilize large combinations of disparate input variables to predict survival after echocardiography with superior accuracy. Copyright © 2018 American College of Cardiology Foundation. Published by Elsevier Inc. All rights reserved.
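The nonlinear-versus-linear comparison at the heart of this study can be mimicked on synthetic data: below, a random forest and a logistic regression are scored by cross-validated AUC. The feature set, outcome mechanism, and hyperparameters are invented, so only the comparison pattern, not the numbers, carries over.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(5)
n = 2000
X = rng.normal(size=(n, 20))                                 # stand-in clinical + echo inputs
logit = 0.8 * X[:, 0] - 0.6 * X[:, 1] ** 2 + 0.5 * X[:, 2]   # a nonlinear risk surface
died = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))          # synthetic survival label

for model in (LogisticRegression(max_iter=1000),
              RandomForestClassifier(n_estimators=200, random_state=0)):
    auc = cross_val_score(model, X, died, cv=5, scoring='roc_auc').mean()
    print(f'{type(model).__name__}: AUC = {auc:.3f}')
```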
Origin of information-limiting noise correlations
Kanitscheider, Ingmar; Coen-Cagli, Ruben; Pouget, Alexandre
2015-01-01
The ability to discriminate between similar sensory stimuli relies on the amount of information encoded in sensory neuronal populations. Such information can be substantially reduced by correlated trial-to-trial variability. Noise correlations have been measured across a wide range of areas in the brain, but their origin is still far from clear. Here we show analytically and with simulations that optimal computation on inputs with limited information creates patterns of noise correlations that account for a broad range of experimental observations while at the same time causing information to saturate in large neural populations. With the example of a network of V1 neurons extracting orientation from a noisy image, we illustrate what is, to our knowledge, the first generative model of noise correlations that is consistent both with neurophysiology and with behavioral thresholds, without invoking suboptimal encoding or decoding or internal sources of variability such as stochastic network dynamics or cortical state fluctuations. We further show that when information is limited at the input, both suboptimal connectivity and internal fluctuations could similarly reduce the asymptotic information, but they have qualitatively different effects on correlations, leading to specific experimental predictions. Our study indicates that noise at the sensory periphery could have a major effect on cortical representations in widely studied discrimination tasks. It also provides an analytical framework to understand the functional relevance of different sources of experimentally measured correlations. PMID:26621747
Modeling road-cycling performance.
Olds, T S; Norton, K I; Lowe, E L; Olive, S; Reay, F; Ly, S
1995-04-01
This paper presents a complete set of equations for a "first principles" mathematical model of road-cycling performance, including corrections for the effect of winds, tire pressure and wheel radius, altitude, relative humidity, rotational kinetic energy, drafting, and changed drag. The relevant physiological, biophysical, and environmental variables were measured in 41 experienced cyclists completing a 26-km road time trial. The correlation between actual and predicted times was 0.89 (P ≤ 0.0001), with a mean difference of 0.74 min (1.73% of mean performance time) and a mean absolute difference of 1.65 min (3.87%). Multiple simulations were performed in which model inputs were randomly varied using a normal distribution about the measured values, with an SD equivalent to the estimated day-to-day variability or technical error of measurement in each of the inputs. This analysis yielded 95% confidence limits for the predicted times. The model suggests that the main physiological factors contributing to road-cycling performance are maximal O2 consumption, fractional utilization of maximal O2 consumption, mechanical efficiency, and projected frontal area. The model is then applied to some practical problems in road cycling: the effect of drafting, the advantage of using smaller front wheels, the effects of added mass, the importance of rotational kinetic energy, the effect of changes in drag due to changes in bicycle configuration, the normalization of performances under different conditions, and the limits of human performance.
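The core of such a first-principles model is a steady-state power balance between the rider's output and aerodynamic, rolling, and gravitational demands. The sketch below implements that balance only; the full model in the paper additionally corrects for altitude, humidity, rotational kinetic energy, and drafting, and every parameter value here is an illustrative assumption.

```python
def cycling_power(v, vw=0.0, m=82.0, CdA=0.35, Crr=0.004, grade=0.0,
                  rho=1.20, eff=0.95, g=9.81):
    """Approximate mechanical power (W) needed to hold ground speed v (m/s).

    vw: headwind (m/s); CdA: drag area (m^2); Crr: rolling-resistance
    coefficient; grade: rise/run; eff: drivetrain efficiency.
    """
    drag = 0.5 * rho * CdA * (v + vw) ** 2 * v   # aerodynamic drag power
    roll = Crr * m * g * v                       # rolling resistance power
    climb = m * g * grade * v                    # power against gravity
    return (drag + roll + climb) / eff

print(round(cycling_power(v=11.1), 1), 'W')      # ~40 km/h, flat road, still air
```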
The Chandra Source Catalog: Source Variability
NASA Astrophysics Data System (ADS)
Nowak, Michael; Rots, A. H.; McCollough, M. L.; Primini, F. A.; Glotfelty, K. J.; Bonaventura, N. R.; Chen, J. C.; Davis, J. E.; Doe, S. M.; Evans, J. D.; Fabbiano, G.; Galle, E.; Gibbs, D. G.; Grier, J. D.; Hain, R.; Hall, D. M.; Harbo, P. N.; He, X.; Houck, J. C.; Karovska, M.; Lauer, J.; McDowell, J. C.; Miller, J. B.; Mitschang, A. W.; Morgan, D. L.; Nichols, J. S.; Plummer, D. A.; Refsdal, B. L.; Siemiginowska, A. L.; Sundheim, B. A.; Tibbetts, M. S.; Van Stone, D. W.; Winkelman, S. L.; Zografou, P.
2009-01-01
The Chandra Source Catalog (CSC) contains fields of view that have been studied with individual, uninterrupted observations spanning integration times from 1 ksec to 160 ksec, a large number of which have received (multiple) repeat observations days to years later. The CSC thus offers an unprecedented look at the variability of the X-ray sky over a broad range of time scales, and across a wide diversity of variable X-ray sources: stars in the local galactic neighborhood, galactic and extragalactic X-ray binaries, Active Galactic Nuclei, etc. Here we describe the methods used to identify and quantify source variability within a single observation, and the methods used to assess the variability of a source when detected in multiple, individual observations. Three tests are used to detect source variability within a single observation: the Kolmogorov-Smirnov test and its variant, the Kuiper test, and a Bayesian approach originally suggested by Gregory and Loredo. The latter test not only provides an indicator of variability, but is also used to create a best estimate of the variable lightcurve shape. We assess the performance of these tests via simulation of statistically stationary, variable processes with arbitrary input power spectral densities (here we concentrate on results of red noise simulations) at a variety of mean count rates and fractional root mean square variabilities relevant to CSC sources. We also assess the false positive rate via simulations of constant sources whose sole source of fluctuation is Poisson noise. We compare these simulations to a preliminary assessment of the variability found in real CSC sources, and estimate the variability sensitivities of the CSC.
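Of the three within-observation tests named above, the Kolmogorov-Smirnov test is the simplest to demonstrate: for a constant source, photon arrival times should be uniform over the exposure. The simulated exposure length, counts, and brightening-source model below are arbitrary choices, and the Kuiper and Gregory-Loredo tests are not shown (SciPy does not ship a Kuiper test).

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(11)
T = 10_000.0   # exposure length in seconds

constant = rng.uniform(0, T, size=300)     # constant source: uniform arrival times
brightening = T * rng.power(2, size=300)   # count rate rising linearly with time

for label, times in (('constant', constant), ('brightening', brightening)):
    res = stats.kstest(times, stats.uniform(loc=0, scale=T).cdf)
    print(f'{label}: KS p-value = {res.pvalue:.3g}')   # small p => variable
```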
The Chandra Source Catalog: Source Variability
NASA Astrophysics Data System (ADS)
Nowak, Michael; Rots, A. H.; McCollough, M. L.; Primini, F. A.; Glotfelty, K. J.; Bonaventura, N. R.; Chen, J. C.; Davis, J. E.; Doe, S. M.; Evans, J. D.; Evans, I.; Fabbiano, G.; Galle, E. C.; Gibbs, D. G., II; Grier, J. D.; Hain, R.; Hall, D. M.; Harbo, P. N.; He, X.; Houck, J. C.; Karovska, M.; Lauer, J.; McDowell, J. C.; Miller, J. B.; Mitschang, A. W.; Morgan, D. L.; Nichols, J. S.; Plummer, D. A.; Refsdal, B. L.; Siemiginowska, A. L.; Sundheim, B. A.; Tibbetts, M. S.; van Stone, D. W.; Winkelman, S. L.; Zografou, P.
2009-09-01
The Chandra Source Catalog (CSC) contains fields of view that have been studied with individual, uninterrupted observations spanning integration times from 1 ksec to 160 ksec, a large number of which have received (multiple) repeat observations days to years later. The CSC thus offers an unprecedented look at the variability of the X-ray sky over a broad range of time scales, and across a wide diversity of variable X-ray sources: stars in the local galactic neighborhood, galactic and extragalactic X-ray binaries, Active Galactic Nuclei, etc. Here we describe the methods used to identify and quantify source variability within a single observation, and the methods used to assess the variability of a source when detected in multiple, individual observations. Three tests are used to detect source variability within a single observation: the Kolmogorov-Smirnov test and its variant, the Kuiper test, and a Bayesian approach originally suggested by Gregory and Loredo. The latter test not only provides an indicator of variability, but is also used to create a best estimate of the variable lightcurve shape. We assess the performance of these tests via simulation of statistically stationary, variable processes with arbitrary input power spectral densities (here we concentrate on results of red noise simulations) at a variety of mean count rates and fractional root mean square variabilities relevant to CSC sources. We also assess the false positive rate via simulations of constant sources whose sole source of fluctuation is Poisson noise. We compare these simulations to an assessment of the variability found in real CSC sources, and estimate the variability sensitivities of the CSC.
Speed control system for an access gate
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bzorgi, Fariborz M
2012-03-20
An access control apparatus for an access gate. The access gate typically has a rotator that is configured to rotate around a rotator axis at a first variable speed in a forward direction. The access control apparatus may include a transmission that typically has an input element that is operatively connected to the rotator. The input element is generally configured to rotate at an input speed that is proportional to the first variable speed. The transmission typically also has an output element that has an output speed that is higher than the input speed. The input element and the output element may rotate around a common transmission axis. A retardation mechanism may be employed. The retardation mechanism is typically configured to rotate around a retardation mechanism axis. Generally the retardation mechanism is operatively connected to the output element of the transmission and is configured to retard motion of the access gate in the forward direction when the first variable speed is above a control-limit speed. In many embodiments the transmission axis and the retardation mechanism axis are substantially co-axial. Some embodiments include a freewheel/catch mechanism that has an input connection that is operatively connected to the rotator. The input connection may be configured to engage an output connection when the rotator is rotated at the first variable speed in a forward direction and configured for substantially unrestricted rotation when the rotator is rotated in a reverse direction opposite the forward direction. The input element of the transmission is typically operatively connected to the output connection of the freewheel/catch mechanism.
Somatostatin-Expressing Inhibitory Interneurons in Cortical Circuits
Yavorska, Iryna; Wehr, Michael
2016-01-01
Cortical inhibitory neurons exhibit remarkable diversity in their morphology, connectivity, and synaptic properties. Here, we review the function of somatostatin-expressing (SOM) inhibitory interneurons, focusing largely on sensory cortex. SOM neurons also comprise a number of subpopulations that can be distinguished by their morphology, input and output connectivity, laminar location, firing properties, and expression of molecular markers. Several of these classes of SOM neurons show unique dynamics and characteristics, such as facilitating synapses, specific axonal projections, intralaminar input, and top-down modulation, which suggest possible computational roles. SOM cells can be differentially modulated by behavioral state depending on their class, sensory system, and behavioral paradigm. The functional effects of such modulation have been studied with optogenetic manipulation of SOM cells, which produces effects on learning and memory, task performance, and the integration of cortical activity. Different classes of SOM cells participate in distinct disinhibitory circuits with different inhibitory partners and in different cortical layers. Through these disinhibitory circuits, SOM cells help encode the behavioral relevance of sensory stimuli by regulating the activity of cortical neurons based on subcortical and intracortical modulatory input. Associative learning leads to long-term changes in the strength of connectivity of SOM cells with other neurons, often influencing the strength of inhibitory input they receive. Thus despite their heterogeneity and variability across cortical areas, current evidence shows that SOM neurons perform unique neural computations, forming not only distinct molecular but also functional subclasses of cortical inhibitory interneurons. PMID:27746722
Spatially Distributed Dendritic Resonance Selectively Filters Synaptic Input
Segev, Idan; Shamma, Shihab
2014-01-01
An important task performed by a neuron is the selection of relevant inputs from among the thousands of synapses impinging on the dendritic tree. Synaptic plasticity enables this by strengthening a subset of synapses that are, presumably, functionally relevant to the neuron. A different selection mechanism exploits the resonance of the dendritic membranes to preferentially filter synaptic inputs based on their temporal rates. A widely held view is that a neuron has one resonant frequency and thus can preferentially pass one rate. Here we demonstrate through mathematical analyses and numerical simulations that dendritic resonance is inevitably a spatially distributed property; the resonance frequency therefore varies along the dendrites, endowing neurons with a powerful spatiotemporal selection mechanism that is sensitive to both the dendritic location and the temporal structure of the incoming synaptic inputs. PMID:25144440
Aitkenhead, Matt J; Black, Helaina I J
2018-02-01
Using the International Centre for Research in Agroforestry-International Soil Reference and Information Centre (ICRAF-ISRIC) global soil spectroscopy database, models were developed to estimate a number of soil variables using different input data types. These input types included: (1) site data only; (2) visible-near-infrared (Vis-NIR) diffuse reflectance spectroscopy only; (3) combined site and Vis-NIR data; (4) red-green-blue (RGB) color data only; and (5) combined site and RGB color data. The models produced variable estimation accuracy, with RGB only generally worst and spectroscopy plus site best. However, we showed that for certain variables, estimation accuracy levels achieved with the "site plus RGB input data" were sufficiently good to provide useful estimates (r² > 0.7). These included major elements (Ca, Si, Al, Fe), organic carbon, and cation exchange capacity. Estimates for bulk density, carbon-to-nitrogen ratio (C/N), and P were moderately good, but K was not well estimated using this model type. For the "spectra plus site" model, many more variables were well estimated, including many that are important indicators of agricultural productivity and soil health. Sum of cations, electrical conductivity, Si, Ca, and Al oxides, and the C/N ratio were estimated using this approach with r² values > 0.9. This work provides a mechanism for identifying the cost-effectiveness of using different model input data, with associated costs, for estimating soil variables to required levels of accuracy.
NLEdit: A generic graphical user interface for Fortran programs
NASA Technical Reports Server (NTRS)
Curlett, Brian P.
1994-01-01
NLEdit is a generic graphical user interface for the preprocessing of Fortran namelist input files. The interface consists of a menu system, a message window, a help system, and data entry forms. A form is generated for each namelist. The form has an input field for each namelist variable along with a one-line description of that variable. Detailed help information, default values, and minimum and maximum allowable values can all be displayed via menu picks. Inputs are processed through a scientific calculator program that allows complex equations to be used instead of simple numeric inputs. A custom user interface is generated simply by entering information about the namelist input variables into an ASCII file. There is no need to learn a new graphics system or programming language. NLEdit can be used as a stand-alone program or as part of a larger graphical user interface. Although NLEdit is intended for files using namelist format, it can be easily modified to handle other file formats.
Computing Shapes Of Cascade Diffuser Blades
NASA Technical Reports Server (NTRS)
Tran, Ken; Prueger, George H.
1993-01-01
Computer program generates sizes and shapes of cascade-type blades for use in axial or radial turbomachine diffusers. Generates shapes of blades rapidly, incorporating extensive cascade data to determine optimum incidence and deviation angles for blade designs based on the 65-series database of the National Advisory Committee for Aeronautics (NACA). Allows great variability in blade profile through input variables. Also provides for design of three-dimensional blades by allowing variable blade stacking. Enables designer to obtain computed blade-geometry data in various forms: as input for blade-loading analysis; as input for quasi-three-dimensional analysis of flow; or as points for transfer to computer-aided design.
Neural Network Machine Learning and Dimension Reduction for Data Visualization
NASA Technical Reports Server (NTRS)
Liles, Charles A.
2014-01-01
Neural network machine learning in computer science is a continuously developing field of study. Although neural network models have been developed that can accurately predict a numeric value or nominal classification, a general-purpose method for constructing neural network architecture has yet to be developed. Computer scientists are often forced to rely on a trial-and-error process of developing and improving accurate neural network models. In many cases, models are constructed from a large number of input parameters. Determining which input parameters have the greatest impact on the model's prediction is often difficult, especially when the number of input variables is very high. This challenge is often labeled the "curse of dimensionality" in scientific fields. However, techniques exist for reducing the dimensionality of problems to just two dimensions. Once a problem's dimensions have been mapped to two dimensions, it can be easily plotted and understood by humans. The ability to visualize a multi-dimensional dataset can provide a means of identifying which input variables have the highest effect on determining a nominal or numeric output. Identifying these variables can provide a better means of training neural network models; models can be more easily and quickly trained using only input variables that appear to affect the outcome variable. The purpose of this project is to explore varying means of training neural networks and to utilize dimensionality reduction for visualizing and understanding complex datasets.
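Two standard ways to map a high-dimensional dataset to two dimensions for plotting are linear PCA and nonlinear t-SNE; the sketch below applies both to scikit-learn's built-in 64-dimensional digits data. The dataset choice and plot styling are illustrative, not tied to this project.

```python
import matplotlib.pyplot as plt
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.manifold import TSNE

X, y = load_digits(return_X_y=True)   # 64 input dimensions per sample

X_pca = PCA(n_components=2).fit_transform(X)
X_tsne = TSNE(n_components=2, random_state=0).fit_transform(X)

fig, axes = plt.subplots(1, 2, figsize=(10, 4))
for ax, Z, title in ((axes[0], X_pca, 'PCA'), (axes[1], X_tsne, 't-SNE')):
    ax.scatter(Z[:, 0], Z[:, 1], c=y, s=5, cmap='tab10')  # color by output class
    ax.set_title(title)
plt.show()
```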
Development of economic and environmental metrics for forest-based biomass harvesting
NASA Astrophysics Data System (ADS)
Zhang, F. L.; Wang, J. J.; Liu, S. H.; Zhang, S. M.
2016-08-01
An assessment of the economic, energy consumption, and greenhouse gas (GHG) emission dimensions of the forest-based biomass harvest stage in the state of Michigan, U.S., was performed by gathering data from the literature, databases, and other relevant sources. The assessment differentiates harvesting systems (cut-to-length harvesting, whole-tree harvesting, and motor-manual harvesting), harvest types (30%, 70%, and 100% cut), and forest types (hardwoods, softwoods, mixed hardwood/softwood, and softwood plantations) that characterize Michigan's logging industry. Machine rate methods were employed to determine unit harvesting cost. A life cycle inventory was applied to calculate the energy demand and GHG emissions of different harvesting scenarios, considering energy and material inputs (diesel, machinery, etc.) and outputs (emissions) for each process (cutting, forwarding/skidding, etc.). A sensitivity analysis was performed for selected input variables of the harvesting operation in order to explore their relative importance. The results indicated that productivity had the largest impact on harvesting cost, followed by machinery purchase price, yearly scheduled hours, and expected utilization. Productivity and fuel use, as well as fuel factors, were the most influential inputs for the environmental impacts of harvesting operations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
LaFarge, R.A.
1990-05-01
MCPRAM (Monte Carlo PReprocessor for AMEER), a computer program that uses Monte Carlo techniques to create an input file for the AMEER trajectory code, has been developed for the Sandia National Laboratories VAX and Cray computers. Users can select the number of trajectories to compute, which AMEER variables to investigate, and the type of probability distribution for each variable. Any legal AMEER input variable can be investigated anywhere in the input run stream with either a normal, uniform, or Rayleigh distribution. Users also have the option to use covariance matrices for the investigation of certain correlated variables such as booster pre-reentry errors and wind, axial force, and atmospheric models. In conjunction with MCPRAM, AMEER was modified to include the variables introduced by the covariance matrices and to include provisions for six types of fuze models. The new fuze models and the new AMEER variables are described in this report.
Input Variability Facilitates Unguided Subcategory Learning in Adults
ERIC Educational Resources Information Center
Eidsvåg, Sunniva Sørhus; Austad, Margit; Plante, Elena; Asbjørnsen, Arve E.
2015-01-01
Purpose: This experiment investigated whether input variability would affect initial learning of noun gender subcategories in an unfamiliar, natural language (Russian), as it is known to assist learning of other grammatical forms. Method: Forty adults (20 men, 20 women) were familiarized with examples of masculine and feminine Russian words. Half…
NASA Astrophysics Data System (ADS)
Schlögel, R.; Marchesini, I.; Alvioli, M.; Reichenbach, P.; Rossi, M.; Malet, J.-P.
2018-01-01
We perform landslide susceptibility zonation with slope units using three digital elevation models (DEMs) of varying spatial resolution covering the Ubaye Valley (southern French Alps). In doing so, we applied a recently developed algorithm that automates slope unit delineation, given a number of parameters, in order to simultaneously optimize the partitioning of the terrain and the performance of a logistic regression susceptibility model. The method allowed us to obtain optimal slope units for each available DEM spatial resolution. For each resolution, we studied the susceptibility model performance by analyzing in detail the relevance of the conditioning variables. The analysis is based on landslide morphology data, considering either the whole landslide or only the source area outline as input. The procedure allowed us to select the most useful information, in terms of DEM spatial resolution, thematic variables, and landslide inventory, to obtain the most reliable slope unit-based landslide susceptibility assessment.
Towards an integrated set of surface meterological observations for climate science and applications
NASA Astrophysics Data System (ADS)
Dunn, Robert; Thorne, Peter
2017-04-01
We cannot predict what is not observed, and we cannot analyse what is not archived. To meet current scientific and societal demands, as well as future requirements for climate services, it is vital that the management and curation of land-based meteorological data holdings be improved. A comprehensive global set of data holdings, of known provenance, integrated across both climate variables and timescales, is required to meet the wide range of user needs. Presently, the land-based holdings are highly fractured into global, regional, and national holdings for different variables and timescales, from a variety of sources, and in a mixture of formats. We present a high-level overview, based on broad community input, of the steps required to bring about this integration and of progress towards such a database. Any long-term, international programme creating such an integrated database will transform our collective ability to provide societally relevant research, analysis, and predictions across the globe.
Effects of input uncertainty on cross-scale crop modeling
NASA Astrophysics Data System (ADS)
Waha, Katharina; Huth, Neil; Carberry, Peter
2014-05-01
The quality of data on climate, soils and agricultural management in the tropics is in general low, or data are scarce, leading to uncertainty in process-based modeling of cropping systems. Process-based crop models are common tools for simulating crop yields and crop production in climate change impact studies, studies on mitigation and adaptation options, or food security studies. Crop modelers are concerned about input data accuracy as this, together with an adequate representation of plant physiology processes and choice of model parameters, are the key factors for a reliable simulation. For example, assuming an error in measurements of air temperature, radiation and precipitation of ± 0.2°C, ± 2 % and ± 3 % respectively, Fodor & Kovacs (2005) estimate that this translates into an uncertainty of 5-7 % in yield and biomass simulations. In our study we seek to answer the following questions: (1) are there important uncertainties in the spatial variability of simulated crop yields on the grid-cell level displayed on maps, (2) are there important uncertainties in the temporal variability of simulated crop yields on the aggregated, national level displayed in time-series, and (3) how does the accuracy of different soil, climate and management information influence the simulated crop yields in two crop models designed for use at different spatial scales? The study will help to determine whether more detailed information improves the simulations and to advise model users on the uncertainty related to input data. We analyse the performance of the point-scale crop model APSIM (Keating et al., 2003) and the global scale crop model LPJmL (Bondeau et al., 2007) with different climate information (monthly and daily) and soil conditions (global soil map and African soil map) under different agricultural management (uniform and variable sowing dates) for the low-input maize-growing areas in Burkina Faso/West Africa. We test the models' response to different levels of input data, from very little to very detailed information, and compare the models' abilities to represent the spatial and temporal variability in crop yields. We display the uncertainty in crop yield simulations from different input data and crop models in Taylor diagrams, which are a graphical summary of the similarity between simulations and observations (Taylor, 2001). The observed spatial variability can be represented well by both models (R=0.6-0.8), but APSIM predicts higher spatial variability than LPJmL due to its sensitivity to soil parameters. Simulations with the same crop model, climate and sowing dates have similar statistics and therefore similar skill in reproducing the observed spatial variability. Soil data are less important for the skill of a crop model to reproduce the observed spatial variability. However, the uncertainty in simulated spatial variability from the two crop models is larger than that from input data settings, and APSIM is more sensitive to input data than LPJmL. Even with a detailed, point-scale crop model and detailed input data it is difficult to capture the complexity and diversity in maize cropping systems.
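Since the comparison above is summarized in Taylor diagrams, it may help to show the three statistics such a diagram encodes: pattern correlation, normalized standard deviation, and centered RMS difference. The snippet computes them for invented simulated/observed yield series, which are purely illustrative stand-ins for the APSIM/LPJmL outputs.

```python
import numpy as np

def taylor_stats(sim, obs):
    """Return the three quantities summarized in a Taylor diagram."""
    sim, obs = np.asarray(sim), np.asarray(obs)
    r = np.corrcoef(sim, obs)[0, 1]                 # pattern correlation
    sigma_ratio = sim.std() / obs.std()             # normalized standard deviation
    crmsd = np.sqrt(np.mean(((sim - sim.mean()) - (obs - obs.mean())) ** 2)) / obs.std()
    return r, sigma_ratio, crmsd

rng = np.random.default_rng(2)
obs = rng.normal(1.0, 0.3, size=30)                 # stand-in observed yields (t/ha)
sim = 0.8 * obs + rng.normal(0.2, 0.15, size=30)    # stand-in simulated yields
print('r = %.2f, sigma ratio = %.2f, cRMSD = %.2f' % taylor_stats(sim, obs))
```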
Embracing chaos and complexity: a quantum change for public health.
Resnicow, Kenneth; Page, Scott E
2008-08-01
Public health research and practice have been guided by a cognitive, rational paradigm where inputs produce linear, predictable changes in outputs. However, the conceptual and statistical assumptions underlying this paradigm may be flawed. In particular, this perspective does not adequately account for nonlinear and quantum influences on human behavior. We propose that health behavior change is better understood through the lens of chaos theory and complex adaptive systems. Key relevant principles include that behavior change (1) is often a quantum event; (2) can resemble a chaotic process that is sensitive to initial conditions, highly variable, and difficult to predict; and (3) occurs within a complex adaptive system with multiple components, where results are often greater than the sum of their parts.
Chaves, Luciano Eustáquio; Nascimento, Luiz Fernando Costa; Rizol, Paloma Maria Silva Rocha
2017-01-01
OBJECTIVE: To predict the number of hospitalizations for asthma and pneumonia associated with exposure to air pollutants in the city of São José dos Campos, São Paulo State. METHODS: This is a computational model using fuzzy logic based on Mamdani's inference method. For the fuzzification of the input variables of particulate matter, ozone, sulfur dioxide and apparent temperature, we considered two membership functions for each variable, with the linguistic labels good and bad. For the output variable, the number of hospitalizations for asthma and pneumonia, we considered five membership functions: very low, low, medium, high and very high. DATASUS was our source for the number of hospitalizations in the year 2007, and the result provided by the model was correlated with the actual hospitalization data at lags from zero to two days. The accuracy of the model was estimated by the ROC curve for each pollutant at those lags. RESULTS: In 2007, 1,710 hospitalizations for pneumonia and asthma were recorded in São José dos Campos, State of São Paulo, with a daily average of 4.9 hospitalizations (SD = 2.9). The model output data showed a positive and significant correlation (r = 0.38) with the actual data; the accuracies evaluated for the model were higher for sulfur dioxide at lags 0 and 2 and for particulate matter at lag 1. CONCLUSIONS: Fuzzy modeling proved accurate for relating pollutant exposure to hospitalization for pneumonia and asthma.
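Mamdani inference fuzzifies crisp inputs with membership functions, combines rules by min/max composition, and defuzzifies by centroid. A toy single-input version of that pipeline in plain numpy, with invented membership cutoffs and output range (the study used four inputs and five output classes):

```python
import numpy as np

def trimf(x, a, b, c):
    """Triangular membership function with feet at a, c and peak at b."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

# Output universe: daily hospitalizations (illustrative range)
y = np.linspace(0, 15, 301)
out_low = trimf(y, 0, 2, 5)
out_high = trimf(y, 4, 9, 15)

def mamdani(pm10):
    """One-input Mamdani step: fuzzify, apply rules (min), aggregate (max), defuzzify."""
    # Fuzzification with invented cutoffs: air quality "good" vs "bad"
    bad = np.clip((pm10 - 20.0) / 30.0, 0.0, 1.0)
    good = 1.0 - bad
    # Rules: IF good THEN hospitalizations low; IF bad THEN hospitalizations high
    agg = np.maximum(np.minimum(good, out_low), np.minimum(bad, out_high))
    return np.sum(y * agg) / np.sum(agg)     # centroid defuzzification

print(mamdani(30.0))   # predicted daily hospitalizations for PM10 = 30 ug/m3
```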
Missing pulse detector for a variable frequency source
Ingram, Charles B.; Lawhorn, John H.
1979-01-01
A missing pulse detector is provided which has the capability of monitoring a varying frequency pulse source to detect the loss of a single pulse or total loss of signal from the source. A frequency-to-current converter is used to program the output pulse width of a variable period retriggerable one-shot to maintain a pulse width slightly longer than one-half of the currently monitored pulse period. The retriggerable one-shot is triggered at twice the input pulse rate by employing a frequency doubler circuit connected between the one-shot input and the variable frequency source being monitored. The one-shot remains in the triggered or unstable state under normal conditions even though the source period is varying. A loss of an input pulse or single period of a fluctuating signal input will cause the one-shot to revert to its stable state, changing the output signal level to indicate a missing pulse or signal.
Input and language development in bilingually developing children.
Hoff, Erika; Core, Cynthia
2013-11-01
Language skills in young bilingual children are highly varied as a result of the variability in their language experiences, making it difficult for speech-language pathologists to differentiate language disorder from language difference in bilingual children. Understanding the sources of variability in bilingual contexts and the resulting variability in children's skills will help improve language assessment practices by speech-language pathologists. In this article, we review literature on bilingual first language development for children under 5 years of age. We describe the rate of development in single and total language growth, we describe effects of quantity of input and quality of input on growth, and we describe effects of family composition on language input and language growth in bilingual children. We provide recommendations for language assessment of young bilingual children and consider implications for optimizing children's dual language development.
NASA Astrophysics Data System (ADS)
Kondapalli, S. P.
2017-12-01
In the present work, pulsed current microplasma arc welding is carried out on AISI 321 austenitic stainless steel of 0.3 mm thickness. Peak current, base current, pulse rate and pulse width are chosen as the input variables, whereas grain size and hardness are considered as the output responses. A response surface method based on a Box-Behnken design is adopted, and 27 experiments in total are performed. Empirical relations between the input variables and the output responses are developed using statistical software, with analysis of variance (ANOVA) at the 95% confidence level to check adequacy. The main and interaction effects of the input variables on the output responses are also studied.
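For four factors, a Box-Behnken design gives 24 edge runs plus center replicates (27 runs with three center points), which are then fitted with a full quadratic response surface. A sketch of both steps, with random placeholder hardness values standing in for the measured responses:

```python
import itertools
import numpy as np

def box_behnken(k, n_center=3):
    """Box-Behnken design for k factors in coded units (-1, 0, +1)."""
    runs = []
    for i, j in itertools.combinations(range(k), 2):
        for a, b in itertools.product((-1, 1), repeat=2):
            row = [0] * k
            row[i], row[j] = a, b
            runs.append(row)
    runs += [[0] * k] * n_center
    return np.array(runs, dtype=float)

# 27 coded runs for peak current, base current, pulse rate, pulse width
X = box_behnken(4)
y = np.random.default_rng(0).normal(170, 5, len(X))   # placeholder hardness data

# Full quadratic response surface: intercept, linear, squared, two-factor interactions
cols = [np.ones(len(X))]
cols += [X[:, i] for i in range(4)]
cols += [X[:, i] ** 2 for i in range(4)]
cols += [X[:, i] * X[:, j] for i, j in itertools.combinations(range(4), 2)]
A = np.column_stack(cols)
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
print(coef)   # empirical relation between coded inputs and the response
```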
2017-01-01
In this study, we present a theoretical framework combining experimental characterizations and analytical calculations to capture the firing rate input-output properties of single neurons in the fluctuation-driven regime. Our framework consists of a two-step procedure that treats independently how the dendritic input translates into somatic fluctuation variables, and how the latter determine action potential firing. We use this framework to investigate the functional impact of the heterogeneity in firing responses found experimentally in young mice layer V pyramidal cells. We first design and calibrate in vitro a simplified morphological model of layer V pyramidal neurons with a dendritic tree following Rall's branching rule. Then, we propose an analytical derivation for the membrane potential fluctuations at the soma as a function of the properties of the synaptic input in dendrites. This mathematical description allows us to easily emulate various forms of synaptic input: balanced, unbalanced, synchronized, purely proximal or purely distal synaptic activity. We find that those different forms of dendritic input activity have varied impacts on the somatic membrane potential fluctuation properties, raising the possibility that individual neurons will differentially couple to specific forms of activity as a result of their different firing responses. We indeed found such heterogeneous coupling between synaptic input and firing response for all types of presynaptic activity. This heterogeneity can be explained by different levels of cellular excitability in the case of balanced, unbalanced, synchronized and purely distal activity. A notable exception appears for proximal dendritic inputs: increasing the input level can either promote firing in some cells or suppress it in others, regardless of their individual excitability. This behavior can be explained by different sensitivities to the speed of the fluctuations, which was previously associated with different levels of sodium channel inactivation and density. Because local network connectivity preferentially targets proximal dendrites, our results suggest that this aspect of biophysical heterogeneity might be relevant to neocortical processing by controlling how individual neurons couple to local network activity.
Ventricular repolarization variability for hypoglycemia detection.
Ling, Steve; Nguyen, H T
2011-01-01
Hypoglycemia is the most acute and common complication of Type 1 diabetes and is a limiting factor in the glycemic management of diabetes. In this paper, two main contributions are presented: firstly, ventricular repolarization variabilities are introduced for hypoglycemia detection; secondly, a swarm-based support vector machine (SVM) algorithm with the repolarization variabilities as inputs is developed to detect hypoglycemia. By using the algorithm and including several repolarization variabilities as inputs, the best hypoglycemia detection performance is found, with sensitivity and specificity of 82.14% and 60.19%, respectively.
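As a rough illustration of the classification step, here is a scikit-learn SVM on synthetic stand-ins for the repolarization-variability features, with the paper's swarm-based parameter optimization replaced by a plain grid search:

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(1)
# Columns stand in for repolarization-variability features (synthetic placeholders)
X = rng.normal(size=(200, 4))
y = rng.integers(0, 2, 200)              # 1 = hypoglycemic episode, 0 = normal

svm = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
grid = GridSearchCV(
    svm,
    {"svc__C": [0.1, 1.0, 10.0], "svc__gamma": ["scale", 0.1, 1.0]},
    cv=5,
)
grid.fit(X, y)
print(grid.best_params_, grid.best_score_)
```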
The Role of Learner and Input Variables in Learning Inflectional Morphology
ERIC Educational Resources Information Center
Brooks, Patricia J.; Kempe, Vera; Sionov, Ariel
2006-01-01
To examine effects of input and learner characteristics on morphology acquisition, 60 adult English speakers learned to inflect masculine and feminine Russian nouns in nominative, dative, and genitive cases. By varying training vocabulary size (i.e., type variability), holding constant the number of learning trials, we tested whether learners…
Wideband low-noise variable-gain BiCMOS transimpedance amplifier
NASA Astrophysics Data System (ADS)
Meyer, Robert G.; Mack, William D.
1994-06-01
A new monolithic variable-gain transimpedance amplifier is described. The circuit is realized in BiCMOS technology and has a measured gain of 98 kΩ, a bandwidth of 128 MHz, an input noise current spectral density of 1.17 pA/√Hz, and an input signal-current handling capability of 3 mA.
Integrated controls design optimization
Lou, Xinsheng; Neuschaefer, Carl H.
2015-09-01
A control system (207) for optimizing a chemical looping process of a power plant includes an optimizer (420), an income algorithm (230), a cost algorithm (225), and chemical looping process models. The process models are used to predict the process outputs from the process input variables. Some of the process input and output variables are related to the income of the plant, while others are related to the cost of the plant operations. The income algorithm (230) provides an income input to the optimizer (420) based on a plurality of input parameters (215) of the power plant. The cost algorithm (225) provides a cost input to the optimizer (420) based on a plurality of output parameters (220) of the power plant. The optimizer (420) determines an optimized operating parameter solution based on at least one of the income input and the cost input, and supplies the optimized operating parameter solution to the power plant.
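The described pattern is a model-based optimizer that trades predicted income against predicted operating cost. A hedged sketch of that loop using scipy; the process model, prices, and parameter names are invented stand-ins, not the patented algorithms:

```python
from scipy.optimize import minimize

def process_model(u):
    """Invented chemical looping process model mapping operating parameters
    (oxygen carrier flow, air flow) to plant outputs."""
    carrier_flow, air_flow = u
    power_out = 50 * carrier_flow ** 0.8 + 10 * air_flow ** 0.5
    fuel_cost = 12 * carrier_flow + 3 * air_flow
    return power_out, fuel_cost

def income(power_out, price=40.0):        # income algorithm: revenue from output
    return price * power_out

def cost(fuel_cost, fixed=100.0):         # cost algorithm: operating plus fixed cost
    return fuel_cost + fixed

def objective(u):                          # optimizer maximizes income minus cost
    power_out, fuel_cost = process_model(u)
    return -(income(power_out) - cost(fuel_cost))

res = minimize(objective, x0=[1.0, 1.0], bounds=[(0.1, 10.0), (0.1, 10.0)])
print(res.x)   # optimized operating parameter solution supplied to the plant
```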
Oxlade, Olivia; Pinto, Marcia; Trajman, Anete; Menzies, Dick
2013-01-01
Introduction: Cost-effectiveness analyses (CEA) can provide useful information on how to invest limited funds; however, they are less useful if different analyses of the same intervention provide unclear or contradictory results. The objective of our study was to conduct a systematic review of methodologic aspects of CEA that evaluate Interferon Gamma Release Assays (IGRA) for the detection of Latent Tuberculosis Infection (LTBI), in order to understand how differences affect study results. Methods: A systematic review of studies was conducted with particular focus on study quality and the variability in inputs used in models used to assess cost-effectiveness. A common decision analysis model of the IGRA versus Tuberculin Skin Test (TST) screening strategy was developed and used to quantify the impact on predicted results of the observed differences in model inputs taken from the studies identified. Results: Thirteen studies were ultimately included in the review. Several specific methodologic issues were identified across studies, including how study inputs were selected, inconsistencies in the costing approach, the utility of the QALY (Quality Adjusted Life Year) as the effectiveness outcome, and how authors chose to present and interpret study results. When the IGRA and TST test strategies were compared using our common decision analysis model, predicted effectiveness largely overlapped. Implications: Many methodologic issues that contribute to inconsistent results and reduced study quality were identified in studies that assessed the cost-effectiveness of the IGRA test. More specific and relevant guidelines are needed in order to help authors standardize modelling approaches, inputs, assumptions, and how results are presented and interpreted.
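Decision models of this kind ultimately reduce to incremental cost-effectiveness ratios (ICERs). A schematic calculation with placeholder costs and QALYs (not values from the reviewed studies):

```python
def icer(cost_new, effect_new, cost_old, effect_old):
    """Incremental cost per unit of effectiveness gained (e.g., per QALY)."""
    return (cost_new - cost_old) / (effect_new - effect_old)

# Placeholder inputs for an IGRA vs. TST screening comparison
cost_igra, qaly_igra = 480.0, 19.52
cost_tst, qaly_tst = 350.0, 19.50
print(icer(cost_igra, qaly_igra, cost_tst, qaly_tst))   # $ per QALY gained
```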
NASA Astrophysics Data System (ADS)
Thomas Steven Savage, James; Pianosi, Francesca; Bates, Paul; Freer, Jim; Wagener, Thorsten
2016-11-01
Where high-resolution topographic data are available, modelers are faced with the decision of whether it is better to spend computational resource on resolving topography at finer resolutions or on running more simulations to account for various uncertain input factors (e.g., model parameters). In this paper we apply global sensitivity analysis to explore how influential the choice of spatial resolution is when compared to uncertainties in the Manning's friction coefficient parameters, the inflow hydrograph, and those stemming from the coarsening of topographic data used to produce Digital Elevation Models (DEMs). We apply the hydraulic model LISFLOOD-FP to produce several temporally and spatially variable model outputs that represent different aspects of flood inundation processes, including flood extent, water depth, and time of inundation. We find that the most influential input factor for flood extent predictions changes during the flood event, starting with the inflow hydrograph during the rising limb before switching to the channel friction parameter during peak flood inundation, and finally to the floodplain friction parameter during the drying phase of the flood event. Spatial resolution and uncertainty introduced by resampling topographic data to coarser resolutions are much more important for water depth predictions, which are also sensitive to different input factors spatially and temporally. Our findings indicate that the sensitivity of LISFLOOD-FP predictions is more complex than previously thought. Consequently, the input factors that modelers should prioritize will differ depending on the model output assessed, and the location and time of when and where this output is most relevant.
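A minimal global sensitivity analysis in the spirit described, using the SALib package with a toy function in place of a LISFLOOD-FP run; the factor names and bounds are illustrative assumptions:

```python
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

problem = {
    "num_vars": 3,
    "names": ["channel_n", "floodplain_n", "inflow_scale"],  # Manning's n, hydrograph scaling
    "bounds": [[0.02, 0.06], [0.03, 0.10], [0.8, 1.2]],
}

X = saltelli.sample(problem, 1024)          # Saltelli sampling for Sobol indices

def toy_flood_model(x):                      # stand-in for a hydraulic model run
    ch_n, fp_n, q = x
    return q / (ch_n + 0.5 * fp_n)           # e.g., a flood-extent summary metric

Y = np.apply_along_axis(toy_flood_model, 1, X)
Si = sobol.analyze(problem, Y)
print(dict(zip(problem["names"], Si["S1"])))  # first-order sensitivity indices
```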
NASA Technical Reports Server (NTRS)
Crespo, Luis G.; Bushnell, Dennis M. (Technical Monitor)
2002-01-01
This paper presents a study on the optimization of systems with structured uncertainties, whose inputs and outputs can be exhaustively described in the probabilistic sense. By propagating the uncertainty from the input to the output in the space of the probability density functions and the moments, optimization problems that pursue performance-, robustness- and reliability-based designs are studied. By specifying the desired outputs first in terms of desired probability density functions and then in terms of meaningful probabilistic indices, we establish a computationally viable framework for solving practical optimization problems. Applications to static optimization and stability control are used to illustrate the relevance of incorporating uncertainty in the early stages of the design. Several examples that admit a full probabilistic description of the output in terms of the design variables and the uncertain inputs are used to elucidate the main features of the generic problem and its solution. Extensions to problems that do not admit closed-form solutions are also evaluated. Concrete evidence of the importance of using a consistent probabilistic formulation of the optimization problem and a meaningful probabilistic description of its solution is provided in the examples. In the stability control problem, the analysis shows that standard deterministic approaches lead to designs with a high probability of running into instability. The implementation of such designs can indeed have catastrophic consequences.
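When no closed-form propagation is available, the same idea can be approximated by Monte Carlo: sample the input PDF, push the samples through the system, and read off output moments and probabilistic indices. A minimal sketch with an invented input-output map:

```python
import numpy as np

rng = np.random.default_rng(0)

def system(x, d):
    """Invented input-output map: uncertain input x, design variable d."""
    return d * x + 0.1 * x ** 2

d = 1.5                                # candidate design
x = rng.normal(1.0, 0.2, 100_000)      # input described by its PDF
y = system(x, d)                       # propagate the uncertainty to the output

print(y.mean(), y.std())               # output moments
print(np.mean(y > 2.0))                # probabilistic index: P(requirement y <= 2 violated)
```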
Multiscale contact mechanics model for RF-MEMS switches with quantified uncertainties
NASA Astrophysics Data System (ADS)
Kim, Hojin; Huda Shaik, Nurul; Xu, Xin; Raman, Arvind; Strachan, Alejandro
2013-12-01
We introduce a multiscale model for contact mechanics between rough surfaces and apply it to characterize the force-displacement relationship for a metal-dielectric contact relevant for radio frequency micro-electromechanical systems (MEMS) switches. We propose a mesoscale model to describe the history-dependent force-displacement relationships in terms of the surface roughness, the long-range attractive interaction between the two surfaces, and the repulsive interaction between contacting asperities (including elastic and plastic deformation). The inputs to this model are the experimentally determined surface topography and the Hamaker constant as well as the mechanical response of individual asperities obtained from density functional theory calculations and large-scale molecular dynamics simulations. The model captures non-trivial processes including the hysteresis during loading and unloading due to plastic deformation, yet it is computationally efficient enough to enable extensive uncertainty quantification and sensitivity analysis. We quantify how uncertainties and variability in the input parameters, both experimental and theoretical, affect the force-displacement curves during approach and retraction. In addition, a sensitivity analysis quantifies the relative importance of the various input quantities for the prediction of force-displacement during contact closing and opening. The resulting force-displacement curves with quantified uncertainties can be directly used in device-level simulations of micro-switches and enable the incorporation of atomic and mesoscale phenomena in predictive device-scale simulations.
NASA Astrophysics Data System (ADS)
Schroth, A. W.; Crusius, J.; Kroeger, K. D.; Hoyer, I. R.; Osburn, C. L.
2010-12-01
Iron (Fe) is a micronutrient that is thought to limit phytoplankton productivity in offshore waters of the Gulf of Alaska (GoA). However, it has been proposed that in coastal regions where offshore, Fe-limited, nitrate-rich waters mix with relatively Fe-rich river plumes, productive ecosystems and fisheries result. Indeed, an observed northward increase in phytoplankton biomass along the Pacific coast of North America has been attributed to higher input of riverine Fe to coastal waters, suggesting that many of the coastal ecosystems of the North Pacific rely heavily on this input of Fe as a nutrient source. Based on our studies of the Copper River (the largest point source of freshwater to the GoA) and its tributaries, it is clear that riverine Fe delivered to the GoA is primarily derived from fine glacial flour generated by glacial weathering, which imparts a unique partitioning of Fe species and Fe size fractionation in coastal river plumes. Furthermore, the distribution of Fe species and size fractionation exhibits significant seasonal and spatial variability based on the source of iron within the watershed, which varies from glacial mechanical weathering of bedrock to internal chemical processing in portions of watersheds with forest and wetland land covers. These findings are relevant to our understanding of the GoA biogeochemical system as it exists today and can help to predict how the system may evolve as glaciers within the GoA watershed continue to recede.
An inverse approach to perturb historical rainfall data for scenario-neutral climate impact studies
NASA Astrophysics Data System (ADS)
Guo, Danlu; Westra, Seth; Maier, Holger R.
2018-01-01
Scenario-neutral approaches are being used increasingly for climate impact assessments, as they allow water resource system performance to be evaluated independently of climate change projections. An important element of these approaches is the generation of perturbed series of hydrometeorological variables that form the inputs to hydrologic and water resource assessment models, with most scenario-neutral studies to date considering only shifts in the average and a limited number of other statistics of each climate variable. In this study, a stochastic generation approach is used to perturb not only the average of the relevant hydrometeorological variables, but also attributes such as the intermittency and extremes. An optimization-based inverse approach is developed to obtain hydrometeorological time series with uniform coverage across the possible ranges of rainfall attributes (referred to as the 'exposure space'). The approach is demonstrated on a widely used rainfall generator, WGEN, for a case study at Adelaide, Australia, and is shown to be capable of producing evenly distributed samples over the exposure space. The inverse approach expands the applicability of the scenario-neutral approach in evaluating a water resource system's sensitivity to a wider range of plausible climate change scenarios.
Partial Granger causality--eliminating exogenous inputs and latent variables.
Guo, Shuixia; Seth, Anil K; Kendrick, Keith M; Zhou, Cong; Feng, Jianfeng
2008-07-15
Attempts to identify causal interactions in multivariable biological time series (e.g., gene data, protein data, physiological data) can be undermined by the confounding influence of environmental (exogenous) inputs. Compounding this problem, we are commonly only able to record a subset of all related variables in a system. These recorded variables are likely to be influenced by unrecorded (latent) variables. To address this problem, we introduce a novel variant of a widely used statistical measure of causality--Granger causality--that is inspired by the definition of partial correlation. Our 'partial Granger causality' measure is extensively tested with toy models, both linear and nonlinear, and is applied to experimental data: in vivo multielectrode array (MEA) local field potentials (LFPs) recorded from the inferotemporal cortex of sheep. Our results demonstrate that partial Granger causality can reveal the underlying interactions among elements in a network in the presence of exogenous inputs and latent variables in many cases where the existing conditional Granger causality fails.
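For orientation, a bare-bones pairwise Granger causality computation via least squares; note this omits the paper's partial correction for exogenous inputs and latent variables, which is its actual contribution (lag order and data are illustrative):

```python
import numpy as np

def granger_variance_ratio(x, y, p=2):
    """Does x Granger-cause y? Compare residual variance of an AR(p) model of y
    against the same model augmented with lagged values of x."""
    n = len(y)
    Y = y[p:]
    lags_y = np.column_stack([y[p - k:n - k] for k in range(1, p + 1)])
    lags_x = np.column_stack([x[p - k:n - k] for k in range(1, p + 1)])
    ones = np.ones((n - p, 1))

    A1 = np.hstack([ones, lags_y])             # restricted model
    A2 = np.hstack([ones, lags_y, lags_x])     # unrestricted model
    r1 = Y - A1 @ np.linalg.lstsq(A1, Y, rcond=None)[0]
    r2 = Y - A2 @ np.linalg.lstsq(A2, Y, rcond=None)[0]
    return np.log(r1.var() / r2.var())         # > 0 suggests x helps predict y

rng = np.random.default_rng(0)
x = rng.normal(size=500)
y = 0.8 * np.roll(x, 1) + rng.normal(scale=0.5, size=500)  # y driven by lagged x
print(granger_variance_ratio(x, y))   # clearly positive
print(granger_variance_ratio(y, x))   # near zero
```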
Decina, Stephen M; Templer, Pamela H; Hutyra, Lucy R; Gately, Conor K; Rao, Preeti
2017-12-31
Atmospheric deposition of nitrogen (N) is a major input of N to the biosphere and is elevated beyond preindustrial levels throughout many ecosystems. Deposition monitoring networks in the United States generally avoid urban areas in order to capture regional patterns of N deposition, and studies measuring N deposition in cities usually include only one or two urban sites in an urban-rural comparison or as an anchor along an urban-to-rural gradient. Describing patterns and drivers of atmospheric N inputs is crucial for understanding the effects of N deposition; however, little is known about the variability and drivers of atmospheric N inputs or their effects on soil biogeochemistry within urban ecosystems. We measured rates of canopy throughfall N as a measure of atmospheric N inputs, as well as soil net N mineralization and nitrification, soil solution N, and soil respiration at 15 sites across the greater Boston, Massachusetts area. Rates of throughfall N are 8.70 ± 0.68 kg N ha⁻¹ yr⁻¹, vary 3.5-fold across sites, and are positively correlated with rates of local vehicle N emissions. Ammonium (NH₄⁺) composes 69.9 ± 2.2% of inorganic throughfall N inputs and is highest in late spring, suggesting a contribution from local fertilizer inputs. Soil solution NO₃⁻ is positively correlated with throughfall NO₃⁻ inputs. In contrast, soil solution NH₄⁺, net N mineralization, nitrification, and soil respiration are not correlated with rates of throughfall N inputs. Rather, these processes are correlated with soil properties such as soil organic matter. Our results demonstrate high variability in rates of urban throughfall N inputs, correlation of throughfall N inputs with local vehicle N emissions, and a decoupling of urban soil biogeochemistry and throughfall N inputs.
System, method and apparatus for conducting a keyterm search
NASA Technical Reports Server (NTRS)
McGreevy, Michael W. (Inventor)
2004-01-01
A keyterm search is a method of searching a database for subsets of the database that are relevant to an input query. First, a number of relational models of subsets of a database are provided. A query is then input. The query can include one or more keyterms. Next, a gleaning model of the query is created. The gleaning model of the query is then compared to each one of the relational models of subsets of the database. The identifiers of the relevant subsets are then output.
System, method and apparatus for conducting a phrase search
NASA Technical Reports Server (NTRS)
McGreevy, Michael W. (Inventor)
2004-01-01
A phrase search is a method of searching a database for subsets of the database that are relevant to an input query. First, a number of relational models of subsets of a database are provided. A query is then input. The query can include one or more sequences of terms. Next, a relational model of the query is created. The relational model of the query is then compared to each one of the relational models of subsets of the database. The identifiers of the relevant subsets are then output.
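Both of these patents follow the same pattern: build a relational model of the query, compare it against precomputed relational models of database subsets, and output the identifiers of the best-matching subsets. A toy illustration of that pattern using term-pair co-occurrence counts and cosine similarity (a simplified stand-in for the patented gleaning/relational models):

```python
from collections import Counter
from math import sqrt

def relational_model(text, window=3):
    """Crude relational model: counts of term pairs co-occurring within a window."""
    terms = text.lower().split()
    pairs = Counter()
    for i, t in enumerate(terms):
        for u in terms[i + 1:i + window]:
            pairs[(t, u)] += 1
    return pairs

def cosine(a, b):
    dot = sum(v * b.get(k, 0) for k, v in a.items())
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# Subsets of a "database" and their precomputed relational models
docs = {
    "report-1": "engine fire warning light during climb",
    "report-2": "landing gear failed to retract after takeoff",
}
models = {k: relational_model(v) for k, v in docs.items()}

query = relational_model("fire warning during climb")
ranked = sorted(((cosine(query, m), k) for k, m in models.items()), reverse=True)
print(ranked)   # identifiers of the relevant subsets, best match first
```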
Dynamic modal estimation using instrumental variables
NASA Technical Reports Server (NTRS)
Salzwedel, H.
1980-01-01
A method to determine the modes of dynamical systems is described. The inputs and outputs of a system are Fourier transformed and averaged to reduce the error level. An instrumental variable method that estimates modal parameters from multiple correlations between responses of single input, multiple output systems is applied to estimate aircraft, spacecraft, and off-shore platform modal parameters.
Urban vs. Rural CLIL: An Analysis of Input-Related Variables, Motivation and Language Attainment
ERIC Educational Resources Information Center
Alejo, Rafael; Piquer-Píriz, Ana
2016-01-01
The present article carries out an in-depth analysis of the differences in motivation, input-related variables and linguistic attainment of the students at two content and language integrated learning (CLIL) schools operating within the same institutional and educational context, the Spanish region of Extremadura, and differing only in terms of…
Variable Input and the Acquisition of Plural Morphology
ERIC Educational Resources Information Center
Miller, Karen L.; Schmitt, Cristina
2012-01-01
The present article examines the effect of variable input on the acquisition of plural morphology in two varieties of Spanish: Chilean Spanish, where the plural marker is sometimes omitted due to a phonological process of syllable final /s/ lenition, and Mexican Spanish (of Mexico City), with no such lenition process. The goal of the study is to…
Precision digital pulse phase generator
McEwan, T.E.
1996-10-08
A timing generator comprises a crystal oscillator connected to provide an output reference pulse. A resistor-capacitor combination is connected to provide a variable-delay output pulse from an input connected to the crystal oscillator. A phase monitor is connected to provide duty-cycle representations of the reference and variable-delay output pulse phase. An operational amplifier drives a control voltage to the resistor-capacitor combination according to currents integrated from the phase monitor and injected into summing junctions. A digital-to-analog converter injects a control current into the summing junctions according to an input digital control code. A servo equilibrium results that provides a phase delay of the variable-delay output pulse to the output reference pulse that linearly depends on the input digital control code. 2 figs.
Precision digital pulse phase generator
McEwan, Thomas E.
1996-01-01
A timing generator comprises a crystal oscillator connected to provide an output reference pulse. A resistor-capacitor combination is connected to provide a variable-delay output pulse from an input connected to the crystal oscillator. A phase monitor is connected to provide duty-cycle representations of the reference and variable-delay output pulse phase. An operational amplifier drives a control voltage to the resistor-capacitor combination according to currents integrated from the phase monitor and injected into summing junctions. A digital-to-analog converter injects a control current into the summing junctions according to an input digital control code. A servo equilibrium results that provides a phase delay of the variable-delay output pulse to the output reference pulse that linearly depends on the input digital control code.
Regenerative braking device with rotationally mounted energy storage means
Hoppie, Lyle O.
1982-03-16
A regenerative braking device for an automotive vehicle includes an energy storage assembly (12) having a plurality of rubber rollers (26, 28) mounted for rotation between an input shaft (30) and an output shaft (32), clutches (50, 56) and brakes (52, 58) associated with each shaft, and a continuously variable transmission (22) connectable to a vehicle drivetrain and to the input and output shafts by the respective clutches. In a second embodiment the clutches and brakes are dispensed with and the variable ratio transmission is connected directly across the input and output shafts. In both embodiments the rubber rollers are torsionally stressed to accumulate energy from the vehicle when the input shaft rotates faster than or relative to the output shaft, and are torsionally relaxed to deliver energy to the vehicle when the output shaft rotates faster than or relative to the input shaft.
NASA Technical Reports Server (NTRS)
Carlson, C. R.
1981-01-01
The user documentation of the SYSGEN model and its links with other simulations is described. SYSGEN is a production costing and reliability model of electric utility systems. Hydroelectric, storage, and time-dependent generating units are modeled in addition to conventional generating plants. Input variables, modeling options, output variables, and report formats are explained. SYSGEN can also be run interactively by using a program called FEPS (Front End Program for SYSGEN). A format for SYSGEN input variables designed for use with FEPS is presented.
12 CFR Appendix A to Subpart B of... - Risk-Based Capital Test Methodology and Specifications
Code of Federal Regulations, 2013 CFR
2013-01-01
… 3.2, Mortgage Amortization Schedule Inputs; 3-32, Loan Group Inputs for Mortgage Amortization …; Prepayment Explanatory Variables; 3.6.3.5.2, Multifamily Default and Prepayment Inputs; 3-38, Loan Group …; Loan Group Inputs for Gross Loss Severity; 3.3.4, Interest Rates Outputs; 3.6.3.3.4, Mortgage Amortization …
12 CFR Appendix A to Subpart B of... - Risk-Based Capital Test Methodology and Specifications
Code of Federal Regulations, 2011 CFR
2011-01-01
… 3.2, Mortgage Amortization Schedule Inputs; 3-32, Loan Group Inputs for Mortgage Amortization …; Prepayment Explanatory Variables; 3.6.3.5.2, Multifamily Default and Prepayment Inputs; 3-38, Loan Group …; Loan Group Inputs for Gross Loss Severity; 3.3.4, Interest Rates Outputs; 3.6.3.3.4, Mortgage Amortization …
12 CFR Appendix A to Subpart B of... - Risk-Based Capital Test Methodology and Specifications
Code of Federal Regulations, 2012 CFR
2012-01-01
… 3.2, Mortgage Amortization Schedule Inputs; 3-32, Loan Group Inputs for Mortgage Amortization …; Prepayment Explanatory Variables; 3.6.3.5.2, Multifamily Default and Prepayment Inputs; 3-38, Loan Group …; Loan Group Inputs for Gross Loss Severity; 3.3.4, Interest Rates Outputs; 3.6.3.3.4, Mortgage Amortization …
12 CFR Appendix A to Subpart B of... - Risk-Based Capital Test Methodology and Specifications
Code of Federal Regulations, 2014 CFR
2014-01-01
… 3.2, Mortgage Amortization Schedule Inputs; 3-32, Loan Group Inputs for Mortgage Amortization …; Prepayment Explanatory Variables; 3.6.3.5.2, Multifamily Default and Prepayment Inputs; 3-38, Loan Group …; Loan Group Inputs for Gross Loss Severity; 3.3.4, Interest Rates Outputs; 3.6.3.3.4, Mortgage Amortization …
Schlessinger, Daniel I; Iyengar, Sanjana; Yanes, Arianna F; Chiren, Sarah G; Godinez-Puig, Victoria; Chen, Brian R; Kurta, Anastasia O; Schmitt, Jochen; Deckert, Stefanie; Furlan, Karina C; Poon, Emily; Cartee, Todd V; Maher, Ian A; Alam, Murad; Sobanko, Joseph F
2017-07-12
Squamous cell carcinoma (SCC) is a common skin cancer that poses a risk of metastasis. Clinical investigations into SCC treatment are common, but the outcomes reported are highly variable, omitted, or clinically irrelevant. The outcome heterogeneity and reporting bias of these studies leave clinicians unable to accurately compare studies. Core outcome sets (COSs) are an agreed minimum set of outcomes recommended to be measured and reported in all clinical trials of a given condition or disease. Although COSs are under development for several dermatologic conditions, work has yet to be done to identify core outcomes specific to SCC. Outcome extraction for COS generation will occur via four methods: (1) systematic literature review; (2) patient interviews; (3) other published sources; and (4) input from stakeholders in medicine, pharmacy, and other relevant industries. The list of outcomes will be reevaluated by the Measuring PRiority Outcome Variables via Excellence in Dermatologic surgery (IMPROVED) Steering Committee. Delphi processes will be performed separately by expert clinicians and patients to condense the list of outcomes generated. A consensus meeting with relevant stakeholders will be conducted after the Delphi exercise to further select outcomes, taking into account participant scores. At the end of the meeting, members will vote and decide on a final recommended set of core outcomes. The Core Outcome Measures in Effectiveness Trials (COMET) organization and the Cochrane Skin Group - Core Outcome Set Initiative (CSG-COUSIN) will serve as advisers throughout the COS generation process. Comparison of clinical trials via systematic reviews and meta-analyses is facilitated when investigators study outcomes that are relevant and similar. The aim of this project is to develop a COS to guide future clinical trials.
Climate, soil water storage, and the average annual water balance
Milly, P.C.D.
1994-01-01
This paper describes the development and testing of the hypothesis that the long-term water balance is determined only by the local interaction of fluctuating water supply (precipitation) and demand (potential evapotranspiration), mediated by water storage in the soil. Adoption of this hypothesis, together with idealized representations of relevant input variabilities in time and space, yields a simple model of the water balance of a finite area having a uniform climate. The partitioning of average annual precipitation into evapotranspiration and runoff depends on seven dimensionless numbers: the ratio of average annual potential evapotranspiration to average annual precipitation (index of dryness); the ratio of the spatial average plant-available water-holding capacity of the soil to the annual average precipitation amount; the mean number of precipitation events per year; the shape parameter of the gamma distribution describing spatial variability of storage capacity; and simple measures of the seasonality of mean precipitation intensity, storm arrival rate, and potential evapotranspiration. The hypothesis is tested in an application of the model to the United States east of the Rocky Mountains, with no calibration. Study area averages of runoff and evapotranspiration, based on observations, are 263 mm and 728 mm, respectively; the model yields corresponding estimates of 250 mm and 741 mm, respectively, and explains 88% of the geographical variance of observed runoff within the study region. The differences between modeled and observed runoff can be explained by uncertainties in the model inputs and in the observed runoff. In the humid (index of dryness <1) parts of the study area, the dominant factor producing runoff is the excess of annual precipitation over annual potential evapotranspiration, but runoff caused by variability of supply and demand over time is also significant; in the arid (index of dryness >1) parts, all of the runoff is caused by variability of forcing over time. Contributions to model runoff attributable to small-scale spatial variability of storage capacity are insignificant throughout the study area. The consistency of the model with observational data is supportive of the supply-demand-storage hypothesis, which neglects infiltration excess runoff and other finite-permeability effects on the soil water balance.
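The pivotal quantity is the index of dryness, the ratio of average annual potential evapotranspiration to precipitation. As a rough illustration of how such an index partitions precipitation into evapotranspiration and runoff, here is the classical Budyko curve as a stand-in for the paper's storage-based model (the paper's seven-parameter formulation is not reproduced here):

```python
import numpy as np

def budyko_evaporation_ratio(dryness):
    """Classical Budyko (1974) curve for E/P as a function of PET/P; used here
    only as a stand-in for the supply-demand-storage model described above."""
    phi = np.asarray(dryness, dtype=float)
    return np.sqrt(phi * np.tanh(1.0 / phi) * (1.0 - np.exp(-phi)))

P = 1000.0                              # mean annual precipitation, mm
for pet in (500.0, 1000.0, 2000.0):     # mean annual potential evapotranspiration, mm
    phi = pet / P                       # index of dryness
    E = P * budyko_evaporation_ratio(phi)
    print(f"dryness={phi:.1f}  ET={E:.0f} mm  runoff={P - E:.0f} mm")
```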
The Effect of Visual Variability on the Learning of Academic Concepts.
Bourgoyne, Ashley; Alt, Mary
2017-06-10
The purpose of this study was to identify effects of variability of visual input on development of conceptual representations of academic concepts for college-age students with normal language (NL) and those with language-learning disabilities (LLD). Students with NL (n = 11) and LLD (n = 11) participated in a computer-based training for introductory biology course concepts. Participants were trained on half the concepts under a low-variability condition and half under a high-variability condition. Participants completed a posttest in which they were asked to identify and rate the accuracy of novel and trained visual representations of the concepts. We performed separate repeated measures analyses of variance to examine the accuracy of identification and ratings. Participants were equally accurate on trained and novel items in the high-variability condition, but were less accurate on novel items only in the low-variability condition. The LLD group showed the same pattern as the NL group; they were just less accurate. Results indicated that high-variability visual input may facilitate the acquisition of academic concepts in college students with NL and LLD. High-variability visual input may be especially beneficial for generalization to novel representations of concepts. Implicit learning methods may be harnessed by college courses to provide students with basic conceptual knowledge when they are entering courses or beginning new units.
Whiteway, Matthew R; Butts, Daniel A
2017-03-01
The activity of sensory cortical neurons is not only driven by external stimuli but also shaped by other sources of input to the cortex. Unlike external stimuli, these other sources of input are challenging to experimentally control, or even observe, and as a result contribute to variability of neural responses to sensory stimuli. However, such sources of input are likely not "noise" and may play an integral role in sensory cortex function. Here we introduce the rectified latent variable model (RLVM) in order to identify these sources of input using simultaneously recorded cortical neuron populations. The RLVM is novel in that it employs nonnegative (rectified) latent variables and is much less restrictive in the mathematical constraints on solutions because of the use of an autoencoder neural network to initialize model parameters. We show that the RLVM outperforms principal component analysis, factor analysis, and independent component analysis, using simulated data across a range of conditions. We then apply this model to two-photon imaging of hundreds of simultaneously recorded neurons in mouse primary somatosensory cortex during a tactile discrimination task. Across many experiments, the RLVM identifies latent variables related to both the tactile stimulation as well as nonstimulus aspects of the behavioral task, with a majority of activity explained by the latter. These results suggest that properly identifying such latent variables is necessary for a full understanding of sensory cortical function and demonstrate novel methods for leveraging large population recordings to this end. NEW & NOTEWORTHY The rapid development of neural recording technologies presents new opportunities for understanding patterns of activity across neural populations. Here we show how a latent variable model with appropriate nonlinear form can be used to identify sources of input to a neural population and infer their time courses. Furthermore, we demonstrate how these sources are related to behavioral contexts outside of direct experimental control.
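A minimal sketch of the core idea, an autoencoder whose latent layer is rectified so the inferred latent variables are nonnegative; layer sizes, training loop, and data are illustrative placeholders, not the published RLVM fitting procedure:

```python
import torch
import torch.nn as nn

# RLVM-style autoencoder: nonnegative (rectified) latent variables inferred
# from simultaneously recorded population activity. Sizes/data are illustrative.
n_neurons, n_latents, T = 100, 5, 2000
activity = torch.rand(T, n_neurons)        # stand-in for two-photon activity traces

class RectifiedAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.encode = nn.Sequential(nn.Linear(n_neurons, n_latents), nn.ReLU())  # z >= 0
        self.decode = nn.Linear(n_latents, n_neurons)

    def forward(self, x):
        z = self.encode(x)                 # rectified latent time courses
        return self.decode(z), z

model = RectifiedAutoencoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(200):                       # reconstruction-loss training
    recon, _ = model(activity)
    loss = ((recon - activity) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

with torch.no_grad():
    _, latents = model(activity)           # inferred sources of shared input
print(latents.shape)                       # (T, n_latents), all values >= 0
```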
Improving Water Resources System Operation by Direct Use of Hydroclimatic Information
NASA Astrophysics Data System (ADS)
Castelletti, A.; Pianosi, F.
2011-12-01
It is generally agreed that more information translates into better decisions. For instance, the availability of inflow predictions can improve reservoir operation; soil moisture data can be exploited to increase irrigation efficiency; etc. However, beyond this general statement, many theoretical and practical questions remain open. Provided that not all information sources are equally relevant, how does their value depend on the physical features of the water system and on the purposes of the system operation? What is the minimum lead time needed for anticipatory management to be effective? How does uncertainty in the information propagate through the modelling chain, from hydroclimatic data through descriptive and decision models, and finally affect the decision? Is the data-predictions-decision paradigm truly effective, or would it be better to directly use hydroclimatic data to take optimal decisions, skipping the intermediate step of hydrological forecasting? In this work we investigate these issues by application to the management of a complex water system in Northern Vietnam, characterized by multiple, conflicting objectives including hydropower production, flood control and water supply. First, we quantify the value of hydroclimatic information as the improvement in the system performances that could be attained under the (ideal) assumption of perfect knowledge of all future meteorological and hydrological input. Then, we assess and compare the relevance of different candidate information (meteorological or hydrological observations; ground or remote data; etc.) for the purpose of system operation by novel Input Variable Selection techniques. Finally, we evaluate the performance improvement made possible by the use of such information in re-designing the system operation.
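One simple instance of input variable selection is ranking candidate information sources by their mutual information with the quantity the operator needs to anticipate. A hedged sketch with scikit-learn, using synthetic stand-ins for the candidate data sources:

```python
import numpy as np
from sklearn.feature_selection import mutual_info_regression

rng = np.random.default_rng(0)
# Candidate information sources: columns stand in for rainfall, upstream flow,
# soil moisture, and a remote-sensing index (all synthetic placeholders)
X = rng.normal(size=(500, 4))
inflow = 2.0 * X[:, 1] + 0.5 * X[:, 2] + rng.normal(scale=0.3, size=500)

mi = mutual_info_regression(X, inflow)
ranking = np.argsort(mi)[::-1]
print("candidate relevance (nats):", mi.round(3))
print("selection order:", ranking)
```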
Not All Children Agree: Acquisition of Agreement when the Input Is Variable
ERIC Educational Resources Information Center
Miller, Karen
2012-01-01
In this paper we investigate the effect of variable input on the acquisition of grammar. More specifically, we examine the acquisition of the third person singular marker -s on the auxiliary "do" in comprehension and production in two groups of children who are exposed to similar varieties of English but that differ with respect to adult…
Inthachot, Montri; Boonjing, Veera; Intakosum, Sarun
2016-01-01
This study investigated the use of an Artificial Neural Network (ANN) and a Genetic Algorithm (GA) for prediction of Thailand's SET50 index trend. ANN is a widely accepted machine learning method that uses past data to predict future trends, while GA is an algorithm that can find better subsets of input variables for importing into the ANN, enabling more accurate prediction through efficient feature selection. The imported data were technical indicators highly regarded by stock analysts, each represented by 4 input variables based on past time spans of 4 different lengths: 3-, 5-, 10-, and 15-day spans before the day of prediction. This import generated a large set of diverse input variables with an exponentially higher number of possible subsets, which the GA culled down to a manageable number of more effective ones. SET50 index data from the past 6 years, 2009 to 2014, were used to evaluate the prediction accuracy of this hybrid intelligence, and the hybrid's predictions were found to be more accurate than those made by a method using only one input variable for one fixed length of past time span.
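A compact sketch of the hybrid: a genetic algorithm searching over binary input-variable masks, scoring each mask by the cross-validated accuracy of an ANN trained on the selected inputs. All data, sizes, and GA settings are illustrative, not the study's configuration:

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
n_features = 16                                  # e.g., 4 indicators x 4 time spans
X = rng.normal(size=(300, n_features))           # synthetic indicator values
y = (X[:, 0] + X[:, 5] > 0).astype(int)          # synthetic up/down trend labels

def fitness(mask):
    """Cross-validated ANN accuracy using only the inputs selected by the mask."""
    if not mask.any():
        return 0.0
    ann = MLPClassifier(hidden_layer_sizes=(8,), max_iter=300, random_state=0)
    return cross_val_score(ann, X[:, mask], y, cv=3).mean()

pop = rng.integers(0, 2, (12, n_features)).astype(bool)   # random initial masks
for _ in range(5):                                        # a few GA generations
    scores = np.array([fitness(m) for m in pop])
    parents = pop[np.argsort(scores)[-6:]]                # keep the best half
    children = []
    for _ in range(6):
        a, b = parents[rng.integers(6, size=2)]
        child = np.where(rng.random(n_features) < 0.5, a, b)  # uniform crossover
        child ^= rng.random(n_features) < 0.05                # bit-flip mutation
        children.append(child)
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(m) for m in pop])]
print("selected input variables:", np.flatnonzero(best))
```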
Design and analysis of composite structures with stress concentrations
NASA Technical Reports Server (NTRS)
Garbo, S. P.
1983-01-01
An overview of an analytic procedure which can be used to provide comprehensive stress and strength analysis of composite structures with stress concentrations is given. The methodology provides designer/analysts with a user-oriented procedure which, within acceptable engineering accuracy, accounts for the effects of a wide range of application design variables. The procedure permits the strength of arbitrary laminate constructions under general bearing/bypass load conditions to be predicted with only unnotched unidirectional strength and stiffness input data required. Included is a brief discussion of the relevance of this analysis to the design of primary aircraft structure; an overview of the analytic procedure with theory/test correlations; and an example of the use and interaction of this strength analysis relative to the design of high-load-transfer bolted composite joints.
Structurally Dynamic Spin Market Networks
NASA Astrophysics Data System (ADS)
Horváth, Denis; Kuscsik, Zoltán
An agent-based model of stock price dynamics on a directed evolving complex network is suggested and studied by direct simulation. The stationary regime is maintained as a result of the balance between the extremal dynamics, the adaptivity of strategic variables, and the reconnection rules. The inherent structure of the node agent "brain" is modeled by a recursive neural network with local and global inputs and feedback connections. For specific parameter combinations the complex network displays the small-world phenomenon combined with scale-free behavior. The identification of a local leader (a network hub, an agent whose strategies are frequently adapted by its neighbors) is carried out by a repeated random-walk process through the network. The simulations show empirically relevant dynamics of price returns and volatility clustering. An additional emerging aspect of stylized market statistics is the Zipfian distribution of fitness.
NASA Astrophysics Data System (ADS)
Abiri, Olufunminiyi; Twala, Bhekisipho
2017-08-01
In this paper, a multilayer feedforward neural network constitutive model with Bayesian regularization is developed for alloy 316L during high strain rate and high temperature plastic deformation. The input variables are strain rate, temperature and strain, while the output value is the flow stress of the material. The results show that the use of the Bayesian regularization technique reduces the potential for overfitting and overtraining, thereby improving the prediction quality of the model. The model predictions are in good agreement with experimental measurements. The measurement data used for network training and model comparison were taken from the relevant literature. The developed model is robust, as it can be generalized to deformation conditions slightly below or above those of the training dataset.
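True Bayesian regularization tunes the weight penalty via the evidence framework (as in MATLAB's trainbr); a rough Python analogue uses a fixed L2 penalty in an MLP. A sketch with scikit-learn, with synthetic placeholders for the flow-stress data:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Inputs: strain rate (1/s), temperature (K), strain; output: flow stress (MPa).
# All values below are synthetic placeholders, not the literature data.
strain_rate = 10 ** rng.uniform(2, 4, 200)
temperature = rng.uniform(900, 1400, 200)
strain = rng.uniform(0.05, 0.5, 200)
X = np.column_stack([strain_rate, temperature, strain])
y = 80 + 0.1 * strain_rate ** 0.2 * (1500 - temperature) * strain + rng.normal(0, 5, 200)

# alpha is a fixed L2 weight penalty; true Bayesian regularization would tune
# this automatically via the evidence framework
model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(10,), alpha=1e-2, max_iter=5000, random_state=0),
)
model.fit(X, y)
print(model.score(X, y))   # in-sample R^2 of the fitted constitutive model
```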
Semi-Automated Annotation of Biobank Data Using Standard Medical Terminologies in a Graph Database.
Hofer, Philipp; Neururer, Sabrina; Goebel, Georg
2016-01-01
Data describing biobank resources frequently contains unstructured free-text information or insufficient coding standards. (Bio-) medical ontologies like Orphanet Rare Diseases Ontology (ORDO) or the Human Disease Ontology (DOID) provide a high number of concepts, synonyms and entity relationship properties. Such standard terminologies increase quality and granularity of input data by adding comprehensive semantic background knowledge from validated entity relationships. Moreover, cross-references between terminology concepts facilitate data integration across databases using different coding standards. In order to encourage the use of standard terminologies, our aim is to identify and link relevant concepts with free-text diagnosis inputs within a biobank registry. Relevant concepts are selected automatically by lexical matching and SPARQL queries against a RDF triplestore. To ensure correctness of annotations, proposed concepts have to be confirmed by medical data administration experts before they are entered into the registry database. Relevant (bio-) medical terminologies describing diseases and phenotypes were identified and stored in a graph database which was tied to a local biobank registry. Concept recommendations during data input trigger a structured description of medical data and facilitate data linkage between heterogeneous systems.
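The matching step can be illustrated with rdflib: load a terminology slice into a graph and run a SPARQL query that lexically matches a free-text diagnosis against concept labels. The file name is hypothetical and the query is a simplified stand-in for the described pipeline:

```python
from rdflib import Graph

g = Graph()
g.parse("doid_subset.owl", format="xml")   # hypothetical slice of a disease ontology

diagnosis = "asthma"                        # free-text diagnosis input from the registry
q = """
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
SELECT ?concept ?label WHERE {
    ?concept rdfs:label ?label .
    FILTER(CONTAINS(LCASE(STR(?label)), "%s"))
}
""" % diagnosis.lower()

for concept, label in g.query(q):
    print(concept, label)   # candidate annotations, to be confirmed by experts
```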
Miller, Brian W.; Symstad, Amy J.; Frid, Leonardo; Fisichelli, Nicholas A.; Schuurman, Gregor W.
2017-01-01
Simulation models can represent complexities of the real world and serve as virtual laboratories for asking “what if…?” questions about how systems might respond to different scenarios. However, simulation models have limited relevance to real-world applications when designed without input from people who could use the simulated scenarios to inform their decisions. Here, we report on a state-and-transition simulation model of vegetation dynamics that was coupled to a scenario planning process and co-produced by researchers, resource managers, local subject-matter experts, and climate change adaptation specialists to explore potential effects of climate scenarios and management alternatives on key resources in southwest South Dakota. Input from management partners and local experts was critical for representing key vegetation types, bison and cattle grazing, exotic plants, fire, and the effects of climate change and management on rangeland productivity and composition given the paucity of published data on many of these topics. By simulating multiple land management jurisdictions, climate scenarios, and management alternatives, the model highlighted important tradeoffs between grazer density and vegetation composition, as well as between the short- and long-term costs of invasive species management. It also pointed to impactful uncertainties related to the effects of fire and grazing on vegetation. More broadly, a scenario-based approach to model co-production bracketed the uncertainty associated with climate change and ensured that the most important (and impactful) uncertainties related to resource management were addressed. This cooperative study demonstrates six opportunities for scientists to engage users throughout the modeling process to improve model utility and relevance: (1) identifying focal dynamics and variables, (2) developing conceptual model(s), (3) parameterizing the simulation, (4) identifying relevant climate scenarios and management alternatives, (5) evaluating and refining the simulation, and (6) interpreting the results. We also reflect on lessons learned and offer several recommendations for future co-production efforts, with the aim of advancing the pursuit of usable science.
INFANT HEALTH PRODUCTION FUNCTIONS: WHAT A DIFFERENCE THE DATA MAKE
Reichman, Nancy E.; Corman, Hope; Noonan, Kelly; Dave, Dhaval
2008-01-01
We examine the extent to which infant health production functions are sensitive to model specification and measurement error. We focus on the importance of typically unobserved but theoretically important variables (TUVs), other non-standard covariates (NSCs), input reporting, and characterization of infant health. The TUVs represent wantedness, taste for risky behavior, and maternal health endowment. The NSCs include father characteristics. We estimate the effects of prenatal drug use, prenatal cigarette smoking, and first-trimester prenatal care on birth weight, low birth weight, and a measure of abnormal infant health conditions. We compare estimates using self-reported inputs versus input measures that combine information from medical records and self-reports. We find that TUVs and NSCs are significantly associated with both inputs and outcomes, but that excluding them from infant health production functions does not appreciably affect the input estimates. However, using self-reported inputs leads to overestimated effects of inputs, particularly prenatal care, on outcomes, and using a direct measure of infant health does not always yield input estimates similar to those obtained when using birth weight outcomes. The findings have implications for research, data collection, and public health policy.
A priori discretization quality metrics for distributed hydrologic modeling applications
NASA Astrophysics Data System (ADS)
Liu, Hongli; Tolson, Bryan; Craig, James; Shafii, Mahyar; Basu, Nandita
2016-04-01
In distributed hydrologic modelling, a watershed is treated as a set of small homogeneous units that address the spatial heterogeneity of the watershed being simulated. The ability of models to reproduce observed spatial patterns firstly depends on the spatial discretization, which is the process of defining homogeneous units in the form of grid cells, subwatersheds, or hydrologic response units etc. It is common for hydrologic modelling studies to simply adopt a nominal or default discretization strategy without formally assessing alternative discretization levels. This approach lacks formal justifications and is thus problematic. More formalized discretization strategies are either a priori or a posteriori with respect to building and running a hydrologic simulation model. A posteriori approaches tend to be ad-hoc and compare model calibration and/or validation performance under various watershed discretizations. The construction and calibration of multiple versions of a distributed model can become a seriously limiting computational burden. Current a priori approaches are more formalized and compare overall heterogeneity statistics of dominant variables between candidate discretization schemes and input data or reference zones. While a priori approaches are efficient and do not require running a hydrologic model, they do not fully investigate the internal spatial pattern changes of variables of interest. Furthermore, the existing a priori approaches focus on landscape and soil data and do not assess impacts of discretization on stream channel definition even though its significance has been noted by numerous studies. The primary goals of this study are to (1) introduce new a priori discretization quality metrics considering the spatial pattern changes of model input data; (2) introduce a two-step discretization decision-making approach to compress extreme errors and meet user-specified discretization expectations through non-uniform discretization threshold modification. The metrics for the first time provides quantification of the routing relevant information loss due to discretization according to the relationship between in-channel routing length and flow velocity. Moreover, it identifies and counts the spatial pattern changes of dominant hydrological variables by overlaying candidate discretization schemes upon input data and accumulating variable changes in area-weighted way. The metrics are straightforward and applicable to any semi-distributed or fully distributed hydrological model with grid scales are greater than input data resolutions. The discretization metrics and decision-making approach are applied to the Grand River watershed located in southwestern Ontario, Canada where discretization decisions are required for a semi-distributed modelling application. Results show that discretization induced information loss monotonically increases as discretization gets rougher. With regards to routing information loss in subbasin discretization, multiple interesting points rather than just the watershed outlet should be considered. Moreover, subbasin and HRU discretization decisions should not be considered independently since subbasin input significantly influences the complexity of HRU discretization result. 
Finally, results show that the common and convenient approach of making uniform discretization decisions across the watershed domain performs worse than a metric-informed non-uniform discretization approach, since the latter is able to conserve more watershed heterogeneity under the same model complexity (number of computational units).
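The area-weighted accumulation described above lends itself to a compact illustration. The following sketch (hypothetical code, not the authors'; the zone layouts and terrain attribute are invented) overlays two candidate discretization schemes on a gridded input variable and accumulates each cell's absolute deviation from its unit mean, showing that coarser units conserve less heterogeneity:

```python
import numpy as np

def heterogeneity_loss(values, zones, cell_area=1.0):
    # Area-weighted loss from lumping a gridded variable into
    # homogeneous units: accumulate each cell's absolute deviation
    # from its zone mean, weighted by cell area.
    loss = 0.0
    for z in np.unique(zones):
        cells = values[zones == z]
        loss += np.abs(cells - cells.mean()).sum() * cell_area
    return loss / (values.size * cell_area)

# Hypothetical smooth terrain attribute on a 60x60 grid.
xx, yy = np.meshgrid(np.arange(60.0), np.arange(60.0))
attr = xx + yy
fine = (yy // 6) * 10 + (xx // 6)     # 100 units of 6x6 cells
coarse = (yy // 30) * 2 + (xx // 30)  # 4 units of 30x30 cells
assert heterogeneity_loss(attr, fine) < heterogeneity_loss(attr, coarse)
```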
Thomas, R.E.
1959-01-20
An electronic circuit is presented for automatically computing the product of two selected variables by multiplying the voltage pulses proportional to the variables. The multiplier circuit has a plurality of parallel resistors of predetermined values connected through separate gate circuits between a first input and the output terminal. One voltage pulse is applied to the first input while the second voltage pulse is applied to control circuitry for the respective gate circuits. The magnitude of the second voltage pulse selects the resistors upon which the first voltage pulse is impressed, whereby the resultant output voltage is proportional to the product of the input voltage pulses.
Higher order visual input to the mushroom bodies in the bee, Bombus impatiens.
Paulk, Angelique C; Gronenberg, Wulfila
2008-11-01
To produce appropriate behaviors based on biologically relevant associations, sensory pathways conveying different modalities are integrated by higher-order central brain structures, such as insect mushroom bodies. To address this function of sensory integration, we characterized the structure and response of optic lobe (OL) neurons projecting to the calyces of the mushroom bodies in bees. Bees are well known for their visual learning and memory capabilities and their brains possess major direct visual input from the optic lobes to the mushroom bodies. To functionally characterize these visual inputs to the mushroom bodies, we recorded intracellularly from neurons in bumblebees (Apidae: Bombus impatiens) and a single neuron in a honeybee (Apidae: Apis mellifera) while presenting color and motion stimuli. All of the mushroom body input neurons were color sensitive while a subset was motion sensitive. Additionally, most of the mushroom body input neurons would respond to the first, but not to subsequent, presentations of repeated stimuli. In general, the medulla or lobula neurons projecting to the calyx signaled specific chromatic, temporal, and motion features of the visual world to the mushroom bodies, which included sensory information required for the biologically relevant associations bees form during foraging tasks.
Hand placement near the visual stimulus improves orientation selectivity in V2 neurons
Sergio, Lauren E.; Crawford, J. Douglas; Fallah, Mazyar
2015-01-01
Often, the brain receives more sensory input than it can process simultaneously. Spatial attention helps overcome this limitation by preferentially processing input from a behaviorally-relevant location. Recent neuropsychological and psychophysical studies suggest that attention is deployed to near-hand space much like how the oculomotor system can deploy attention to an upcoming gaze position. Here we provide the first neuronal evidence that the presence of a nearby hand enhances orientation selectivity in early visual processing area V2. When the hand was placed outside the receptive field, responses to the preferred orientation were significantly enhanced without a corresponding significant increase at the orthogonal orientation. Consequently, there was also a significant sharpening of orientation tuning. In addition, the presence of the hand reduced neuronal response variability. These results indicate that attention is automatically deployed to the space around a hand, improving orientation selectivity. Importantly, this appears to be optimal for motor control of the hand, as opposed to oculomotor mechanisms which enhance responses without sharpening orientation selectivity. Effector-based mechanisms for visual enhancement thus support not only the spatiotemporal dissociation of gaze and reach, but also the optimization of vision for their separate requirements for guiding movements. PMID:25717165
Prediction of surface distress using neural networks
NASA Astrophysics Data System (ADS)
Hamdi, Hadiwardoyo, Sigit P.; Correia, A. Gomes; Pereira, Paulo; Cortez, Paulo
2017-06-01
Road infrastructures contribute to a healthy economy through the sustainable distribution of goods and services. A road network requires appropriately programmed maintenance treatments in order to keep road assets in good condition, providing maximum safety for road users under a cost-effective approach. Surface distress is the key element in identifying road condition and may be generated by many different factors. In this paper, a new approach is proposed to predict Surface Distress Index (SDI) values following a data-driven approach. The model is then applied using data obtained from the Integrated Road Management System (IRMS) database. Artificial Neural Networks (ANNs) are used to predict the SDI index using input variables related to surface distress, i.e., crack area and width, pothole, rutting, patching and depression. The achieved results show that the ANN is able to predict SDI with a high correlation factor (R2 = 0.996). Moreover, a sensitivity analysis was applied to the ANN model, revealing the influence of the most relevant input parameters for SDI prediction, namely rutting (59.8%), crack width (29.9%), crack area (5.0%), patching (3.0%), pothole (1.7%) and depression (0.3%).
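The paper's exact network architecture and the IRMS records are not reproduced here, but the general setup (an ANN regressing SDI on six distress measures) can be sketched with scikit-learn; all data below are synthetic placeholders:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical distress records: columns mirror the paper's inputs
# (crack area, crack width, pothole, rutting, patching, depression).
rng = np.random.default_rng(1)
X = rng.random((500, 6))
sdi = 0.6 * X[:, 3] + 0.3 * X[:, 1] + 0.1 * X[:, 0]  # assumed target

model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(10,),
                                   max_iter=2000, random_state=0))
model.fit(X[:400], sdi[:400])
print("R2 on held-out data:", model.score(X[400:], sdi[400:]))
```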
Rispoli, Matthew; Holt, Janet K.
2017-01-01
Purpose This follow-up study examined whether a parent intervention that increased the diversity of lexical noun phrase subjects in parent input and accelerated children's sentence diversity (Hadley et al., 2017) had indirect benefits on tense/agreement (T/A) morphemes in parent input and children's spontaneous speech. Method Differences in input variables related to T/A marking were compared for parents who received toy talk instruction and a quasi-control group: input informativeness and full is declaratives. Language growth on tense agreement productivity (TAP) was modeled for 38 children from language samples obtained at 21, 24, 27, and 30 months. Parent input properties following instruction and children's growth in lexical diversity and sentence diversity were examined as predictors of TAP growth. Results Instruction increased parent use of full is declaratives (ηp² ≥ .25) but not input informativeness. Children's sentence diversity was also a significant time-varying predictor of TAP growth. Two input variables, lexical noun phrase subject diversity and full is declaratives, were also significant predictors, even after controlling for children's sentence diversity. Conclusions These findings establish a link between children's sentence diversity and the development of T/A morphemes and provide evidence about characteristics of input that facilitate growth in this grammatical system. PMID:28892819
Wesolowski, Edwin A.
1996-01-01
Two separate studies to simulate the effects of discharging treated wastewater to the Red River of the North at Fargo, North Dakota, and Moorhead, Minnesota, have been completed. In the first study, the Red River at Fargo Water-Quality Model was calibrated and verified for ice-free conditions. In the second study, the Red River at Fargo Ice-Cover Water-Quality Model was verified for ice-cover conditions. To better understand and apply the Red River at Fargo Water-Quality Model and the Red River at Fargo Ice-Cover Water-Quality Model, the uncertainty associated with simulated constituent concentrations and property values was analyzed and quantified using the Enhanced Stream Water Quality Model-Uncertainty Analysis. The Monte Carlo simulation and first-order error analysis methods were used to analyze the uncertainty in simulated values for six constituents and properties at sites 5, 10, and 14 (upstream to downstream order). The constituents and properties analyzed for uncertainty are specific conductance, total organic nitrogen (reported as nitrogen), total ammonia (reported as nitrogen), total nitrite plus nitrate (reported as nitrogen), 5-day carbonaceous biochemical oxygen demand for ice-cover conditions and ultimate carbonaceous biochemical oxygen demand for ice-free conditions, and dissolved oxygen. Results are given in detail for both the ice-cover and ice-free conditions for specific conductance, total ammonia, and dissolved oxygen. The sensitivity and uncertainty of the simulated constituent concentrations and property values to input variables differ substantially between ice-cover and ice-free conditions. During ice-cover conditions, simulated specific-conductance values are most sensitive to the headwater-source specific-conductance values upstream of site 10 and the point-source specific-conductance values downstream of site 10. These headwater-source and point-source specific-conductance values also are the key sources of uncertainty. Simulated total ammonia concentrations are most sensitive to the point-source total ammonia concentrations at all three sites. Other input variables that contribute substantially to the variability of simulated total ammonia concentrations are the headwater-source total ammonia and the instream reaction coefficient for biological decay of total ammonia to total nitrite. Simulated dissolved-oxygen concentrations at all three sites are most sensitive to the headwater-source dissolved-oxygen concentration. This input variable is the key source of variability for simulated dissolved-oxygen concentrations at sites 5 and 10. Headwater-source and point-source dissolved-oxygen concentrations are the key sources of variability for simulated dissolved-oxygen concentrations at site 14. During ice-free conditions, simulated specific-conductance values at all three sites are most sensitive to the headwater-source specific-conductance values. Headwater-source specific-conductance values also are the key source of uncertainty. The input variables to which total ammonia and dissolved oxygen are most sensitive vary from site to site and may or may not correspond to the input variables that contribute the most to the variability. The input variables that contribute the most to the variability of simulated total ammonia concentrations are point-source total ammonia, the instream reaction coefficient for biological decay of total ammonia to total nitrite, and Manning's roughness coefficient.
The input variables that contribute the most to the variability of simulated dissolved-oxygen concentrations are reaeration rate, sediment oxygen demand rate, and headwater-source algae as chlorophyll a.
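A minimal illustration of the Monte Carlo method used for such analyses: propagate assumed headwater-source and point-source input distributions through a toy conservative-mixing stand-in for the stream model, then rank the inputs by their correlation with the output. All distributions below are invented for illustration and are not the report's calibrated values:

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(2)
n = 10_000

# Assumed input distributions (illustrative only):
head = rng.normal(900.0, 60.0, n)     # headwater-source specific conductance, uS/cm
point = rng.normal(1400.0, 120.0, n)  # point-source specific conductance, uS/cm
frac = rng.uniform(0.1, 0.3, n)       # fraction of flow from the point source

# Simple conservative mixing as a stand-in for the full stream model.
sim = (1 - frac) * head + frac * point

print("mean =", round(sim.mean(), 1), "std =", round(sim.std(), 1))
for name, x in [("headwater", head), ("point source", point), ("flow fraction", frac)]:
    rho, _ = spearmanr(x, sim)        # rank correlation as a sensitivity index
    print(name, round(rho, 2))
```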
Kernel-PCA data integration with enhanced interpretability
2014-01-01
Background Nowadays, combining the different sources of information to improve the available biological knowledge is a challenge in bioinformatics. Among the most powerful methods for integrating heterogeneous data types are kernel-based methods. Kernel-based data integration approaches consist of two basic steps: firstly, the right kernel is chosen for each data set; secondly, the kernels from the different data sources are combined to give a complete representation of the available data for a given statistical task. Results We analyze the integration of data from several sources of information using kernel PCA, from the point of view of reducing dimensionality. Moreover, we improve the interpretability of kernel PCA by adding to the plot the representation of the input variables that belong to any dataset. In particular, for each input variable or linear combination of input variables, we can represent the direction of maximum growth locally, which allows us to identify those samples with higher/lower values of the variables analyzed. Conclusions The integration of different datasets and the simultaneous representation of samples and variables together give us a better understanding of biological knowledge. PMID:25032747
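The two-step recipe (one kernel per data set, then a combined kernel for the statistical task) can be sketched with scikit-learn; the data sources, kernel choice, and equal weights below are assumptions for illustration:

```python
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.decomposition import KernelPCA

# Two hypothetical data sources measured on the same 60 samples.
rng = np.random.default_rng(3)
expr = rng.random((60, 200))   # e.g., expression-like features
clin = rng.random((60, 12))    # e.g., clinical-like features

# Step 1: a kernel per data set; step 2: combine into one representation.
K = 0.5 * rbf_kernel(expr) + 0.5 * rbf_kernel(clin)

kpca = KernelPCA(n_components=2, kernel="precomputed")
scores = kpca.fit_transform(K)   # samples in the reduced space
print(scores.shape)              # (60, 2)
```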
NASA Astrophysics Data System (ADS)
Forsythe, N.; Blenkinsop, S.; Fowler, H. J.
2015-05-01
A three-step climate classification was applied to a spatial domain covering the Himalayan arc and adjacent plains regions using input data from four global meteorological reanalyses. Input variables were selected based on an understanding of the climatic drivers of regional water resource variability and crop yields. Principal component analysis (PCA) of those variables and k-means clustering on the PCA outputs revealed a reanalysis ensemble consensus for eight macro-climate zones. Spatial statistics of input variables for each zone revealed consistent, distinct climatologies. This climate classification approach has potential for enhancing assessment of climatic influences on water resources and food security as well as for characterising the skill and bias of gridded data sets, both meteorological reanalyses and climate models, for reproducing subregional climatologies. Through their spatial descriptors (area, geographic centroid, elevation mean and range), climate classifications also provide metrics, beyond simple changes in individual variables, with which to assess the magnitude of projected climate change. Such sophisticated metrics are of particular interest for regions, including mountainous areas, where natural and anthropogenic systems are expected to be sensitive to incremental climate shifts.
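A minimal sketch of the three-step pipeline (standardize, PCA, k-means on the PC scores) follows, with invented grid-cell data standing in for the reanalysis variables; four retained components and eight clusters mirror the numbers above:

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

# Hypothetical grid cells x climate variables (e.g., precipitation
# and temperature means and ranges) from one reanalysis.
rng = np.random.default_rng(4)
X = rng.random((5000, 10))

Z = StandardScaler().fit_transform(X)
pcs = PCA(n_components=4).fit_transform(Z)        # retain leading PCs
zones = KMeans(n_clusters=8, n_init=10, random_state=0).fit_predict(pcs)
print(np.bincount(zones))  # cell counts per macro-climate zone
```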
NASA Astrophysics Data System (ADS)
Hao, Wenrui; Lu, Zhenzhou; Li, Luyi
2013-05-01
In order to explore the contributions by correlated input variables to the variance of the output, a novel interpretation framework of importance measure indices is proposed for a model with correlated inputs, which includes the indices of the total correlated contribution and the total uncorrelated contribution. The proposed indices accurately describe the contributions by a correlated input to the variance of the output, and they can be viewed as a complement and correction of the interpretation of the contributions by correlated inputs presented in "Estimation of global sensitivity indices for models with dependent variables, Computer Physics Communications, 183 (2012) 937-946". Both of them contain the independent contribution by an individual input. Taking the general form of a quadratic polynomial as an illustration, the total correlated contribution and the independent contribution by an individual input are derived analytically, from which the components of both contributions of a correlated input, and their origins, can be clarified without ambiguity. In the special case that no square term is included in the quadratic polynomial model, the total correlated contribution by an input can be further decomposed into the variance contribution related to the correlation of the input with other inputs and the independent contribution by the input itself, and the total uncorrelated contribution can be further decomposed into the independent part contributed by interaction between the input and the others and the independent part contributed by the input itself. Numerical examples are employed, and their results demonstrate that the derived analytical expressions of the variance-based importance measure are correct; the analytical clarification of the correlated input contributions to the model output is important for extending the theory and solutions for uncorrelated inputs to the correlated case.
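For intuition, the decomposition can be checked numerically in the simplest linear special case (not the paper's quadratic derivation): for Y = a1*X1 + a2*X2 with correlated Gaussian inputs, the total contribution of X1, Cov(a1*X1, Y), splits into an uncorrelated part a1^2*s1^2 and a correlated part a1*a2*rho*s1*s2:

```python
import numpy as np

# Illustrative linear check; all coefficients are invented.
a1, a2, s1, s2, rho = 2.0, -1.0, 1.0, 0.5, 0.6
cov = [[s1**2, rho * s1 * s2], [rho * s1 * s2, s2**2]]
rng = np.random.default_rng(5)
X = rng.multivariate_normal([0, 0], cov, size=1_000_000)
Y = a1 * X[:, 0] + a2 * X[:, 1]

total_x1 = np.cov(a1 * X[:, 0], Y)[0, 1]          # Monte Carlo estimate
analytic = a1**2 * s1**2 + a1 * a2 * rho * s1 * s2  # 4.0 + (-0.6) = 3.4
print(round(total_x1, 2), analytic)
```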
Nitrogen (N) inputs to the landscape have been linked previously to N loads exported from watersheds at the national scale; however, stream N concentration is arguably more relevant than N load for drinking water quality, freshwater biological responses and establishment of nutri...
39 CFR 3010.12 - Contents of notice of rate adjustment.
Code of Federal Regulations, 2014 CFR
2014-07-01
... information must be supported by workpapers in which all calculations are shown and all input values, including all relevant CPI-U values, are listed with citations to the original sources. (2) A schedule... input values, including current rates, new rates, and billing determinants, are listed with citations to...
Schizophrenia: A Cognitive Model and Its Implications for Psychological Intervention.
ERIC Educational Resources Information Center
Hemsley, David R.
1996-01-01
Proposes a cognitive model of schizophrenia stating that schizophrenic behavior is caused by a disturbance in sensory input and stored material integration. Cites research to support this model. Outlines the manner in which a disturbance in sensory input integration relates to schizophrenic symptoms and discusses the model's relevance for…
Blade loss transient dynamics analysis. Volume 3: User's manual for TETRA program
NASA Technical Reports Server (NTRS)
Black, G. R.; Gallardo, V. C.; Storace, A. S.; Sagendorph, F.
1981-01-01
The user's manual for TETRA contains program logic, flow charts, error messages, input sheets, modeling instructions, option descriptions, input variable descriptions, and demonstration problems. The process of obtaining a NASTRAN 17.5-generated modal input file for TETRA is also described with a worked sample.
The appropriate spatial scale for a distributed energy balance model was investigated by: (a) determining the scale of variability associated with the remotely sensed and GIS-generated model input data; and (b) examining the effects of input data spatial aggregation on model resp...
Multiplexer and time duration measuring circuit
Gray, Jr., James
1980-01-01
A multiplexer device is provided for multiplexing data in the form of randomly developed, variable width pulses from a plurality of pulse sources to a master storage. The device includes a first multiplexer unit which includes a plurality of input circuits each coupled to one of the pulse sources, with all input circuits being disabled when one input circuit receives an input pulse so that only one input pulse is multiplexed by the multiplexer unit at any one time.
Moreno-González, R; Rodríguez-Mozaz, S; Gros, M; Pérez-Cánovas, E; Barceló, D; León, V M
2014-08-15
The seasonal occurrence and distribution of 69 pharmaceuticals along coastal watercourses during 6 sampling campaigns and their input through El Albujón watercourse to the Mar Menor lagoon were determined by UPLC-MS-MS, considering a total of 115 water samples. The major source of pharmaceuticals running into this watercourse was an effluent from the Los Alcazares WWTP, although other sources were also present (runoffs, excess water from irrigation, etc.). In this urban and agriculturally influenced watercourse different pharmaceutical distribution profiles were detected according to their attenuation, which depended on physicochemical water conditions, pollutant input variation, biodegradation and photodegradation rates of pollutants, etc. The less recalcitrant compounds in this study (macrolides, β-blockers, etc.) showed a relevant seasonal variability as a consequence of dissipation processes (degradation, sorption, etc.). Attenuation was lower, however, for diclofenac, carbamazepine, lorazepam, valsartan and sulfamethoxazole, among others, due to their known lower degradability and sorption onto particulate matter, according to previous studies. The maximum concentrations detected were higher than 1000 ng L(-1) for azithromycin, clarithromycin, valsartan, acetaminophen and ibuprofen. These high concentration levels were favored by the limited dilution in this low-flow system, and consequently some of them could pose an acute risk to the biota of this watercourse. Considering data from 2009 to 2010, it has been estimated that a total of 11.3 kg of pharmaceuticals enter the Mar Menor lagoon annually through the El Albujón watercourse. The highest proportion of this input corresponded to antibiotics (46%), followed by antihypertensives (20%) and diuretics (18%). Copyright © 2014 Elsevier B.V. All rights reserved.
The impact of 14-nm photomask uncertainties on computational lithography solutions
NASA Astrophysics Data System (ADS)
Sturtevant, John; Tejnil, Edita; Lin, Tim; Schultze, Steffen; Buck, Peter; Kalk, Franklin; Nakagawa, Kent; Ning, Guoxiang; Ackmann, Paul; Gans, Fritz; Buergel, Christian
2013-04-01
Computational lithography solutions rely upon accurate process models to faithfully represent the imaging system output for a defined set of process and design inputs. These models, which must balance accuracy demands with simulation runtime boundary conditions, rely upon the accurate representation of multiple parameters associated with the scanner and the photomask. While certain system input variables, such as scanner numerical aperture, can be empirically tuned to wafer CD data over a small range around the presumed set point, it can be dangerous to do so since CD errors can alias across multiple input variables. Therefore, many input variables for simulation are based upon designed or recipe-requested values or independent measurements. It is known, however, that certain measurement methodologies, while precise, can have significant inaccuracies. Additionally, there are known errors associated with the representation of certain system parameters. With shrinking total CD control budgets, appropriate accounting for all sources of error becomes more important, and the cumulative consequence of input errors to the computational lithography model can become significant. In this work, we examine, with a simulation sensitivity study, the impact of errors in the representation of photomask properties including CD bias, corner rounding, refractive index, thickness, and sidewall angle. The factors that are most critical to represent accurately in the model are cataloged. The CD bias values are based on state-of-the-art mask manufacturing data, while the changes in other variables are speculative, highlighting the need for improved metrology and awareness.
Prediction of problematic wine fermentations using artificial neural networks.
Román, R César; Hernández, O Gonzalo; Urtubia, U Alejandra
2011-11-01
Artificial neural networks (ANNs) have been used for the recognition of non-linear patterns, a characteristic of bioprocesses like wine production. In this work, ANNs were tested to predict problems of wine fermentation. A database of about 20,000 data from industrial fermentations of Cabernet Sauvignon and 33 variables was used. Two different ways of inputting data into the model were studied, by points and by fermentation. Additionally, different sub-cases were studied by varying the predictor variables (total sugar, alcohol, glycerol, density, organic acids and nitrogen compounds) and the time of fermentation (72, 96 and 256 h). The input of data by fermentations gave better results than the input of data by points. In fact, it was possible to predict 100% of normal and problematic fermentations using three predictor variables: sugars, density and alcohol at 72 h (3 days). Overall, ANNs were capable of obtaining 80% of prediction using only one predictor variable at 72 h; however, it is recommended to add more fermentations to confirm this promising result.
Amiryousefi, Mohammad Reza; Mohebbi, Mohebbat; Khodaiyan, Faramarz
2014-01-01
The objectives of this study were to use image analysis and artificial neural network (ANN) modeling to predict mass transfer kinetics as well as color changes and shrinkage of deep-fat fried ostrich meat cubes. Two generalized feedforward networks were separately developed using the operation conditions as inputs. Results based on the highest numerical quantities of the correlation coefficients between the experimental and predicted values showed proper fitting. Sensitivity analysis of the selected ANNs showed that, among the input variables, moisture content (MC) and fat content (FC) were most sensitive to frying temperature. Similarly, for the second ANN architecture, microwave power density was the most influential variable, having the maximum effect on both shrinkage percentage and color changes. Copyright © 2013 Elsevier Ltd. All rights reserved.
Metamodeling and mapping of nitrate flux in the unsaturated zone and groundwater, Wisconsin, USA
NASA Astrophysics Data System (ADS)
Nolan, Bernard T.; Green, Christopher T.; Juckem, Paul F.; Liao, Lixia; Reddy, James E.
2018-04-01
Nitrate contamination of groundwater in agricultural areas poses a major challenge to the sustainability of water resources. Aquifer vulnerability models are useful tools that can help resource managers identify areas of concern, but quantifying nitrogen (N) inputs in such models is challenging, especially at large spatial scales. We sought to improve regional nitrate (NO3-) input functions by characterizing unsaturated zone NO3- transport to groundwater through use of surrogate, machine-learning metamodels of a process-based N flux model. The metamodels used boosted regression trees (BRTs) to relate mappable landscape variables to parameters and outputs of a previous "vertical flux method" (VFM) applied at sampled wells in the Fox, Wolf, and Peshtigo (FWP) river basins in northeastern Wisconsin. In this context, the metamodels upscaled the VFM results throughout the region, and the VFM parameters and outputs are the metamodel response variables. The study area encompassed the domain of a detailed numerical model that provided additional predictor variables, including groundwater recharge, to the metamodels. We used a statistical learning framework to test a range of model complexities to identify suitable hyperparameters of the six BRT metamodels corresponding to each response variable of interest: NO3- source concentration factor (which determines the local NO3- input concentration); unsaturated zone travel time; NO3- concentration at the water table in 1980, 2000, and 2020 (three separate metamodels); and NO3- "extinction depth", the eventual steady state depth of the NO3- front. The final metamodels were trained to 129 wells within the active numerical flow model area, and considered 58 mappable predictor variables compiled in a geographic information system (GIS). These metamodels had training and cross-validation testing R2 values of 0.52 - 0.86 and 0.22 - 0.38, respectively, and predictions were compiled as maps of the above response variables. Testing performance was reasonable, considering that we limited the metamodel predictor variables to mappable factors as opposed to using all available VFM input variables. Relationships between metamodel predictor variables and mapped outputs were generally consistent with expectations, e.g. with greater source concentrations and NO3- at the groundwater table in areas of intensive crop use and well drained soils. Shorter unsaturated zone travel times in poorly drained areas likely indicated preferential flow through clay soils, and a tendency for fine grained deposits to collocate with areas of shallower water table. Numerical estimates of groundwater recharge were important in the metamodels and may have been a proxy for N input and redox conditions in the northern FWP, which had shallow predicted NO3- extinction depth. The metamodel results provide proof-of-concept for regional characterization of unsaturated zone NO3- transport processes in a statistical framework based on readily mappable GIS input variables.
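A boosted-regression-tree metamodel of this kind can be sketched with scikit-learn's GradientBoostingRegressor; the well count and predictor count below match the study, but the data, hyperparameters, and response are synthetic placeholders:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import cross_val_score

# Hypothetical stand-in: mappable GIS predictors at sampled wells
# regressed on one VFM response (e.g., unsaturated-zone travel time).
rng = np.random.default_rng(6)
X = rng.random((129, 58))                      # 129 wells, 58 predictors
y = X[:, 0] * 3 + np.sin(4 * X[:, 1]) + rng.normal(0, 0.2, 129)

brt = GradientBoostingRegressor(n_estimators=300, learning_rate=0.05,
                                max_depth=3, subsample=0.7, random_state=0)
print("CV R2:", cross_val_score(brt, X, y, cv=5, scoring="r2").mean())
brt.fit(X, y)
top = np.argsort(brt.feature_importances_)[::-1][:5]
print("most influential predictors:", top)
```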
Perceptual organization of speech signals by children with and without dyslexia
Nittrouer, Susan; Lowenstein, Joanna H.
2013-01-01
Developmental dyslexia is a condition in which children encounter difficulty learning to read in spite of adequate instruction. Although considerable effort has been expended trying to identify the source of the problem, no single solution has been agreed upon. The current study explored a new hypothesis, that developmental dyslexia may be due to faulty perceptual organization of linguistically relevant sensory input. To test that idea, sentence-length speech signals were processed to create either sine-wave or noise-vocoded analogs. Seventy children between 8 and 11 years of age, with and without dyslexia participated. Children with dyslexia were selected to have phonological awareness deficits, although those without such deficits were retained in the study. The processed sentences were presented for recognition, and measures of reading, phonological awareness, and expressive vocabulary were collected. Results showed that children with dyslexia, regardless of phonological subtype, had poorer recognition scores than children without dyslexia for both kinds of degraded sentences. Older children with dyslexia recognized the sine-wave sentences better than younger children with dyslexia, but no such effect of age was found for the vocoded materials. Recognition scores were used as predictor variables in regression analyses with reading, phonological awareness, and vocabulary measures used as dependent variables. Scores for both sorts of sentence materials were strong predictors of performance on all three dependent measures when all children were included, but only performance for the sine-wave materials explained significant proportions of variance when only children with dyslexia were included. Finally, matching young, typical readers with older children with dyslexia on reading abilities did not mitigate the group difference in recognition of vocoded sentences. Conclusions were that children with dyslexia have difficulty organizing linguistically relevant sensory input, but learn to do so for the structure preserved by sine-wave signals before they do so for other sorts of signal structure. These perceptual organization deficits could account for difficulties acquiring refined linguistic representations, including those of a phonological nature, although ramifications are different across affected children. PMID:23702597
Two-Stage Variable Sample-Rate Conversion System
NASA Technical Reports Server (NTRS)
Tkacenko, Andre
2009-01-01
A two-stage variable sample-rate conversion (SRC) system has been proposed as part of a digital signal-processing system in a digital communication radio receiver that utilizes a variety of data rates. The proposed system would be used as an interface between (1) an analog-to-digital converter used in the front end of the receiver to sample an intermediate-frequency signal at a fixed input rate and (2) digitally implemented tracking loops in subsequent stages that operate at various sample rates that are generally lower than the input sample rate. This Two-Stage System would be capable of converting from an input sample rate to a desired lower output sample rate that could be variable and not necessarily a rational fraction of the input rate.
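A rough sketch of such a structure (assumed, not the proposed flight design): a fixed, efficient polyphase stage followed by an interpolation stage that hits an arbitrary, possibly irrational, output rate. Linear interpolation is used here purely for brevity; a real design would use a proper fractional-delay filter:

```python
import numpy as np
from scipy.signal import resample_poly

def two_stage_src(x, fs_in, fs_out):
    # Stage 1: efficient rational decimation (here a fixed /4).
    fs_mid = fs_in / 4
    y = resample_poly(x, up=1, down=4)
    # Stage 2: interpolate onto an arbitrary output time grid.
    t_mid = np.arange(len(y)) / fs_mid
    t_out = np.arange(0.0, t_mid[-1], 1 / fs_out)
    return np.interp(t_out, t_mid, y)

fs_in = 80_000.0
t = np.arange(0, 0.01, 1 / fs_in)
x = np.sin(2 * np.pi * 1000 * t)
y = two_stage_src(x, fs_in, fs_out=7333.3)  # not a rational fraction of fs_in
print(len(x), "->", len(y))
```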
Luo, Zhongkui; Feng, Wenting; Luo, Yiqi; Baldock, Jeff; Wang, Enli
2017-10-01
Soil organic carbon (SOC) dynamics are regulated by the complex interplay of climatic, edaphic and biotic conditions. However, the interrelation of SOC and these drivers and their potential connection networks are rarely assessed quantitatively. Using observations of SOC dynamics with detailed soil properties from 90 field trials at 28 sites under different agroecosystems across the Australian cropping regions, we investigated the direct and indirect effects of climate, soil properties, carbon (C) inputs and soil C pools (a total of 17 variables) on the SOC change rate (rC, Mg C ha(-1) yr(-1)). Among these variables, we found that the most influential variables on rC were the average C input amount, annual precipitation, and the total SOC stock at the beginning of the trials. Overall, C inputs (including C input amount and pasture frequency in the crop rotation system) accounted for 27% of the relative influence on rC, followed by climate 25% (including precipitation and temperature), soil C pools 24% (including pool size and composition) and soil properties (such as cation exchange capacity, clay content, bulk density) 24%. Path analysis identified a network of intercorrelations of climate, soil properties, C inputs and soil C pools in determining rC. The direct correlation of rC with climate was significantly weakened when the effects of soil properties and C pools were removed, and vice versa. These results reveal the relative importance of climate, soil properties, C inputs and C pools and their complex interconnections in regulating SOC dynamics. Ignorance of the impact of changes in soil properties, C pool composition and C input (quantity and quality) on SOC dynamics is likely one of the main sources of uncertainty in SOC predictions from process-based SOC models. © 2017 John Wiley & Sons Ltd.
NASA Astrophysics Data System (ADS)
Luk, K. C.; Ball, J. E.; Sharma, A.
2000-01-01
Artificial neural networks (ANNs), which emulate the parallel distributed processing of the human nervous system, have proven to be very successful in dealing with complicated problems, such as function approximation and pattern recognition. Due to their powerful capability and functionality, ANNs provide an alternative approach for many engineering problems that are difficult to solve by conventional approaches. Rainfall forecasting has been a difficult subject in hydrology due to the complexity of the physical processes involved and the variability of rainfall in space and time. In this study, ANNs were adopted to forecast short-term rainfall for an urban catchment. The ANNs were trained to recognise historical rainfall patterns as recorded from a number of gauges in the study catchment for reproduction of relevant patterns for new rainstorm events. The primary objective of this paper is to investigate the effect of temporal and spatial information on short-term rainfall forecasting. To achieve this aim, a comparison test on the forecast accuracy was made among the ANNs configured with different orders of lag and different numbers of spatial inputs. In developing the ANNs with alternative configurations, the ANNs were trained to an optimal level to achieve good generalisation of data. It was found in this study that the ANNs provided the most accurate predictions when an optimum number of spatial inputs was included into the network, and that the network with lower lag consistently produced better performance.
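The core data preparation, turning multi-gauge records into lagged input vectors for the ANN, can be sketched as follows; the lag order, gauge count, and target gauge are arbitrary placeholders:

```python
import numpy as np

def lagged_inputs(series, lag_order):
    # Build an ANN training matrix from multi-gauge rainfall records:
    # each row's inputs are the previous `lag_order` readings at every
    # gauge; the target is the next value at the gauge of interest.
    # `series` has shape (timesteps, gauges).
    T, _ = series.shape
    X = np.stack([series[t - lag_order:t].ravel()
                  for t in range(lag_order, T)])
    y = series[lag_order:, 0]          # forecast gauge 0 one step ahead
    return X, y

# Hypothetical record: 1000 time steps at 4 gauges.
rain = np.random.default_rng(7).random((1000, 4))
X, y = lagged_inputs(rain, lag_order=3)
print(X.shape, y.shape)  # (997, 12) (997,)
```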
Renton, David A.; Mushet, David M.; DeKeyser, Edward S.
2015-01-01
Prairie pothole wetlands offer crucial habitat for North America’s waterfowl populations. The wetlands also support an abundance of other species and provide ecological services valued by society. The hydrology of prairie pothole wetlands is dependent on atmospheric interactions. Therefore, changes to the region’s climate can have profound effects on wetland hydrology. The relevant literature related to climate change and upland management effects on prairie pothole wetland water levels and hydroperiods was reviewed. Climate change is widely expected to affect water levels and hydroperiods of prairie pothole wetlands, as well as the biota and ecological services that the wetlands support. In general, hydrologic model projections that incorporate future climate change scenarios forecast lower water levels in prairie pothole wetlands and longer periods spent in a dry condition, despite potential increases in precipitation. However, the extreme natural variability in climate and hydrology of prairie pothole wetlands necessitates caution when interpreting model results. Recent changes in weather patterns throughout much of the Prairie Pothole Region have been in increased precipitation that results in increased water inputs to wetlands above losses associated with warmer temperatures. However, observed precipitation increases are within the range of natural climate variability and therefore, may not persist. Identifying management techniques with the potential to affect water inputs to prairie pothole wetlands would provide increased options for managers when dealing with the uncertainties associated with a changing climate. Several grassland management techniques (for example, grazing and burning) have the potential to affect water levels and hydroperiods of prairie pothole wetlands by affecting infiltration, evapotranspiration, and snow deposition.
NASA Astrophysics Data System (ADS)
Rovinelli, Andrea; Guilhem, Yoann; Proudhon, Henry; Lebensohn, Ricardo A.; Ludwig, Wolfgang; Sangid, Michael D.
2017-06-01
Microstructurally small cracks exhibit large variability in their fatigue crack growth rate. It is accepted that the inherent variability in microstructural features is related to the uncertainty in the growth rate. However, due to (i) the lack of cycle-by-cycle experimental data, (ii) the complexity of the short crack growth phenomenon, and (iii) the incomplete physics of constitutive relationships, only empirical damage metrics have been postulated to describe the short crack driving force metric (SCDFM) at the mesoscale level. The identification of the SCDFM of polycrystalline engineering alloys is a critical need, in order to achieve more reliable fatigue life prediction and improve material design. In this work, the first steps in the development of a general probabilistic framework are presented; the framework uses experimental results as input, retrieves missing experimental data through crystal plasticity (CP) simulations, and extracts correlations utilizing machine learning and Bayesian networks (BNs). More precisely, experimental results representing cycle-by-cycle data of a short crack growing through a beta-metastable titanium alloy, VST-55531, have been acquired via phase and diffraction contrast tomography. These results serve as input for FFT-based CP simulations, which provide the micromechanical fields influenced by the presence of the crack, complementing the information available from the experiment. In order to assess the correlation between postulated SCDFM and experimental observations, the data is mined and analyzed utilizing BNs. Results show the ability of the framework to autonomously capture relevant correlations and the equivalence in the prediction capability of different postulated SCDFMs for the high cycle fatigue regime.
Innovations in Basic Flight Training for the Indonesian Air Force
1990-12-01
...microeconomic theory that could approximate the optimum mix of training hours between an aircraft and simulator, and therefore improve cost effectiveness... The microeconomic theory being used is normally employed when showing production with two variable inputs. An example of variable inputs would be labor...
A Bayesian approach to model structural error and input variability in groundwater modeling
NASA Astrophysics Data System (ADS)
Xu, T.; Valocchi, A. J.; Lin, Y. F. F.; Liang, F.
2015-12-01
Effective water resource management typically relies on numerical models to analyze groundwater flow and solute transport processes. Model structural error (due to simplification and/or misrepresentation of the "true" environmental system) and input forcing variability (which commonly arises since some inputs are uncontrolled or estimated with high uncertainty) are ubiquitous in groundwater models. Calibration that overlooks errors in model structure and input data can lead to biased parameter estimates and compromised predictions. We present a fully Bayesian approach for a complete assessment of uncertainty for spatially distributed groundwater models. The approach explicitly recognizes stochastic input and uses data-driven error models based on nonparametric kernel methods to account for model structural error. We employ exploratory data analysis to assist in specifying informative prior for error models to improve identifiability. The inference is facilitated by an efficient sampling algorithm based on DREAM-ZS and a parameter subspace multiple-try strategy to reduce the required number of forward simulations of the groundwater model. We demonstrate the Bayesian approach through a synthetic case study of surface-ground water interaction under changing pumping conditions. It is found that explicit treatment of errors in model structure and input data (groundwater pumping rate) has substantial impact on the posterior distribution of groundwater model parameters. Using error models reduces predictive bias caused by parameter compensation. In addition, input variability increases parametric and predictive uncertainty. The Bayesian approach allows for a comparison among the contributions from various error sources, which could inform future model improvement and data collection efforts on how to best direct resources towards reducing predictive uncertainty.
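The flavor of such joint inference can be conveyed with a toy random-walk Metropolis sampler (the study's DREAM-ZS sampler and kernel-based error models are far more elaborate). Because the toy observations constrain only the product of the parameter and the uncertain input, the prior on the input is what tames parameter compensation, the same effect discussed above. All numbers are invented:

```python
import numpy as np

rng = np.random.default_rng(8)
true_k, true_q = 1.5, 2.0                       # parameter, uncertain input
obs = true_k * true_q + rng.normal(0, 0.1, 20)  # toy head observations

def log_post(k, q):
    if k <= 0:                                  # flat positive prior on k
        return -np.inf
    lp = -0.5 * ((q - 2.2) / 0.3) ** 2          # prior on uncertain input q
    resid = obs - k * q                         # toy forward model
    return lp - 0.5 * np.sum((resid / 0.1) ** 2)

samples, state = [], np.array([1.0, 2.2])
lp = log_post(*state)
for _ in range(20_000):                         # random-walk Metropolis
    prop = state + rng.normal(0, 0.05, 2)
    lp_prop = log_post(*prop)
    if np.log(rng.random()) < lp_prop - lp:
        state, lp = prop, lp_prop
    samples.append(state.copy())
post = np.array(samples)[5000:]                 # discard burn-in
print("posterior means (k, q):", post.mean(axis=0))
```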
NASA Astrophysics Data System (ADS)
Zounemat-Kermani, Mohammad
2012-08-01
In this study, the ability of two models, multiple linear regression (MLR) and a Levenberg-Marquardt (LM) feed-forward neural network, to estimate the hourly dew point temperature was examined. Dew point temperature is the temperature at which water vapor in the air condenses into liquid. This temperature can be useful in estimating meteorological variables such as fog, rain, snow, dew, and evapotranspiration and in investigating agronomical issues such as stomatal closure in plants. The availability of hourly records of climatic data (air temperature, relative humidity and pressure) which could be used to predict dew point temperature initiated the practice of modeling. Additionally, the wind vector (wind speed magnitude and direction) and a conceptual input of weather condition were employed as other input variables. Three quantitative standard statistical performance evaluation measures, i.e. the root mean squared error, mean absolute error, and absolute logarithmic Nash-Sutcliffe efficiency coefficient (|Log(NS)|), were employed to evaluate the performances of the developed models. The results showed that applying the wind vector and weather condition as input vectors along with meteorological variables could slightly increase the ANN and MLR predictive accuracy. The results also revealed that LM-NN was superior to the MLR model, and the best performance was obtained by considering all potential input variables, in terms of the different evaluation criteria.
NASA Technical Reports Server (NTRS)
Meyn, Larry A.
2018-01-01
One of the goals of NASA's Revolutionary Vertical Lift Technology Project (RVLT) is to provide validated tools for multidisciplinary design, analysis and optimization (MDAO) of vertical lift vehicles. As part of this effort, the software package RotorCraft Optimization Tools (RCOTOOLS) is being developed to facilitate incorporating key rotorcraft conceptual design codes into optimizations using the OpenMDAO multidisciplinary optimization framework written in Python. RCOTOOLS, also written in Python, currently supports the incorporation of the NASA Design and Analysis of RotorCraft (NDARC) vehicle sizing tool and the Comprehensive Analytical Model of Rotorcraft Aerodynamics and Dynamics II (CAMRAD II) analysis tool into OpenMDAO-driven optimizations. Both of these tools use detailed, file-based inputs and outputs, so RCOTOOLS provides software wrappers to update input files with new design variable values, execute these codes and then extract specific response variable values from the file outputs. These wrappers are designed to be flexible and easy to use. RCOTOOLS also provides several utilities to aid in optimization model development, including Graphical User Interface (GUI) tools for browsing input and output files in order to identify text strings that are used to identify specific variables as optimization input and response variables. This paper provides an overview of RCOTOOLS and its use.
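The file-wrapper pattern described above can be sketched in a few lines of Python; the file names, tag strings, and executable below are hypothetical placeholders, not RCOTOOLS' actual interface:

```python
import re
import subprocess

def run_case(radius, template="rotor.in", exe="./analysis"):
    # Write a design variable into a text input deck by replacing a
    # tagged line (tag name assumed for illustration).
    text = open(template).read()
    text = re.sub(r"ROTOR_RADIUS\s*=\s*\S+",
                  f"ROTOR_RADIUS = {radius}", text)
    with open("case.in", "w") as f:
        f.write(text)
    # Execute the analysis code on the updated deck.
    subprocess.run([exe, "case.in"], check=True)
    # Parse the response variable back out of the output file.
    out = open("case.out").read()
    m = re.search(r"GROSS_WEIGHT\s*=\s*([-\d.Ee+]+)", out)
    return float(m.group(1))
```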
Soft Mixer Assignment in a Hierarchical Generative Model of Natural Scene Statistics
Schwartz, Odelia; Sejnowski, Terrence J.; Dayan, Peter
2010-01-01
Gaussian scale mixture models offer a top-down description of signal generation that captures key bottom-up statistical characteristics of filter responses to images. However, the pattern of dependence among the filters for this class of models is prespecified. We propose a novel extension to the gaussian scale mixture model that learns the pattern of dependence from observed inputs and thereby induces a hierarchical representation of these inputs. Specifically, we propose that inputs are generated by gaussian variables (modeling local filter structure), multiplied by a mixer variable that is assigned probabilistically to each input from a set of possible mixers. We demonstrate inference of both components of the generative model, for synthesized data and for different classes of natural images, such as a generic ensemble and faces. For natural images, the mixer variable assignments show invariances resembling those of complex cells in visual cortex; the statistics of the gaussian components of the model are in accord with the outputs of divisive normalization models. We also show how our model helps interrelate a wide range of models of image statistics and cortical processing. PMID:16999575
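Sampling from such a generative model is straightforward; the sketch below (mixer pool and assignment probabilities invented) draws a mixer per patch, multiplies it into local Gaussian draws, and shows the heavy-tailed statistics this induces:

```python
import numpy as np

rng = np.random.default_rng(9)
n_patches, n_filters, n_mixers = 1000, 16, 3
mixer_pool = np.array([0.5, 1.5, 4.0])   # assumed mixer values
assign_p = np.array([0.5, 0.3, 0.2])     # probabilistic assignment

z = mixer_pool[rng.choice(n_mixers, size=n_patches, p=assign_p)]
g = rng.normal(size=(n_patches, n_filters))  # local gaussian structure
x = z[:, None] * g                           # observed filter inputs

# The mixture produces the heavy tails typical of filter responses.
print("excess kurtosis:", ((x - x.mean())**4).mean() / x.var()**2 - 3)
```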
Rowe, Meredith L; Levine, Susan C; Fisher, Joan A; Goldin-Meadow, Susan
2009-01-01
Children with unilateral pre- or perinatal brain injury (BI) show remarkable plasticity for language learning. Previous work highlights the important role that lesion characteristics play in explaining individual variation in plasticity in the language development of children with BI. The current study examines whether the linguistic input that children with BI receive from their caregivers also contributes to this early plasticity, and whether linguistic input plays a similar role in children with BI as it does in typically developing (TD) children. Growth in vocabulary and syntactic production is modeled for 80 children (53 TD, 27 BI) between 14 and 46 months. Findings indicate that caregiver input is an equally potent predictor of vocabulary growth in children with BI and in TD children. In contrast, input is a more potent predictor of syntactic growth for children with BI than for TD children. Controlling for input, lesion characteristics (lesion size, type, seizure history) also affect the language trajectories of children with BI. Thus, findings illustrate how both variability in the environment (linguistic input) and variability in the organism (lesion characteristics) work together to contribute to plasticity in language learning.
GREAT: a gradient-based color-sampling scheme for Retinex.
Lecca, Michela; Rizzi, Alessandro; Serapioni, Raul Paolo
2017-04-01
Modeling the local color spatial distribution is a crucial step for the algorithms of the Milano Retinex family. Here we present GREAT, a novel, noise-free Milano Retinex implementation based on an image-aware spatial color sampling. For each channel of a color input image, GREAT computes a 2D set of edges whose magnitude exceeds a pre-defined threshold. Then GREAT re-scales the channel intensity of each image pixel, called target, by the average of the intensities of the selected edges weighted by a function of their positions, gradient magnitudes, and intensities relative to the target. In this way, GREAT enhances the input image, adjusting its brightness, contrast and dynamic range. The use of the edges as pixels relevant to color filtering is justified by the importance that edges play in human color sensation. The name GREAT comes from the expression "Gradient RElevAnce for ReTinex," which refers to the threshold-based definition of a gradient relevance map for edge selection and thus for image color filtering.
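A deliberately simplified single-channel sketch of the idea follows; it keeps only the distance weighting and drops the paper's gradient-magnitude and relative-intensity terms, so it is a caricature of GREAT rather than an implementation:

```python
import numpy as np

def great_channel(I, thresh=0.1, eps=1e-6):
    # Select high-gradient (edge) pixels, then rescale each pixel by
    # a distance-weighted average of the selected edge intensities.
    gy, gx = np.gradient(I)
    ey, ex = np.nonzero(np.hypot(gx, gy) > thresh)
    out = np.empty_like(I)
    for y in range(I.shape[0]):
        for x in range(I.shape[1]):
            d = np.hypot(ey - y, ex - x) + 1.0
            w = 1.0 / d                      # nearer edges weigh more
            ref = (w * I[ey, ex]).sum() / w.sum()
            out[y, x] = I[y, x] / (ref + eps)
    return np.clip(out, 0, 1)

img = np.random.default_rng(10).random((32, 32))
print(great_channel(img).shape)
```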
ERIC Educational Resources Information Center
Passow, Susanne; Müller, Maike; Westerhausen, René; Hugdahl, Kenneth; Wartenburger, Isabell; Heekeren, Hauke R.; Lindenberger, Ulman; Li, Shu-Chen
2013-01-01
Multitalker situations confront listeners with a plethora of competing auditory inputs, and hence require selective attention to relevant information, especially when the perceptual saliency of distracting inputs is high. This study augmented the classical forced-attention dichotic listening paradigm by adding an interaural intensity manipulation…
Aumentado-Armstrong, Tristan; Metzen, Michael G; Sproule, Michael K J; Chacron, Maurice J
2015-10-01
Neurons that respond selectively but in an invariant manner to a given feature of natural stimuli have been observed across species and systems. Such responses emerge in higher brain areas, thereby suggesting that they occur by integrating afferent input. However, the mechanisms by which such integration occurs are poorly understood. Here we show that midbrain electrosensory neurons can respond selectively and in an invariant manner to heterogeneity in behaviorally relevant stimulus waveforms. Such invariant responses were not seen in hindbrain electrosensory neurons providing afferent input to these midbrain neurons, suggesting that response invariance results from nonlinear integration of such input. To test this hypothesis, we built a model based on the Hodgkin-Huxley formalism that received realistic afferent input. We found that multiple combinations of parameter values could give rise to invariant responses matching those seen experimentally. Our model thus shows that there are multiple solutions towards achieving invariant responses and reveals how subthreshold membrane conductances help promote robust and invariant firing in response to heterogeneous stimulus waveforms associated with behaviorally relevant stimuli. We discuss the implications of our findings for the electrosensory and other systems.
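The conductance-based integration at issue can be illustrated with a textbook Hodgkin-Huxley neuron driven by noisy afferent-like current (standard squid-axon parameters, not the paper's fitted midbrain model):

```python
import numpy as np

def simulate(I, dt=0.01):
    # Forward-Euler Hodgkin-Huxley with classic parameters; dt in ms.
    gNa, gK, gL = 120.0, 36.0, 0.3      # mS/cm^2
    ENa, EK, EL = 50.0, -77.0, -54.4    # mV
    V, m, h, n = -65.0, 0.05, 0.6, 0.32
    Vs = np.empty(len(I))
    for t, i_ext in enumerate(I):
        am = 0.1 * (V + 40) / (1 - np.exp(-(V + 40) / 10))
        bm = 4.0 * np.exp(-(V + 65) / 18)
        ah = 0.07 * np.exp(-(V + 65) / 20)
        bh = 1.0 / (1 + np.exp(-(V + 35) / 10))
        an = 0.01 * (V + 55) / (1 - np.exp(-(V + 55) / 10))
        bn = 0.125 * np.exp(-(V + 65) / 80)
        m += dt * (am * (1 - m) - bm * m)
        h += dt * (ah * (1 - h) - bh * h)
        n += dt * (an * (1 - n) - bn * n)
        I_ion = (gNa * m**3 * h * (V - ENa) + gK * n**4 * (V - EK)
                 + gL * (V - EL))
        V += dt * (i_ext - I_ion)       # membrane capacitance 1 uF/cm^2
        Vs[t] = V
    return Vs

rng = np.random.default_rng(11)
I = 10.0 + 3.0 * rng.normal(size=50_000)  # noisy afferent drive, uA/cm^2
V = simulate(I)                           # 500 ms of membrane potential
print("spikes:", int(((V[1:] > 0) & (V[:-1] <= 0)).sum()))
```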
NASA Astrophysics Data System (ADS)
Guan, Jun; Xu, Xiaoyu; Xing, Lizhi
2018-03-01
The input-output table is comprehensive and detailed in describing national economic systems, with an abundance of economic relationships depicting information on supply and demand among industrial sectors. This paper focuses on how to quantify the degree of competition on the global value chain (GVC) from the perspective of econophysics. Global Industrial Strongest Relevant Network models are established by extracting the strongest and most immediate industrial relevance in the global economic system from inter-country input-output (ICIO) tables; these are then transformed into Global Industrial Resource Competition Network models to analyze competitive relationships based on a bibliographic coupling approach. Three indicators well suited to weighted, undirected networks with self-loops are introduced here: unit weight for competitive power, disparity in the weight for competitive amplitude, and weighted clustering coefficient for competitive intensity. Finally, these models and indicators are applied empirically to analyze the function of industrial sectors on the basis of the latest World Input-Output Database (WIOD), in order to reveal inter-sector competitive status during economic globalization.
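The three indicators can be sketched on a toy weighted, undirected network with a self-loop using networkx; the sector names, weights, and the specific indicator formulas used here (unit weight as strength over degree, the standard disparity measure) are assumptions for illustration:

```python
import networkx as nx

G = nx.Graph()
G.add_weighted_edges_from([("steel", "autos", 3.0), ("steel", "mining", 2.0),
                           ("autos", "mining", 1.0), ("steel", "steel", 0.5)])

for v in G.nodes:
    s = G.degree(v, weight="weight")          # node strength
    unit_weight = s / G.degree(v)             # competitive power
    disparity = sum((d["weight"] / s) ** 2    # competitive amplitude
                    for _, _, d in G.edges(v, data=True))
    print(v, round(unit_weight, 2), round(disparity, 2))

# Weighted clustering coefficient as competitive intensity.
print(nx.clustering(G, weight="weight"))
```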
Variable input observer for state estimation of high-rate dynamics
NASA Astrophysics Data System (ADS)
Hong, Jonathan; Cao, Liang; Laflamme, Simon; Dodson, Jacob
2017-04-01
High-rate systems operating in the 10 μs to 10 ms timescale are likely to experience damaging effects due to rapid environmental changes (e.g., turbulence, ballistic impact). Some of these systems could benefit from real-time state estimation to enable their full potential. Examples of such systems include blast mitigation strategies, automotive airbag technologies, and hypersonic vehicles. Particular challenges in high-rate state estimation include: 1) complex time-varying nonlinearities of the system (e.g., noise, uncertainty, and disturbance); 2) rapid environmental changes; and 3) the requirement of a high convergence rate. Here, we propose using a Variable Input Observer (VIO) concept to vary the input space as the event unfolds. When systems experience high-rate dynamics, rapid changes in the system occur. To investigate the VIO's potential, a VIO-based neuro-observer is constructed and studied using experimental data collected from a laboratory impact test. Results demonstrate that the input space is unique to different impact conditions, and that adjusting the input space throughout the dynamic event produces better estimates than a traditional fixed-input-space strategy.
T-ray relevant frequencies for osteosarcoma classification
NASA Astrophysics Data System (ADS)
Withayachumnankul, W.; Ferguson, B.; Rainsford, T.; Findlay, D.; Mickan, S. P.; Abbott, D.
2006-01-01
We investigate the classification of the T-ray response of normal human bone cells and human osteosarcoma cells, grown in culture. Given the magnitude and phase responses within a reliable spectral range as features for input vectors, a trained support vector machine can correctly classify the two cell types to some extent. Performance of the support vector machine is degraded by the curse of dimensionality, resulting from the comparatively large number of features in the input vectors. Feature subset selection methods are used to select only an optimal number of relevant features as inputs. As a result, an improvement in generalization performance is attainable, and the selected frequencies can be used to further describe the different mechanisms by which the cells respond to T-rays. We demonstrate a consistent classification accuracy of 89.6% while only one fifth of the original features are retained in the data set.
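A pipeline of this shape (feature subset selection feeding a support vector machine) can be sketched with scikit-learn; the synthetic spectra below stand in for the T-ray magnitude/phase features, and keeping 20 of 100 features mimics the one-fifth retention reported:

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Hypothetical spectra: 100 frequency features, two cell classes.
rng = np.random.default_rng(12)
X = rng.normal(size=(80, 100))
y = rng.integers(0, 2, 80)
X[y == 1, :10] += 0.8                 # a few informative frequencies

clf = make_pipeline(StandardScaler(),
                    SelectKBest(f_classif, k=20),  # feature subset selection
                    SVC(kernel="rbf", C=1.0))
print("CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```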
Neuroprosthetics and the science of patient input.
Benz, Heather L; Civillico, Eugene F
2017-01-01
Safe and effective neuroprosthetic systems are of great interest to both DARPA and CDRH, due to their innovative nature and their potential to aid severely disabled populations. By expanding what is possible in human-device interaction, these devices introduce new potential benefits and risks. Therefore patient input, which is increasingly important in weighing benefits and risks, is particularly relevant for this class of devices. FDA has been a significant contributor to an ongoing stakeholder conversation about the inclusion of the patient voice, working collaboratively to create a new framework for a patient-centered approach to medical device development. This framework is evolving through open dialogue with researcher and patient communities, investment in the science of patient input, and policymaking that is responsive to patient-centered data throughout the total product life cycle. In this commentary, we will discuss recent developments in patient-centered benefit-risk assessment and their relevance to the development of neural prosthetic systems. Published by Elsevier Inc.
Fleming, Brandon J.; LaMotte, Andrew E.; Sekellick, Andrew J.
2013-01-01
Hydrogeologic regions in the fractured rock area of Maryland were classified using geographic information system tools with principal components and cluster analyses. A study area consisting of the 8-digit Hydrologic Unit Code (HUC) watersheds with rivers that flow through the fractured rock area of Maryland and bounded by the Fall Line was further subdivided into 21,431 catchments from the National Hydrography Dataset Plus. The catchments were then used as a common hydrologic unit to compile relevant climatic, topographic, and geologic variables. A principal components analysis was performed on 10 input variables, and 4 principal components that accounted for 83 percent of the variability in the original data were identified. A subsequent cluster analysis grouped the catchments based on four principal component scores into six hydrogeologic regions. Two crystalline rock hydrogeologic regions, including large parts of the Washington, D.C. and Baltimore metropolitan regions that represent over 50 percent of the fractured rock area of Maryland, are distinguished by differences in recharge, Precipitation minus Potential Evapotranspiration, sand content in soils, and groundwater contributions to streams. This classification system will provide a georeferenced digital hydrogeologic framework for future investigations of groundwater availability in the fractured rock area of Maryland.
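The PCA-then-cluster workflow reads naturally as a two-stage pipeline. A sketch with random stand-in values (the study's dimensions, 21,431 catchments by 10 variables with 4 retained components and 6 regions, are real; the data below are not):

```python
# Illustrative two-stage classification: PCA to compress correlated
# catchment variables, then k-means clustering on the component scores.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
catchments = rng.normal(size=(21431, 10))  # climatic/topographic/geologic variables

scores = PCA(n_components=4).fit_transform(
    StandardScaler().fit_transform(catchments))
# Four components retained (~83% of variance in the study); six regions.
regions = KMeans(n_clusters=6, n_init=10, random_state=0).fit_predict(scores)
print(np.bincount(regions))                # catchments per hydrogeologic region
```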
Dooley, Christopher J; Tenore, Francesco V; Gayzik, F Scott; Merkle, Andrew C
2018-04-27
Biological tissue testing is inherently subject to wide variability from specimen to specimen. A primary resource for encapsulating this range of variability is the biofidelity response corridor, or BRC. In the field of injury biomechanics, BRCs are often used for the development and validation of both physical models, such as anthropomorphic test devices, and computational models. For the purpose of generating corridors, post-mortem human surrogates were tested across a range of loading conditions relevant to under-body blast events. To sufficiently cover the wide range of input conditions, a relatively small number of tests was performed across a large spread of conditions. The high volume of required testing called for leveraging the capabilities of multiple impact test facilities, all with slight variations in test devices. A method for assessing the similitude of responses between test devices was created as a metric for inclusion of a response in the resulting BRC. The goal of this method was to supply a statistically sound, objective way to assess the similitude of an individual response against a set of responses, to ensure that the BRC created from the set was affected primarily by biological variability, not by anomalies or differences stemming from test devices. Copyright © 2018 Elsevier Ltd. All rights reserved.
Hu, Qinglei
2007-10-01
This paper presents a dual-stage control system design method for flexible spacecraft attitude maneuvering control using on-off thrusters, with active vibration control by input shaping. In this design approach, the attitude control system and the vibration suppression were designed separately using lower-order models. As a stepping stone, an integral variable structure controller, designed under the assumption that the upper bounds of the mismatched lumped perturbation are known, ensures exponential convergence of the attitude angle and angular velocity in the presence of bounded uncertainty/disturbances. To reconstruct estimates of the system states for use in a full-information variable structure control law, an asymptotic variable structure observer is also employed. In addition, the thruster output is modulated in pulse-width pulse-frequency form so that the output profile is similar to continuous control histories. For actively suppressing the induced vibration, the input shaping technique is used to modify the existing command so that less vibration is caused by the command itself; it requires only information about the vibration frequency and damping of the closed-loop system. The rationale behind this hybrid control scheme is that the integral variable structure controller can achieve good precision pointing, even in the presence of uncertainties/disturbances, whereas the shaped input attenuator actively suppresses the undesirable vibrations excited by rapid maneuvers. Simulation results for the spacecraft model show precise attitude control and vibration suppression.
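The abstract does not name the specific shaper used; the sketch below assumes the simplest zero-vibration (ZV) two-impulse design, which needs exactly the two quantities mentioned, the closed-loop vibration frequency and damping ratio.

```python
# Minimal zero-vibration (ZV) input shaper sketch; the paper's exact
# shaper design is not specified, so this is an illustrative stand-in.
import numpy as np

def zv_shaper(freq_hz, zeta):
    """Return impulse times (s) and amplitudes for a two-impulse ZV shaper."""
    wd = 2 * np.pi * freq_hz * np.sqrt(1 - zeta**2)  # damped natural frequency
    K = np.exp(-zeta * np.pi / np.sqrt(1 - zeta**2))
    times = np.array([0.0, np.pi / wd])              # second impulse at half the damped period
    amps = np.array([1.0, K]) / (1.0 + K)            # amplitudes sum to one
    return times, amps

# Example: a 0.5 Hz flexible mode with 2% damping (values assumed).
t, a = zv_shaper(0.5, 0.02)
print(t, a)  # convolving these impulses with the command shapes it
```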
A waste characterisation procedure for ADM1 implementation based on degradation kinetics.
Girault, R; Bridoux, G; Nauleau, F; Poullain, C; Buffet, J; Steyer, J-P; Sadowski, A G; Béline, F
2012-09-01
In this study, a procedure accounting for degradation kinetics was developed to split the total COD of a substrate into each input state variable required for Anaerobic Digestion Model n°1. The procedure is based on the combination of batch experimental degradation tests ("anaerobic respirometry") and numerical interpretation of the results obtained (optimisation of the ADM1 input state variable set). The effects of the main operating parameters, such as the substrate to inoculum ratio in batch experiments and the origin of the inoculum, were investigated. Combined with biochemical fractionation of the total COD of substrates, this method enabled determination of an ADM1-consistent input state variable set for each substrate with affordable identifiability. The substrate to inoculum ratio in the batch experiments and the origin of the inoculum influenced input state variables. However, based on results modelled for a CSTR fed with the substrate concerned, these effects were not significant. Indeed, if the optimal ranges of these operational parameters are respected, uncertainty in COD fractionation is mainly limited to temporal variability of the properties of the substrates. As the method is based on kinetics and is easy to implement for a wide range of substrates, it is a very promising way to numerically predict the effect of design parameters on the efficiency of an anaerobic CSTR. This method thus promotes the use of modelling for the design and optimisation of anaerobic processes. Copyright © 2012 Elsevier Ltd. All rights reserved.
Probabilistic dose-response modeling: case study using dichloromethane PBPK model results.
Marino, Dale J; Starr, Thomas B
2007-12-01
A revised assessment of dichloromethane (DCM) has recently been reported that examines the influence of human genetic polymorphisms on cancer risks using deterministic PBPK and dose-response modeling in the mouse combined with probabilistic PBPK modeling in humans. This assessment utilized Bayesian techniques to optimize kinetic variables in mice and humans, with mean values from posterior distributions used in the deterministic modeling in the mouse. To supplement this research, a case study was undertaken to examine the potential impact of probabilistic rather than deterministic PBPK and dose-response modeling in mice on subsequent unit risk factor (URF) determinations. Four separate PBPK cases were examined based on the exposure regimen of the NTP DCM bioassay. These were (a) Same Mouse (single draw of all PBPK inputs for both treatment groups); (b) Correlated BW-Same Inputs (single draw of all PBPK inputs for both treatment groups except for bodyweights (BWs), which were entered as correlated variables); (c) Correlated BW-Different Inputs (separate draws of all PBPK inputs for both treatment groups except that BWs were entered as correlated variables); and (d) Different Mouse (separate draws of all PBPK inputs for both treatment groups). Monte Carlo PBPK inputs reflect posterior distributions from Bayesian calibration in the mouse that had been previously reported. A minimum of 12,500 PBPK iterations were undertaken, in which dose metrics, i.e., mg DCM metabolized by the GST pathway/L tissue/day for lung and liver, were determined. For dose-response modeling, these metrics were combined with NTP tumor incidence data that were randomly selected from binomial distributions. Resultant potency factors (0.1/ED(10)) were coupled with probabilistic PBPK modeling in humans that incorporated genetic polymorphisms to derive URFs. Results show that there was relatively little difference, i.e., <10%, in central tendency and upper percentile URFs, regardless of the case evaluated. Independent draws of PBPK inputs resulted in slightly higher URFs. Results were also comparable to corresponding values from the previously reported deterministic mouse PBPK and dose-response modeling approach that used LED(10)s to derive potency factors. This finding indicated that the adjustment from ED(10) to LED(10) in the deterministic approach for DCM compensated for variability resulting from probabilistic PBPK and dose-response modeling in the mouse. Finally, results show a similar degree of variability in DCM risk estimates from a number of different sources, including the current effort, even though these estimates were developed using very different techniques. Given the variety of different approaches involved, 95th percentile-to-mean risk estimate ratios of 2.1-4.1 represent reasonable bounds on variability estimates regarding probabilistic assessments of DCM.
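As a hedged illustration of the sampling schemes being contrasted (the correlated-BW cases (b)/(c) versus the independent-draw case (d)), the sketch below draws mouse body weights for the two treatment groups either as correlated variables or independently; all distribution parameters are invented.

```python
# Sketch of correlated versus independent body-weight (BW) draws between
# treatment groups in a Monte Carlo PBPK loop. Values are illustrative only.
import numpy as np

rng = np.random.default_rng(42)
n_iter = 12500
mu, sd, rho = 0.030, 0.003, 0.9     # kg; assumed mouse BW mean, SD, correlation

# Correlated BW: group weights drawn jointly with correlation rho.
cov = sd**2 * np.array([[1.0, rho], [rho, 1.0]])
bw_corr = rng.multivariate_normal([mu, mu], cov, size=n_iter)

# Independent BW: separate draws for each treatment group.
bw_indep = rng.normal(mu, sd, size=(n_iter, 2))

print(np.corrcoef(bw_corr.T)[0, 1], np.corrcoef(bw_indep.T)[0, 1])
```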
Critical dynamics on a large human Open Connectome network
NASA Astrophysics Data System (ADS)
Ódor, Géza
2016-12-01
Extended numerical simulations of threshold models have been performed on a human brain network with N = 836,733 connected nodes available from the Open Connectome Project. While in the case of simple threshold models a sharp discontinuous phase transition without any critical dynamics arises, variable threshold models exhibit extended power-law scaling regions. This is attributed to the fact that Griffiths effects, stemming from the topological or interaction heterogeneity of the network, can become relevant if the input sensitivity of nodes is equalized. I have studied the effects of link directness, as well as the consequences of inhibitory connections. Nonuniversal power-law avalanche size and time distributions have been found, with exponents agreeing with the values obtained in electrode experiments on the human brain. The dynamical critical region occurs in an extended control parameter space without the assumption of self-organized criticality.
Mercury Content of Sediments in East Fork Poplar Creek: Current Assessment and Past Trends
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brooks, Scott C.; Eller, Virginia A.; Dickson, Johnbull O.
2017-01-01
This study provided new information on sediment mercury (Hg) and monomethylmercury (MMHg) content and chemistry. The current inventory of Hg in East Fork Poplar Creek (EFPC) bed sediments was estimated to be 334 kg, which represents a ~67% decrease relative to the initial investigations in 1984. MMHg sediment inventory was estimated to be 44.1 g, lower but roughly similar to past estimates. The results support the relevance and potential impacts of other active and planned investigations within the Mercury Remediation Technology Development for Lower East Fork Poplar Creek project (e.g., assessment and control of bank soil inputs, sorbents for Hg and MMHg removal, re-introduction of freshwater clams to EFPC), and identify gaps in current understanding that represent opportunities to understand controlling variables that may inform future technology development studies.
Variable Delay Element For Jitter Control In High Speed Data Links
Livolsi, Robert R.
2002-06-11
A circuit and method for decreasing the amount of jitter present at the receiver input of high-speed data links, which uses a driver circuit for input from a high-speed data link comprising a logic circuit having a first section (1) which provides data latches; a second section (2) which provides a circuit that generates a pre-distorted output to compensate for level-dependent jitter, having an OR function element and a NOR function element, each of which is coupled to two inputs and to a variable delay element that provides a bi-modal delay for pulse-width pre-distortion; a third section (3) which provides a muxing circuit; and a fourth section (4) for clock distribution in the driver circuit. A fifth section is used for logic testing the driver circuit.
A liquid lens switching-based motionless variable fiber-optic delay line
NASA Astrophysics Data System (ADS)
Khwaja, Tariq Shamim; Reza, Syed Azer; Sheikh, Mumtaz
2018-05-01
We present a Variable Fiber-Optic Delay Line (VFODL) module capable of imparting long variable delays by switching an input optical/RF signal between Single Mode Fiber (SMF) patch cords of different lengths through a pair of Electronically Controlled Tunable Lenses (ECTLs), resulting in a polarization-independent operation. Depending on the intended application, the lengths of the SMFs can be chosen accordingly to achieve the desired VFODL operating dynamic range. If so desired, the state of the input signal polarization can be preserved with the use of commercially available polarization-independent ECTLs along with polarization-maintaining SMFs (PM-SMFs), resulting in an output polarization that is identical to the input. An ECTL-based design also improves power consumption and repeatability. The delay switching mechanism is electronically controlled, involves no bulk moving parts, and can be fully automated. The VFODL module is compact due to the use of small optical components and SMFs that can be packaged compactly.
Sensitivity analysis and nonlinearity assessment of steam cracking furnace process
NASA Astrophysics Data System (ADS)
Rosli, M. N.; Sudibyo, Aziz, N.
2017-11-01
In this paper, a sensitivity analysis and nonlinearity assessment of a steam cracking furnace process are presented. For the sensitivity analysis, the fractional factorial design method is employed to analyze the effect of the input parameters, which consist of four manipulated variables and two disturbance variables, on the output variables, and to identify the interactions between the parameters. The result of the factorial design is used for screening to reduce the number of parameters and, subsequently, the complexity of the model. It shows that, out of six input parameters, four are significant. After the screening is completed, step tests are performed on the significant input parameters to assess the degree of nonlinearity of the system. The result shows that the system is highly nonlinear with respect to changes in the air-to-fuel ratio (AFR) and feed composition.
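A sketch of the screening idea, assuming a full 2^6 two-level design for simplicity (the paper uses a fractional design) and a placeholder response function in place of the furnace model:

```python
# Two-level factorial screening: estimate main effects of six inputs
# (4 manipulated + 2 disturbance variables) on one output.
import itertools
import numpy as np

levels = np.array(list(itertools.product([-1, 1], repeat=6)))  # full 2^6 design

def process(x):
    # Placeholder response; AFR-like and feed-composition-like terms dominate.
    return 3.0 * x[0] + 2.5 * x[5] + 0.8 * x[1] + 0.7 * x[2] + 0.05 * x[3] + 0.02 * x[4]

y = np.array([process(run) for run in levels])
# Main effect of input i = mean(y at +1) - mean(y at -1) = contrast / (N/2).
main_effects = levels.T @ y / (len(levels) / 2)
print(np.round(main_effects, 2))  # large magnitudes flag the significant inputs
```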
1984-11-01
model, all linked to input-output flows from the relevant sector to all of the eleven sectors. They form an important analytical tool for gauging the...with very little in the way of intermediate inputs. However, most of coal is used up in intermediate inputs, as fuel for steel and other industrial...machine tools be developed even more rapidly as a domestic resource, rather than being imported? What about agricultural chemicals? China is unlikely
Acknowledging patient heterogeneity in economic evaluation : a systematic literature review.
Grutters, Janneke P C; Sculpher, Mark; Briggs, Andrew H; Severens, Johan L; Candel, Math J; Stahl, James E; De Ruysscher, Dirk; Boer, Albert; Ramaekers, Bram L T; Joore, Manuela A
2013-02-01
Patient heterogeneity is the part of variability that can be explained by certain patient characteristics (e.g. age, disease stage). Population reimbursement decisions that acknowledge patient heterogeneity could potentially save money and increase population health. To date, however, economic evaluations pay only limited attention to patient heterogeneity. The objective of the present paper is to provide a comprehensive overview of the current knowledge regarding patient heterogeneity within economic evaluation of healthcare programmes. A systematic literature review was performed to identify methodological papers on the topic of patient heterogeneity in economic evaluation. Data were obtained using a keyword search of the PubMed database and manual searches. Handbooks were also included. Relevant data were extracted regarding potential sources of patient heterogeneity, in which of the input parameters of an economic evaluation these occur, methods to acknowledge patient heterogeneity and specific concerns associated with this acknowledgement. A total of 20 articles and five handbooks were included. The relevant sources of patient heterogeneity (demographics, preferences and clinical characteristics) and the input parameters where they occurred (baseline risk, treatment effect, health state utility and resource utilization) were combined in a framework. Methods were derived for the design, analysis and presentation phases of an economic evaluation. Concerns related mainly to the danger of false-positive results and equity issues. By systematically reviewing current knowledge regarding patient heterogeneity within economic evaluations of healthcare programmes, we provide guidance for future economic evaluations. Guidance is provided on which sources of patient heterogeneity to consider, how to acknowledge them in economic evaluation and potential concerns. The improved acknowledgement of patient heterogeneity in future economic evaluations may well improve the efficiency of healthcare.
Pérez-Izquierdo, Leticia; Zabal-Aguirre, Mario; Flores-Rentería, Dulce; González-Martínez, Santiago C; Buée, Marc; Rincón, Ana
2017-04-01
Fungi provide relevant ecosystem services, contributing to primary productivity and the cycling of nutrients in forests. These fungal inputs can be decisive for the resilience of Mediterranean forests under global change scenarios, making in-depth knowledge of how fungal communities operate in these ecosystems necessary. By using high-throughput sequencing and enzymatic approaches, we studied the fungal communities associated with three genotypic variants of Pinus pinaster trees in 45-year-old common garden plantations. We aimed to determine the impact of biotic (i.e., tree genotype) and abiotic (i.e., season, site) factors on the fungal community structure, and to explore whether structural shifts triggered functional responses affecting relevant ecosystem processes. Tree genotype and spatial-temporal factors were pivotal in structuring fungal communities, mainly by influencing their assemblage and selecting certain fungi. Diversity variations of the total fungal community and of specific fungal guilds, together with edaphic properties and tree productivity, explained relevant ecosystem services such as processes involved in carbon turnover and phosphorus mobilization. A mechanistic model integrating the relations of these variables and ecosystem functional outcomes is provided. Our results highlight the importance of structural shifts in fungal communities because they may have functional consequences for key ecosystem processes in Mediterranean forests. © 2017 Society for Applied Microbiology and John Wiley and Sons Ltd.
Construct Relevant and Irrelevant Variables in Math Problem Solving Assessment
ERIC Educational Resources Information Center
Birk, Lisa E.
2013-01-01
In this study, I examined the relation between various construct relevant and irrelevant variables and a math problem solving assessment. I used independent performance measures representing the variables of mathematics content knowledge, general ability, and reading fluency. Non-performance variables included gender, socioeconomic status,…
A Sensitivity Analysis Method to Study the Behavior of Complex Process-based Models
NASA Astrophysics Data System (ADS)
Brugnach, M.; Neilson, R.; Bolte, J.
2001-12-01
The use of process-based models as a tool for scientific inquiry is becoming increasingly relevant in ecosystem studies. Process-based models are artificial constructs that simulate the system by mechanistically mimicking the functioning of its component processes. Structurally, a process-based model can be characterized in terms of its processes and the relationships established among them. Each process comprises a set of functional relationships among several model components (e.g., state variables, parameters and input data). While not encoded explicitly, the dynamics of the model emerge from this set of components and interactions organized in terms of processes. It is the task of the modeler to guarantee that the dynamics generated are appropriate and semantically equivalent to the phenomena being modeled. Despite the availability of techniques to characterize and understand model behavior, they do not suffice to completely and easily understand how a complex process-based model operates. For example, sensitivity analysis studies model behavior by determining the rate of change in model output as parameters or input data are varied. One of the problems with this approach is that it considers the model as a "black box", and it focuses on explaining model behavior by analyzing the input-output relationship. Since these models have a high degree of non-linearity, understanding how the input affects an output can be an extremely difficult task. Operationally, the application of this technique may constitute a challenging task because complex process-based models are generally characterized by a large parameter space. In order to overcome some of these difficulties, we propose a sensitivity analysis method applicable to complex process-based models. This method focuses sensitivity analysis at the process level, and it aims to determine how sensitive the model output is to variations in the processes. Once the processes that exert the major influence on the output are identified, the causes of its variability can be found. Some of the advantages of this approach are that it reduces the dimensionality of the search space, facilitates the interpretation of the results, and provides information that allows exploration of uncertainty at the process level and of how it might affect model output. We present an example using the vegetation model BIOME-BGC.
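A conceptual sketch of sensitivity analysis at the process level rather than the parameter level: parameters are grouped by the process they belong to and perturbed jointly, and the spread of the output is compared across processes. The toy model and groupings below are assumptions, not BIOME-BGC.

```python
# Process-level sensitivity sketch: jointly perturb the parameters of one
# process at a time and compare the resulting output variability.
import numpy as np

rng = np.random.default_rng(7)

def model(p):
    photosynthesis = p[0] * p[1]        # process 1: two coupled parameters
    respiration = np.exp(0.1 * p[2])    # process 2: one parameter
    return photosynthesis - respiration  # net output

nominal = np.array([1.0, 2.0, 3.0])
groups = {"photosynthesis": [0, 1], "respiration": [2]}

for name, idx in groups.items():
    outputs = []
    for _ in range(1000):
        p = nominal.copy()
        p[idx] *= rng.uniform(0.9, 1.1, size=len(idx))  # +/-10% joint perturbation
        outputs.append(model(p))
    print(name, np.std(outputs))  # larger spread = more influential process
```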
Method and system to estimate variables in an integrated gasification combined cycle (IGCC) plant
Kumar, Aditya; Shi, Ruijie; Dokucu, Mustafa
2013-09-17
System and method to estimate variables in an integrated gasification combined cycle (IGCC) plant are provided. The system includes a sensor suite to measure respective plant input and output variables. An extended Kalman filter (EKF) receives sensed plant input variables and includes a dynamic model to generate a plurality of plant state estimates and a covariance matrix for the state estimates. A preemptive-constraining processor is configured to preemptively constrain the state estimates and covariance matrix to be free of constraint violations. A measurement-correction processor may be configured to correct constrained state estimates and a constrained covariance matrix based on processing of sensed plant output variables. The measurement-correction processor is coupled to update the dynamic model with corrected state estimates and a corrected covariance matrix. The updated dynamic model may be configured to estimate values for at least one plant variable not originally sensed by the sensor suite.
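A much-simplified sketch of the idea, assuming a scalar plant model and box constraints enforced by clipping; the patent does not specify the constraint-handling mathematics, so this is an illustration of the predict, constrain, then measurement-correct sequence, not the IGCC implementation.

```python
# Toy constrained EKF step: predict, preemptively constrain the state
# estimate and covariance, then apply the measurement correction.
import numpy as np

def constrained_ekf_step(x, P, u, z, f, h, F, H, Q, R, lo, hi):
    # Predict with the dynamic model.
    x_pred = f(x, u)
    P_pred = F * P * F + Q
    # Preemptively constrain (clip state, keep covariance positive).
    x_pred = np.clip(x_pred, lo, hi)
    P_pred = max(P_pred, 1e-9)
    # Measurement correction from the sensed plant output.
    K = P_pred * H / (H * P_pred * H + R)
    x_new = x_pred + K * (z - h(x_pred))
    P_new = (1.0 - K * H) * P_pred
    return np.clip(x_new, lo, hi), P_new

x, P = 1.0, 0.5
for z in [1.2, 1.1, 0.9]:   # made-up sensor readings
    x, P = constrained_ekf_step(x, P, u=0.0, z=z,
                                f=lambda x, u: 0.95 * x + u, h=lambda x: x,
                                F=0.95, H=1.0, Q=0.01, R=0.1, lo=0.0, hi=10.0)
print(x, P)
```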
Code of Federal Regulations, 2014 CFR
2014-07-01
... the updated factors and input data sets from the supporting data systems used, including: (1) The In... Determination. (b) The CRA report, including relevant data on international mail services; (c) The Cost Segments and Components (CSC) report; (d) All input data and processing programs used to produce the CRA report...
Code of Federal Regulations, 2011 CFR
2011-07-01
... the updated factors and input data sets from the supporting data systems used, including: (1) The In... Determination. (b) The CRA report, including relevant data on international mail services; (c) The Cost Segments and Components (CSC) report; (d) All input data and processing programs used to produce the CRA report...
Code of Federal Regulations, 2013 CFR
2013-07-01
... the updated factors and input data sets from the supporting data systems used, including: (1) The In... Determination. (b) The CRA report, including relevant data on international mail services; (c) The Cost Segments and Components (CSC) report; (d) All input data and processing programs used to produce the CRA report...
Code of Federal Regulations, 2012 CFR
2012-07-01
... the updated factors and input data sets from the supporting data systems used, including: (1) The In... Determination. (b) The CRA report, including relevant data on international mail services; (c) The Cost Segments and Components (CSC) report; (d) All input data and processing programs used to produce the CRA report...
Some Thoughts on the Matter of Self-Determination and Will.
ERIC Educational Resources Information Center
Deci, Edward L.
"Will" is defined in this paper as the capacity to decide how to behave based on a processing of relevant information. A sequence of motivated behavior begins with informational inputs or stimuli. These come from three sources: the environment, one's physiology, and one's memory. These inputs lead to the formation of motives or awareness of a…
Metamodeling and mapping of nitrate flux in the unsaturated zone and groundwater, Wisconsin, USA
Nolan, Bernard T.; Green, Christopher T.; Juckem, Paul F.; Liao, Lixia; Reddy, James E.
2018-01-01
Nitrate contamination of groundwater in agricultural areas poses a major challenge to the sustainability of water resources. Aquifer vulnerability models are useful tools that can help resource managers identify areas of concern, but quantifying nitrogen (N) inputs in such models is challenging, especially at large spatial scales. We sought to improve regional nitrate (NO3−) input functions by characterizing unsaturated zone NO3− transport to groundwater through use of surrogate, machine-learning metamodels of a process-based N flux model. The metamodels used boosted regression trees (BRTs) to relate mappable landscape variables to parameters and outputs of a previous “vertical flux method” (VFM) applied at sampled wells in the Fox, Wolf, and Peshtigo (FWP) river basins in northeastern Wisconsin. In this context, the metamodels upscaled the VFM results throughout the region, and the VFM parameters and outputs are the metamodel response variables. The study area encompassed the domain of a detailed numerical model that provided additional predictor variables, including groundwater recharge, to the metamodels. We used a statistical learning framework to test a range of model complexities to identify suitable hyperparameters of the six BRT metamodels corresponding to each response variable of interest: NO3− source concentration factor (which determines the local NO3− input concentration); unsaturated zone travel time; NO3− concentration at the water table in 1980, 2000, and 2020 (three separate metamodels); and NO3− “extinction depth”, the eventual steady state depth of the NO3−front. The final metamodels were trained to 129 wells within the active numerical flow model area, and considered 58 mappable predictor variables compiled in a geographic information system (GIS). These metamodels had training and cross-validation testing R2 values of 0.52 – 0.86 and 0.22 – 0.38, respectively, and predictions were compiled as maps of the above response variables. Testing performance was reasonable, considering that we limited the metamodel predictor variables to mappable factors as opposed to using all available VFM input variables. Relationships between metamodel predictor variables and mapped outputs were generally consistent with expectations, e.g. with greater source concentrations and NO3− at the groundwater table in areas of intensive crop use and well drained soils. Shorter unsaturated zone travel times in poorly drained areas likely indicated preferential flow through clay soils, and a tendency for fine grained deposits to collocate with areas of shallower water table. Numerical estimates of groundwater recharge were important in the metamodels and may have been a proxy for N input and redox conditions in the northern FWP, which had shallow predicted NO3− extinction depth. The metamodel results provide proof-of-concept for regional characterization of unsaturated zone NO3− transport processes in a statistical framework based on readily mappable GIS input variables.
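A minimal sketch of one such metamodel, assuming scikit-learn's GradientBoostingRegressor as the BRT implementation and synthetic stand-in data with the study's dimensions (129 training wells, 58 mappable predictors); the actual VFM responses and GIS layers are not reproduced here.

```python
# BRT metamodel sketch: learn one VFM response variable from mappable
# predictors at sampled wells, then predict across the whole region.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
X_wells = rng.normal(size=(129, 58))   # 129 wells x 58 GIS predictors (synthetic)
y_vfm = 2.0 * X_wells[:, 0] + rng.normal(scale=0.5, size=129)  # stand-in VFM output

brt = GradientBoostingRegressor(n_estimators=300, learning_rate=0.05, max_depth=3)
print(cross_val_score(brt, X_wells, y_vfm, cv=5, scoring="r2").mean())

brt.fit(X_wells, y_vfm)
X_region = rng.normal(size=(10000, 58))  # mappable predictors across the region
upscaled = brt.predict(X_region)         # regional metamodel predictions
```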
Fichez, R; Chifflet, S; Douillet, P; Gérard, P; Gutierrez, F; Jouon, A; Ouillon, S; Grenz, C
2010-01-01
Considering the growing concern about the impact of anthropogenic inputs on coral reefs and coral reef lagoons, surprisingly little attention has been given to the relationship between those inputs and the trophic status of lagoon waters. The present paper describes the distribution of biogeochemical parameters in the coral reef lagoon of New Caledonia, where environmental conditions allegedly range from pristine oligotrophic to anthropogenically influenced. The study objectives were to: (i) identify terrigeneous and anthropogenic inputs and propose a typology of lagoon waters; (ii) determine the temporal variability of water biogeochemical parameters at time scales ranging from hours to seasons. Combined PCA-cluster analyses revealed that over the 2000 km(2) lagoon area around the city of Nouméa, "natural" terrigeneous versus oceanic influences affecting all stations only accounted for less than 20% of the spatial variability, whereas 60% of that spatial variability could be attributed to significant eutrophication of a limited number of inshore stations. The PCA analysis allowed unambiguous discrimination between the natural trophic enrichment along the offshore-inshore gradient and anthropogenically induced eutrophication. High temporal variability in dissolved inorganic nutrient concentrations strongly hindered their use as indicators of environmental status. Due to longer turnover times, particulate organic material, and more specifically chlorophyll a, appeared as a more reliable nonconservative tracer of trophic status. Results further provided evidence that ENSO occurrences might temporarily lower the trophic status of the New Caledonia lagoon. It is concluded that, due to such high-frequency temporal variability, the use of biogeochemical parameters in environmental surveys requires adapted sampling strategies, data management, and environmental alert methods. Copyright (c) 2010 Elsevier Ltd. All rights reserved.
Binary full adder, made of fusion gates, in a subexcitable Belousov-Zhabotinsky system
NASA Astrophysics Data System (ADS)
Adamatzky, Andrew
2015-09-01
In an excitable thin-layer Belousov-Zhabotinsky (BZ) medium a localized perturbation leads to the formation of omnidirectional target or spiral waves of excitation. A subexcitable BZ medium responds to asymmetric local perturbation by producing traveling localized excitation wave-fragments, distant relatives of dissipative solitons. The size and life span of an excitation wave-fragment depend on the illumination level of the medium. Under the right conditions the wave-fragments conserve their shape and velocity vectors for extended time periods. I interpret the wave-fragments as values of Boolean variables. When two or more wave-fragments collide they annihilate or merge into a new wave-fragment. States of the logic variables, represented by the wave-fragments, are changed as a result of collisions between the wave-fragments. Thus, a logical gate is implemented. Several theoretical designs and experimental laboratory implementations of Boolean logic gates have been proposed in the past, but little has been done on cascading the gates into binary arithmetical circuits. I propose a unique design of a binary one-bit full adder based on a fusion gate. A fusion gate is a two-input three-output logical device which calculates the conjunction of the input variables and the conjunction of one input variable with the negation of the other. The gate is made of three channels: two channels cross each other at an angle, and a third channel starts at the junction. The channels contain a BZ medium. When two excitation wave-fragments, traveling towards each other along the input channels, collide at the junction, they merge into a single wave-front traveling along the third channel. If there is just one wave-fragment in an input channel, the front continues its propagation undisturbed. I make a one-bit full adder by cascading two fusion gates. I show how to cascade the adder blocks into a many-bit full adder. I evaluate the feasibility of my designs by simulating the evolution of excitation in the gates and adders using numerical integration of the Oregonator equations.
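The gate's truth table and the two-gate cascade can be checked in a few lines. In the sketch below, channel merging is modelled as Boolean OR, and the wiring is a plausible reconstruction under that assumption rather than the paper's exact geometry.

```python
# Boolean check of a fusion gate and a one-bit full adder built from two
# of them; OR stands in for wave-front merging in the BZ medium.
def fusion_gate(x, y):
    """Two inputs, three outputs: x AND NOT y, x AND y, NOT x AND y."""
    return x and not y, x and y, (not x) and y

def full_adder(a, b, c):
    p1, p2, p3 = fusion_gate(a, b)
    half_sum = p1 or p3                  # a XOR b via merged side channels
    q1, q2, q3 = fusion_gate(half_sum, c)
    return (q1 or q3), (p2 or q2)        # sum bit, carry-out

for a in (0, 1):
    for b in (0, 1):
        for c in (0, 1):
            s, cout = full_adder(bool(a), bool(b), bool(c))
            assert s + 2 * cout == a + b + c
print("full adder verified over all 8 input combinations")
```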
UWB delay and multiply receiver
Dallum, Gregory E.; Pratt, Garth C.; Haugen, Peter C.; Romero, Carlos E.
2013-09-10
An ultra-wideband (UWB) delay and multiply receiver is formed of a receive antenna; a variable gain attenuator connected to the receive antenna; a signal splitter connected to the variable gain attenuator; a multiplier having one input connected to an undelayed signal from the signal splitter and another input connected to a delayed signal from the signal splitter, the delay between the splitter signals being equal to the spacing between pulses from a transmitter whose pulses are being received by the receive antenna; a peak detection circuit connected to the output of the multiplier and connected to the variable gain attenuator to control the variable gain attenuator to maintain a constant amplitude output from the multiplier; and a digital output circuit connected to the output of the multiplier.
Cold Environment Assessment Tool (CEAT) User’s Guide for Apple Mobile Devices
2015-06-01
discussion of mobile Android device relevance to the military, see "Android Smartphone Relevance to Military Weather Applications". 2. CEAT Inputs To...eating, resting, sleeping, clerical work • Low: Walking, marching without rucksack, drill and ceremony • High: Digging foxhole, running, marching...2013. 5. Sauter, D. Android smartphone relevance to military weather applications. White Sands Missile Range (NM): Army Research Laboratory (US
Relevant, irredundant feature selection and noisy example elimination.
Lashkia, George V; Anthony, Laurence
2004-04-01
In many real-world situations, the method for computing the desired output from a set of inputs is unknown. One strategy for solving these types of problems is to learn the input-output functionality from examples in a training set. However, in many situations it is difficult to know what information is relevant to the task at hand. Subsequently, researchers have investigated ways to deal with the so-called problem of consistency of attributes, i.e., attributes that can distinguish examples from different classes. In this paper, we first prove that the notion of relevance of attributes is directly related to the consistency of attributes, and show how relevant, irredundant attributes can be selected. We then compare different relevant attribute selection algorithms, and show the superiority of algorithms that select irredundant attributes over those that select relevant attributes. We also show that searching for an "optimal" subset of attributes, which is considered to be the main purpose of attribute selection, is not the best way to improve the accuracy of classifiers. Employing sets of relevant, irredundant attributes improves classification accuracy in many more cases. Finally, we propose a new method for selecting relevant examples, which is based on filtering the so-called pattern frequency domain. By identifying examples that are nontypical in the determination of relevant, irredundant attributes, irrelevant examples can be eliminated prior to the learning process. Empirical results using artificial and real databases show the effectiveness of the proposed method in selecting relevant examples leading to improved performance even on greatly reduced training sets.
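A hedged sketch of the relevance-minus-redundancy idea using mutual information (an mRMR-style criterion standing in for, and not identical to, the paper's consistency-based algorithm):

```python
# Greedy selection of relevant, irredundant attributes: reward relevance
# to the class, penalize redundancy with already-selected attributes.
import numpy as np
from sklearn.feature_selection import mutual_info_classif
from sklearn.metrics import mutual_info_score

rng = np.random.default_rng(5)
X = rng.integers(0, 3, size=(200, 12))        # 200 examples x 12 discrete attributes
y = (X[:, 0] + X[:, 3] > 2).astype(int)       # class depends on attributes 0 and 3

relevance = mutual_info_classif(X, y, discrete_features=True)
selected = [int(np.argmax(relevance))]
while len(selected) < 4:
    best, best_score = None, -np.inf
    for j in range(X.shape[1]):
        if j in selected:
            continue
        redundancy = np.mean([mutual_info_score(X[:, j], X[:, s]) for s in selected])
        score = relevance[j] - redundancy      # relevant but not redundant
        if score > best_score:
            best, best_score = j, score
    selected.append(best)
print("selected attributes:", selected)
```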
Alpha1 LASSO data bundles Lamont, OK
Gustafson, William Jr; Vogelmann, Andrew; Endo, Satoshi; Toto, Tami; Xiao, Heng; Li, Zhijin; Cheng, Xiaoping; Krishna, Bhargavi (ORCID: 0000-0001-8828-528X)
2016-08-03
A data bundle is a unified package consisting of LASSO LES input and output, observations, evaluation diagnostics, and model skill scores. LES input includes model configuration information and forcing data. LES output includes profile statistics and full domain fields of cloud and environmental variables. Model evaluation data consists of LES output and ARM observations co-registered on the same grid and sampling frequency. Model performance is quantified by skill scores and diagnostics in terms of cloud and environmental variables.
A Multivariate Analysis of the Early Dropout Process
ERIC Educational Resources Information Center
Fiester, Alan R.; Rudestam, Kjell E.
1975-01-01
Principal-component factor analyses were performed on patient input (demographic and pretherapy expectations), therapist input (demographic), and patient perspective therapy process variables that significantly differentiated early dropout from nondropout outpatients at two community mental health centers. (Author)
Approach for Input Uncertainty Propagation and Robust Design in CFD Using Sensitivity Derivatives
NASA Technical Reports Server (NTRS)
Putko, Michele M.; Taylor, Arthur C., III; Newman, Perry A.; Green, Lawrence L.
2002-01-01
An implementation of the approximate statistical moment method for uncertainty propagation and robust optimization for a quasi-3-D Euler CFD code is presented. Given uncertainties in statistically independent, random, normally distributed input variables, first- and second-order statistical moment procedures are performed to approximate the uncertainty in the CFD output. Efficient calculation of both first- and second-order sensitivity derivatives is required. In order to assess the validity of the approximations, these moments are compared with statistical moments generated through Monte Carlo simulations. The uncertainties in the CFD input variables are also incorporated into a robust optimization procedure. For this optimization, statistical moments involving first-order sensitivity derivatives appear in the objective function and system constraints. Second-order sensitivity derivatives are used in a gradient-based search to successfully execute a robust optimization. The approximate methods used throughout the analyses are found to be valid when considering robustness about input parameter mean values.
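The first-order moment method itself is compact. A sketch for a toy output function, assuming analytic sensitivity derivatives and independent normal inputs, with a Monte Carlo check (the real application uses derivatives from the CFD code, not a closed-form f):

```python
# First-order moment propagation versus Monte Carlo for a toy output f(x).
import numpy as np

def f(x):                                # stand-in for a CFD output functional
    return x[0]**2 + 3.0 * np.sin(x[1])

mu = np.array([1.0, 0.5])
sigma = np.array([0.05, 0.02])           # independent, normal inputs

# Analytic first derivatives of f at the input means.
grad = np.array([2.0 * mu[0], 3.0 * np.cos(mu[1])])
mean_approx = f(mu)                      # first-order approximate mean
var_approx = np.sum((grad * sigma)**2)   # first-order approximate variance

rng = np.random.default_rng(11)
samples = rng.normal(mu, sigma, size=(20000, 2))
mc = np.array([f(s) for s in samples])
print(mean_approx, mc.mean(), np.sqrt(var_approx), mc.std())
```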
Variable ratio regenerative braking device
Hoppie, Lyle O.
1981-12-15
Disclosed is a regenerative braking device (10) for an automotive vehicle. The device includes an energy storage assembly (12) having a plurality of rubber rollers (26, 28) mounted for rotation between an input shaft (36) and an output shaft (42), clutches (38, 46) and brakes (40, 48) associated with each shaft, and a continuously variable transmission (22) connectable to a vehicle drivetrain and to the input and output shafts by the respective clutches. The rubber rollers are torsionally stressed to accumulate energy from the vehicle when the input shaft is clutched to the transmission while the brake on the output shaft is applied, and are torsionally relaxed to deliver energy to the vehicle when the output shaft is clutched to the transmission while the brake on the input shaft is applied. The transmission ratio is varied to control the rate of energy accumulation and delivery for a given rotational speed of the vehicle drivetrain.
NASA Technical Reports Server (NTRS)
Chen, B. M.; Saber, A.
1993-01-01
A simple and noniterative procedure for the computation of the exact value of the infimum in the singular H(infinity)-optimization problem is presented, as a continuation of our earlier work. Our problem formulation is general and we do not place any restrictions on the finite and infinite zero structures of the system, or on the direct feedthrough terms between the control input and the controlled output variables and between the disturbance input and the measurement output variables. Our method is applicable to a class of singular H(infinity)-optimization problems for which the transfer functions from the control input to the controlled output and from the disturbance input to the measurement output satisfy certain geometric conditions. In particular, the paper extends the result of earlier work by allowing these two transfer functions to have invariant zeros on the j(omega) axis.
NASA Astrophysics Data System (ADS)
Yadav, D.; Upadhyay, H. C.
1992-07-01
Vehicles obtain track-induced input through the wheels, which commonly number more than one. The analysis available for vehicle response in a variable-velocity run on a non-homogeneously profiled flexible track supported by a compliant inertial foundation is for a linear heave model having a single ground input. This analysis is extended here to two-point input models with heave-pitch and heave-roll degrees of freedom. Closed-form expressions have been developed for the system response statistics. Results are presented for a railway coach and track/foundation problem, and the performances of the heave, heave-pitch and heave-roll models have been compared. The three models are found to agree in describing the track response. However, the vehicle sprung mass behaviour is predicted differently by these models, indicating the strong effect of coupling on the vehicle vibration.
Nonlinear Dynamic Models in Advanced Life Support
NASA Technical Reports Server (NTRS)
Jones, Harry
2002-01-01
To facilitate analysis, ALS systems are often assumed to be linear and time invariant, but they usually have important nonlinear and dynamic aspects. Nonlinear dynamic behavior can be caused by time varying inputs, changes in system parameters, nonlinear system functions, closed loop feedback delays, and limits on buffer storage or processing rates. Dynamic models are usually cataloged according to the number of state variables. The simplest dynamic models are linear, using only integration, multiplication, addition, and subtraction of the state variables. A general linear model with only two state variables can produce all the possible dynamic behavior of linear systems with many state variables, including stability, oscillation, or exponential growth and decay. Linear systems can be described using mathematical analysis. Nonlinear dynamics can be fully explored only by computer simulations of models. Unexpected behavior is produced by simple models having only two or three state variables with simple mathematical relations between them. Closed loop feedback delays are a major source of system instability. Exceeding limits on buffer storage or processing rates forces systems to change operating mode. Different equilibrium points may be reached from different initial conditions. Instead of one stable equilibrium point, the system may have several equilibrium points, oscillate at different frequencies, or even behave chaotically, depending on the system inputs and initial conditions. The frequency spectrum of an output oscillation may contain harmonics and the sums and differences of input frequencies, but it may also contain a stable limit cycle oscillation not related to input frequencies. We must investigate the nonlinear dynamic aspects of advanced life support systems to understand and counter undesirable behavior.
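A minimal sketch of the two-state linear case described above: the eigenvalues of the system matrix determine whether trajectories decay, oscillate, or grow. The matrices are illustrative, not an ALS model.

```python
# Two-state linear system dx/dt = A x; eigenvalues of A set the behaviour.
import numpy as np

def simulate(A, x0, dt=0.01, steps=2000):
    x = np.array(x0, dtype=float)
    for _ in range(steps):
        x = x + dt * (A @ x)        # forward-Euler integration
    return x

damped_oscillator = np.array([[0.0, 1.0], [-4.0, -0.5]])  # complex eigenvalues, Re < 0
exponential_growth = np.array([[0.1, 0.0], [0.0, 0.2]])   # real positive eigenvalues

print(np.linalg.eigvals(damped_oscillator))   # decaying oscillation
print(simulate(damped_oscillator, [1.0, 0.0]))
print(np.linalg.eigvals(exponential_growth))  # unstable exponential growth
print(simulate(exponential_growth, [1.0, 1.0]))
```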
Predicting language outcomes for children learning AAC: Child and environmental factors
Brady, Nancy C.; Thiemann-Bourque, Kathy; Fleming, Kandace; Matthews, Kris
2014-01-01
Purpose To investigate a model of language development for nonverbal preschool age children learning to communicate with AAC. Method Ninety-three preschool children with intellectual disabilities were assessed at Time 1, and 82 of these children were assessed one year later at Time 2. The outcome variable was the number of different words the children produced (with speech, sign or SGD). Children’s intrinsic predictor for language was modeled as a latent variable consisting of cognitive development, comprehension, play, and nonverbal communication complexity. Adult input at school and home, and amount of AAC instruction were proposed mediators of vocabulary acquisition. Results A confirmatory factor analysis revealed that measures converged as a coherent construct and an SEM model indicated that the intrinsic child predictor construct predicted different words children produced. The amount of input received at home but not at school was a significant mediator. Conclusions Our hypothesized model accurately reflected a latent construct of Intrinsic Symbolic Factor (ISF). Children who evidenced higher initial levels of ISF and more adult input at home produced more words one year later. Findings support the need to assess multiple child variables, and suggest interventions directed to the indicators of ISF and input. PMID:23785187
Zuhtuogullari, Kursat; Allahverdi, Novruz; Arikan, Nihat
2013-01-01
Systems with high-dimensional input spaces require long processing times and high memory usage. Most attribute selection algorithms suffer from input dimension limits and information storage problems. These problems are eliminated by the developed feature reduction software, which uses a new modified selection mechanism that adds solution candidates from the middle region. The hybrid system software is constructed for reducing the input attributes of systems with a large number of input variables. The designed software also supports the roulette wheel selection mechanism. Linear order crossover is used as the recombination operator. In genetic algorithm based soft computing methods, locking onto local solutions is also a problem, which is eliminated by the developed software. Faster and more effective results are obtained in the test procedures. Twelve input variables of the urological system have been reduced to reducts (reduced input attribute sets) with seven, six, and five elements. It can be seen from the obtained results that the developed software with modified selection has advantages in memory allocation, execution time, classification accuracy, sensitivity, and specificity values when compared with other reduction algorithms on the urological test data.
To twist, roll, stroke or poke? A study of input devices for menu navigation in the cockpit.
Stanton, Neville A; Harvey, Catherine; Plant, Katherine L; Bolton, Luke
2013-01-01
Modern interfaces within the aircraft cockpit integrate many flight management system (FMS) functions into a single system. The success of a user's interaction with an interface depends upon the optimisation between the input device, tasks and environment within which the system is used. In this study, four input devices were evaluated using a range of Human Factors methods, in order to assess aspects of usability including task interaction times, error rates, workload, subjective usability and physical discomfort. The performance of the four input devices was compared using a holistic approach and the findings showed that no single input device produced consistently high performance scores across all of the variables evaluated. The touch screen produced the highest number of 'best' scores; however, discomfort ratings for this device were high, suggesting that it is not an ideal solution as both physical and cognitive aspects of performance must be accounted for in design. This study evaluated four input devices for control of a screen-based flight management system. A holistic approach was used to evaluate both cognitive and physical performance. Performance varied across the dependent variables and between the devices; however, the touch screen produced the largest number of 'best' scores.
Interacting with notebook input devices: an analysis of motor performance and users' expertise.
Sutter, Christine; Ziefle, Martina
2005-01-01
In the present study the usability of two different types of notebook input devices was examined. The independent variables were input device (touchpad vs. mini-joystick) and user expertise (expert vs. novice state). There were 30 participants, of whom 15 were touchpad experts and the other 15 were mini-joystick experts. The experimental tasks were a point-click task (Experiment 1) and a point-drag-drop task (Experiment 2). Dependent variables were the time and accuracy of cursor control. To assess carryover effects, we had the participants complete both experiments, using not only the input device for which they were experts but also the device for which they were novices. Results showed the touchpad performance to be clearly superior to mini-joystick performance. Overall, experts showed better performance than did novices. The significant interaction of input device and expertise showed that the use of an unknown device is difficult, but only for touchpad experts, who were remarkably slower and less accurate when using a mini-joystick. Actual and potential applications of this research include an evaluation of current notebook input devices. The outcomes allow ergonomic guidelines to be derived for optimized usage and design of the mini-joystick and touchpad devices.
Sanders, Matthew R; Kirby, James N
2012-06-01
A consumer perspective can contribute much to enhancing the "ecological fit" of population-level parenting interventions so they meet the needs of parents. This approach involves building relationships with consumer groups and soliciting consumer input into the relevance and acceptability of interventions, clarifying the enablers and barriers to engagement and involvement of parents, and clarifying variables that influence a parent's program completion. The adoption of a more collaborative approach to working with consumers is important if meaningful population-level change in the prevalence of serious social, emotional, and behavioral problems in children and young people is to be achieved. Parents seeking assistance for their children's behavior come from a diverse range of socioeconomic backgrounds, educational levels, cultures, and languages. This paper examines consumer engagement strategies that can be employed throughout the process of program development, evaluation, training, and dissemination, and in "scaling up" the intervention. We argue that a multilevel public health approach to parenting intervention requires a strong consumer perspective to enable interventions to be more responsive to the preferences and needs of families and to ensure improved population reach of interventions. Examples from large-scale dissemination trials are used to illustrate how consumer input can result in an increasingly differentiated suite of evidence-based parenting programs. Copyright © 2011. Published by Elsevier Ltd.
Fractional cable model for signal conduction in spiny neuronal dendrites
NASA Astrophysics Data System (ADS)
Vitali, Silvia; Mainardi, Francesco
2017-06-01
The cable model is widely used in several fields of science to describe the propagation of signals. A relevant medical and biological example is the anomalous subdiffusion in spiny neuronal dendrites observed in several studies of the last decade. Anomalous subdiffusion can be modelled in several ways by introducing a fractional component into the classical cable model. The Cauchy problem associated with these kinds of models has been investigated by many authors, but to our knowledge an explicit solution for the signalling problem has not yet been published. Here we propose how this solution can be derived by applying the generalized convolution theorem (known as the Efros theorem) for Laplace transforms. The fractional cable model considered in this paper is defined by replacing the first-order time derivative with a fractional derivative of order α ∈ (0, 1) of Caputo type. The signalling problem is solved for any input function applied to the accessible end of a semi-infinite cable which satisfies the requirements of the Efros theorem. The solutions corresponding to the simple cases of impulsive and step inputs are explicitly calculated in integral form containing Wright functions. Thanks to the variability of the parameter α, the corresponding solutions are expected to adapt to the qualitative behaviour of the membrane potential observed in experiments better than in the standard case α = 1.
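For concreteness, a sketch of the model in dimensionless form; the paper's exact scaling constants and boundary data are not reproduced, only the structural replacement of the time derivative by a Caputo derivative.

```latex
% Dimensionless fractional cable equation (sketch): the classical
% first-order time derivative is replaced by a Caputo derivative of
% order \alpha \in (0,1).
\[
  {}^{C}\!D^{\alpha}_{t}\, V(x,t)
  \;=\; \frac{\partial^{2} V(x,t)}{\partial x^{2}} \;-\; V(x,t),
  \qquad 0 < \alpha < 1,
\]
\[
  {}^{C}\!D^{\alpha}_{t}\, V(x,t)
  \;=\; \frac{1}{\Gamma(1-\alpha)}
        \int_{0}^{t} (t-\tau)^{-\alpha}\,
        \frac{\partial V(x,\tau)}{\partial \tau}\, d\tau .
\]
% Signalling problem: V(0,t) = prescribed input at the accessible end,
% V(x,0) = 0, and V(x,t) -> 0 as x -> infinity (semi-infinite cable).
```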
Statistical Analysis of 30 Years Rainfall Data: A Case Study
NASA Astrophysics Data System (ADS)
Arvind, G.; Ashok Kumar, P.; Girish Karthi, S.; Suribabu, C. R.
2017-07-01
Rainfall is a prime input for various engineering designs such as hydraulic structures, bridges and culverts, canals, storm water sewers and road drainage systems. A detailed statistical analysis of each region is essential to estimate the relevant input values for the design and analysis of engineering structures and also for crop planning. A rain gauge station located in Trichy district, where agriculture is the prime occupation, is selected for statistical analysis. The daily rainfall data for a period of 30 years are used to characterize the normal rainfall, deficit rainfall, excess rainfall and seasonal rainfall of the selected circle headquarters. Further, the various plotting position formulae available are used to evaluate the return periods of monthly, seasonal and annual rainfall. This analysis will provide useful information for water resources planners, farmers and urban engineers to assess the availability of water and create storage accordingly. The mean, standard deviation and coefficient of variation of monthly and annual rainfall were calculated to check the rainfall variability. From the calculated results, the rainfall pattern is found to be erratic. The best-fit probability distribution was identified based on the minimum deviation between actual and estimated values. The analysis also paved the way to determine the proper onset and withdrawal of the monsoon, information that can be used for land preparation and sowing.
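As a concrete illustration of the plotting-position step, the sketch below (hypothetical rainfall values, not the Trichy data) ranks annual totals and applies the common Weibull formula T = (n + 1)/m; the mean, standard deviation and coefficient of variation used to check variability are computed alongside.

```python
# Illustrative sketch (invented data): return periods of annual rainfall
# via the Weibull plotting-position formula T = (n + 1) / m, where m is
# the rank of an observation when the series is sorted in descending order.
import numpy as np

annual_rainfall_mm = np.array([812.0, 905.5, 640.2, 1103.8, 998.4,
                               750.9, 860.3, 1012.7, 695.1, 930.6])

sorted_rain = np.sort(annual_rainfall_mm)[::-1]     # descending order
ranks = np.arange(1, len(sorted_rain) + 1)          # m = 1 for the largest value
return_period = (len(sorted_rain) + 1) / ranks      # Weibull formula

# Variability statistics used in the study: mean, sd, CV = sd / mean.
mean, sd = annual_rainfall_mm.mean(), annual_rainfall_mm.std(ddof=1)
print(f"mean={mean:.1f} mm, sd={sd:.1f} mm, CV={sd / mean:.2f}")
for r, t in zip(sorted_rain, return_period):
    print(f"{r:7.1f} mm  ->  T = {t:4.1f} yr")
```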
A European model and case studies for aggregate exposure assessment of pesticides.
Kennedy, Marc C; Glass, C Richard; Bokkers, Bas; Hart, Andy D M; Hamey, Paul Y; Kruisselbrink, Johannes W; de Boer, Waldo J; van der Voet, Hilko; Garthwaite, David G; van Klaveren, Jacob D
2015-05-01
Exposures to plant protection products (PPPs) are assessed using risk analysis methods to protect public health. Traditionally, single sources, such as food or individual occupational sources, have been addressed. In reality, individuals can be exposed simultaneously to multiple sources. Improved regulation therefore requires the development of new tools for estimating the population distribution of exposures aggregated within an individual. A new aggregate model is described, which allows individual users to include as much, or as little, information as is available or relevant for their particular scenario. Depending on the inputs provided by the user, the outputs can range from simple deterministic values through to probabilistic analyses including characterisations of variability and uncertainty. Exposures can be calculated for multiple compounds, routes and sources of exposure. The aggregate model links to the cumulative dietary exposure model developed in parallel and is implemented in the web-based software tool MCRA. Case studies are presented to illustrate the potential of this model, with inputs drawn from existing European data sources and models. These cover exposures to UK arable spray operators, Italian vineyard spray operators, Netherlands users of a consumer spray and UK bystanders/residents. The model could also be adapted to handle non-PPP compounds. Crown Copyright © 2014. Published by Elsevier Ltd. All rights reserved.
Automatic design of basin-specific drought indexes for highly regulated water systems
NASA Astrophysics Data System (ADS)
Zaniolo, Marta; Giuliani, Matteo; Castelletti, Andrea Francesco; Pulido-Velazquez, Manuel
2018-04-01
Socio-economic costs of drought are progressively increasing worldwide due to ongoing alterations of hydro-meteorological regimes induced by climate change. Although drought management is largely studied in the literature, traditional drought indexes often fail to detect critical events in highly regulated systems, where natural water availability is conditioned by the operation of water infrastructures such as dams, diversions, and pumping wells. Here, ad hoc index formulations are usually adopted, based on empirical combinations of several, supposed-to-be significant, hydro-meteorological variables. These customized formulations, however, while effective in the design basin, can hardly be generalized and transferred to different contexts. In this study, we contribute FRIDA (FRamework for Index-based Drought Analysis), a novel framework for the automatic design of basin-customized drought indexes. In contrast to ad hoc empirical approaches, FRIDA is fully automated, generalizable, and portable across different basins. FRIDA builds an index representing a surrogate of the drought conditions of the basin, computed by combining all the relevant available information about the water circulating in the system, identified by means of a feature extraction algorithm. We used the Wrapper for Quasi-Equally Informative Subset Selection (W-QEISS), which features a multi-objective evolutionary algorithm to find Pareto-efficient subsets of variables by maximizing the wrapper accuracy, minimizing the number of selected variables, and optimizing the relevance and redundancy of the subset. The preferred variable subset is selected among the efficient solutions and used to formulate the final index according to alternative model structures. We apply FRIDA to the case study of the Jucar river basin (Spain), a drought-prone and highly regulated Mediterranean water resource system, where an advanced drought management plan relies on an ad hoc state index for triggering drought management measures. The state index was constructed empirically through a trial-and-error process begun in the 1980s and finalized in 2007, guided by experts from the Confederación Hidrográfica del Júcar (CHJ). Our results show that the automated variable selection outcomes align with CHJ's 25-year-long empirical refinement. In addition, the resulting FRIDA index outperforms the official state index in terms of accuracy in reproducing the target variable and cardinality of the selected input set.
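A rough sketch of the four objectives W-QEISS trades off is given below; the helper subset_objectives, the surrogate model standing in for the wrapper's classifier, and the toy data are all illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch of W-QEISS-style objectives for one candidate input subset:
# wrapper accuracy, cardinality, relevance, redundancy.
import numpy as np
from itertools import combinations
from sklearn.feature_selection import mutual_info_regression
from sklearn.model_selection import cross_val_score
from sklearn.ensemble import RandomForestRegressor

def subset_objectives(X, y, subset):
    Xs = X[:, subset]
    # 1) wrapper accuracy: cross-validated skill of a surrogate model
    acc = cross_val_score(RandomForestRegressor(n_estimators=100, random_state=0),
                          Xs, y, cv=3, scoring="r2").mean()
    # 2) cardinality: number of selected variables
    card = len(subset)
    # 3) relevance: mean mutual information between inputs and target
    rel = mutual_info_regression(Xs, y, random_state=0).mean()
    # 4) redundancy: mean pairwise mutual information within the subset
    pairs = [mutual_info_regression(Xs[:, [i]], Xs[:, j], random_state=0)[0]
             for i, j in combinations(range(len(subset)), 2)]
    red = float(np.mean(pairs)) if pairs else 0.0
    return acc, card, rel, red

rng = np.random.default_rng(0)                      # toy data
X = rng.normal(size=(200, 6))
y = X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=200)
print(subset_objectives(X, y, [0, 1, 2]))
```

A multi-objective evolutionary algorithm would then search over subsets to maximise accuracy and relevance while minimising cardinality and redundancy, yielding the Pareto-efficient subsets the abstract describes.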
Growth and yield model application in tropical rain forest management
James Atta-Boateng; John W. Moser, Jr.
2000-01-01
Analytical tools are needed to evaluate the impact of management policies on the sustainable use of rain forest. Optimal decisions concerning the level of management inputs require accurate predictions of output at all relevant input levels. Using growth data from 40 one-hectare permanent plots obtained from the semi-deciduous forest of Ghana, a system of 77 differential...
ERIC Educational Resources Information Center
Campfield, Dorota E.; Murphy, Victoria A.
2017-01-01
This paper reports on an intervention study with young Polish beginners (mean age: 8 years, 3 months) learning English at school. It seeks to identify whether exposure to rhythmic input improves knowledge of word order and function words. The "prosodic bootstrapping hypothesis", relevant in developmental psycholinguistics, provided the…
The southern forest futures project: using public input to define the issues
D.N. Wear; J.G. Greis; N. Walters
2009-01-01
The Southern Forest Futures Project has been designed to evaluate the implications of potential futures for the many goods and services forests in the Southeastern United States provide. To ensure that the Futures Project is comprehensive and relevant, we have begun with a thorough scoping of issues using a process that elicits input from various interested publics. We...
A Reward-Maximizing Spiking Neuron as a Bounded Rational Decision Maker.
Leibfried, Felix; Braun, Daniel A
2015-08-01
Rate distortion theory describes how to communicate relevant information most efficiently over a channel with limited capacity. One of the many applications of rate distortion theory is bounded rational decision making, where decision makers are modeled as information channels that transform sensory input into motor output under the constraint that their channel capacity is limited. Such a bounded rational decision maker can be thought of as optimizing an objective function that trades off the decision maker's utility or cumulative reward against the information processing cost, measured by the mutual information between sensory input and motor output. In this study, we interpret a spiking neuron as a bounded rational decision maker that aims to maximize its expected reward under the computational constraint that the mutual information between the neuron's input and output is upper bounded. This abstract computational constraint translates into a penalization of the deviation between the neuron's instantaneous and average firing behavior. We derive a synaptic weight update rule for such a rate distortion optimizing neuron and show in simulations that the neuron efficiently extracts reward-relevant information from the input by trading off its synaptic strengths against the collected reward.
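In hedged, schematic notation, the trade-off described above can be written as an objective over the channel p(y|x) from sensory input X to motor output Y:

```latex
% Schematic bounded-rational objective (notation assumed for illustration):
% expected reward traded off against processed information, with
% trade-off parameter \beta > 0.
\[
  \max_{p(y \mid x)} \; \mathbb{E}\,[R(x,y)] \;-\; \frac{1}{\beta}\, I(X;Y),
  \qquad
  I(X;Y) \;=\; \sum_{x,y} p(x)\, p(y \mid x)\, \log \frac{p(y \mid x)}{p(y)} .
\]
```

Since I(X;Y) is the average Kullback-Leibler divergence of p(y|x) from the marginal p(y), bounding it indeed penalizes deviations of the instantaneous from the average firing behavior, as the abstract states.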
Xing, Lizhi
2017-01-01
The input-output table is comprehensive and detailed in describing the national economic system with complex economic relationships, and it embodies information on supply and demand among industrial sectors. This paper aims to measure the degree of competition/collaboration along the global value chain from the perspective of econophysics. Global Industrial Strongest Relevant Network models were established by extracting the strongest and most immediate industrial relevance in the global economic system with inter-country input-output tables, and then transformed into Global Industrial Resource Competition Network/Global Industrial Production Collaboration Network models embodying the competitive/collaborative relationships based on the bibliographic coupling/co-citation approach. Three indicators well suited for these two kinds of weighted and non-directed networks with self-loops were introduced, including unit weight for competitive/collaborative power, disparity in the weight for competitive/collaborative amplitude and weighted clustering coefficient for competitive/collaborative intensity. Finally, these models and indicators were applied to empirically analyze the function of sectors in the latest World Input-Output Database, to reveal inter-sector competitive/collaborative status during economic globalization.
Using a Bayesian network to predict barrier island geomorphologic characteristics
Gutierrez, Ben; Plant, Nathaniel G.; Thieler, E. Robert; Turecek, Aaron
2015-01-01
Quantifying geomorphic variability of coastal environments is important for understanding and describing the vulnerability of coastal topography, infrastructure, and ecosystems to future storms and sea level rise. Here we use a Bayesian network (BN) to test the importance of multiple interactions between barrier island geomorphic variables. This approach models complex interactions and handles uncertainty, which is intrinsic to future sea level rise, storminess, or anthropogenic processes (e.g., beach nourishment and other forms of coastal management). The BN was developed and tested at Assateague Island, Maryland/Virginia, USA, a barrier island with sufficient geomorphic and temporal variability to evaluate our approach. We tested the ability to predict dune height, beach width, and beach height variables using inputs that included longer-term, larger-scale, or external variables (historical shoreline change rates, distances to inlets, barrier width, mean barrier elevation, and anthropogenic modification). Data sets from three different years spanning nearly a decade sampled substantial temporal variability and serve as a proxy for analysis of future conditions. We show that distinct geomorphic conditions are associated with different long-term shoreline change rates and that the most skillful predictions of dune height, beach width, and beach height depend on including multiple input variables simultaneously. The predictive relationships are robust to variations in the amount of input data and to variations in model complexity. The resulting model can be used to evaluate scenarios related to coastal management plans and/or future scenarios where shoreline change rates may differ from those observed historically.
Utilisation of Local Inputs in the Funding and Administration of Education in Nigeria
ERIC Educational Resources Information Center
Akiri, Agharuwhe A.
2014-01-01
The article discussed how, why and who is in charge of administering and funding schools in Nigeria. The author utilised the relevant statistical approach which examined and discussed various political and historical trends affecting education. Besides this, relevant documented statistical data were used to both buttress and substantiate related…
Symbolic PathFinder: Symbolic Execution of Java Bytecode
NASA Technical Reports Server (NTRS)
Pasareanu, Corina S.; Rungta, Neha
2010-01-01
Symbolic Pathfinder (SPF) combines symbolic execution with model checking and constraint solving for automated test case generation and error detection in Java programs with unspecified inputs. In this tool, programs are executed on symbolic inputs representing multiple concrete inputs. Values of variables are represented as constraints generated from the analysis of Java bytecode. The constraints are solved using off-the-shelf solvers to generate test inputs guaranteed to achieve complex coverage criteria. SPF has been used successfully at NASA, in academia, and in industry.
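SPF itself operates on Java bytecode, but the core idea can be sketched in a few lines of Python with the z3 solver: collect the branch constraints of each path over a symbolic input and ask the solver for concrete test inputs (a toy analogue for illustration, not SPF's implementation).

```python
# Toy symbolic execution: a symbolic variable stands in for an unspecified
# input; each path's branch constraints are handed to a constraint solver,
# which returns a concrete input exercising that path (or proves it infeasible).
from z3 import Int, Solver, And, sat

x = Int("x")  # symbolic stand-in for an unspecified program input

# Paths of:  if x > 10: (if x < 20: A else: B) else: C
paths = {
    "A": And(x > 10, x < 20),
    "B": And(x > 10, x >= 20),
    "C": x <= 10,
}

for name, constraint in paths.items():
    s = Solver()
    s.add(constraint)
    if s.check() == sat:
        print(f"path {name}: test input x = {s.model()[x]}")
    else:
        print(f"path {name}: infeasible")
```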
NASA Astrophysics Data System (ADS)
Greco, R.; Sorriso-Valvo, M.
2013-09-01
Logistic regression (LR), a multivariate statistical technique adopted to assess the spatial probability of landslides, has been employed by several authors according to different methodological approaches, even though its fundamental principles have remained unaltered. This study aims at assessing the influence of some of these methodological approaches on the performance of LR, through a series of sensitivity analyses developed over a test area of about 300 km2 in Calabria (southern Italy). In particular, four types of sampling (1 - the whole study area; 2 - transects running parallel to the general slope direction of the study area with a total surface of about 1/3 of the whole study area; 3 - buffers surrounding the phenomena with a 1/1 ratio between the stable and the unstable area; 4 - buffers surrounding the phenomena with a 1/2 ratio between the stable and the unstable area), two variable coding modes (1 - grouped variables; 2 - binary variables), and two types of elementary land units (1 - cell units; 2 - slope units) have been tested. The obtained results must be considered statistically relevant in all cases (Aroc values > 70%), thus confirming the soundness of the LR analysis, which maintains high predictive capacity notwithstanding the features of the input data. As for the area under investigation, the best performing methodological choices are the following: (i) among the sampling types, transects produced the best results (0 < P(y) ≤ 93.4%; Aroc = 79.5%); (ii) as for coding modes, binary variables (0 < P(y) ≤ 98.3%; Aroc = 80.7%) provide better performance than grouped variables; (iii) as for the choice of elementary land units, slope units (0 < P(y) ≤ 100%; Aroc = 84.2%) obtained better results than cell units.
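A minimal sketch of the underlying modelling step, on invented data, is shown below: a logistic regression over binary-coded predictors scored with the area under the ROC curve, the Aroc statistic reported in the abstract.

```python
# Illustrative sketch (hypothetical data, not the study's): logistic
# regression on binary-coded factors, validated with AUC ("Aroc").
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 2000
X = rng.integers(0, 2, size=(n, 4)).astype(float)   # binary-coded predictors
logit = -2.0 + 1.5 * X[:, 0] + 1.0 * X[:, 1] - 0.5 * X[:, 2]
y = rng.random(n) < 1 / (1 + np.exp(-logit))        # landslide presence/absence

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1)
model = LogisticRegression().fit(X_tr, y_tr)
p = model.predict_proba(X_te)[:, 1]                 # P(y): spatial probability
print(f"Aroc = {roc_auc_score(y_te, p):.3f}")
```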
Zawbaa, Hossam M; Szlȩk, Jakub; Grosan, Crina; Jachowicz, Renata; Mendyk, Aleksander
2016-01-01
Poly-lactide-co-glycolide (PLGA) is a copolymer of lactic and glycolic acid. Drug release from PLGA microspheres depends not only on polymer properties but also on drug type, particle size, morphology of microspheres, release conditions, etc. Selecting a subset of relevant properties for PLGA is a challenging machine learning task as there are over three hundred features to consider. In this work, we formulate the selection of critical attributes for PLGA as a multiobjective optimization problem with the aim of minimizing the error of predicting the dissolution profile while reducing the number of attributes selected. Four bio-inspired optimization algorithms: antlion optimization, binary version of antlion optimization, grey wolf optimization, and social spider optimization are used to select the optimal feature set for predicting the dissolution profile of PLGA. Besides these, LASSO algorithm is also used for comparisons. Selection of crucial variables is performed under the assumption that both predictability and model simplicity are of equal importance to the final result. During the feature selection process, a set of input variables is employed to find minimum generalization error across different predictive models and their settings/architectures. The methodology is evaluated using predictive modeling for which various tools are chosen, such as Cubist, random forests, artificial neural networks (monotonic MLP, deep learning MLP), multivariate adaptive regression splines, classification and regression tree, and hybrid systems of fuzzy logic and evolutionary computations (fugeR). The experimental results are compared with the results reported by Szlȩk. We obtain a normalized root mean square error (NRMSE) of 15.97% versus 15.4%, and the number of selected input features is smaller, nine versus eleven.
Predicting Spike Occurrence and Neuronal Responsiveness from LFPs in Primary Somatosensory Cortex
Storchi, Riccardo; Zippo, Antonio G.; Caramenti, Gian Carlo; Valente, Maurizio; Biella, Gabriele E. M.
2012-01-01
Local Field Potentials (LFPs) integrate multiple neuronal events like synaptic inputs and intracellular potentials. LFP spatiotemporal features are particularly relevant in view of their applications both in research (e.g. for understanding brain rhythms, inter-areal neural communication and neuronal coding) and in the clinics (e.g. for improving invasive Brain-Machine Interface devices). However, the relation between LFPs and spikes is complex and not fully understood. As spikes represent the fundamental currency of neuronal communication, this gap in knowledge strongly limits our comprehension of neuronal phenomena underlying LFPs. We investigated the LFP-spike relation during tactile stimulation in primary somatosensory (S-I) cortex in the rat. First we quantified how reliably LFPs and spikes code for a stimulus occurrence. Then we used the information obtained from our analyses to design a predictive model for spike occurrence based on LFP inputs. The model was endowed with a flexible meta-structure whose exact form, both in parameters and structure, was estimated by using a multi-objective optimization strategy. Our method provided a set of nonlinear simple equations that maximized the match between models and true neurons in terms of spike timings and Peri Stimulus Time Histograms. We found that both LFPs and spikes can code for stimulus occurrence with millisecond precision, showing, however, high variability. Spike patterns were predicted significantly above chance for 75% of the neurons analysed. Crucially, the level of prediction accuracy depended on the reliability in coding for the stimulus occurrence. The best predictions were obtained when both spikes and LFPs were highly responsive to the stimuli. Spike reliability is known to depend on neuron intrinsic properties (i.e. on channel noise) and on spontaneous local network fluctuations. Our results suggest that the latter, measured through the LFP response variability, play a dominant role. PMID:22586452
NASA Astrophysics Data System (ADS)
Shamkhali Chenar, S.; Deng, Z.
2017-12-01
Pathogenic viruses pose a significant public health threat and cause economic losses to the shellfish industry in the coastal environment. Norovirus is a contagious virus and the leading cause of epidemic gastroenteritis following consumption of oysters harvested from sewage-contaminated waters. While it is challenging to detect noroviruses in coastal waters due to the lack of sensitive and routine diagnostic methods, machine learning techniques allow us to prevent or at least reduce the risks by developing effective predictive models. This study develops a predictive relationship between historical norovirus outbreak reports and environmental parameters including water temperature, solar radiation, water level, salinity, precipitation, and wind. For this purpose, the Random Forests statistical technique was utilized to select the relevant environmental parameters and their various combinations with different time lags controlling the virus distribution in oyster harvesting areas along the Louisiana Coast. An Artificial Neural Networks (ANN) approach was then used to predict the outbreaks using the final set of input variables. Finally, a sensitivity analysis was conducted to evaluate the relative importance and contribution of the input variables to the model output. Findings demonstrated that the developed model was capable of reproducing historical oyster norovirus outbreaks along the Louisiana Coast with an overall accuracy of 99.83%, demonstrating the efficacy of the model. Moreover, increases in water temperature, solar radiation, water level, and salinity, and decreases in wind and rainfall, are associated with a reduction in the model-predicted risk of norovirus outbreak according to the sensitivity analysis results. In conclusion, the presented machine learning approach provides reliable tools for predicting potential norovirus outbreaks and could be used for early detection of possible outbreaks, reducing the risk of norovirus to public health and the seafood industry.
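The two-stage workflow can be sketched as follows; the synthetic data and the top-3 cutoff are assumptions for illustration, standing in for the environmental series and the lagged variable combinations described above.

```python
# Hedged sketch of the two-stage pipeline: Random Forest importance scores
# select a subset of inputs, then an ANN classifies outbreak vs. no outbreak.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 6))                       # 6 candidate env. variables
y = (X[:, 0] - X[:, 2] + 0.5 * rng.normal(size=500)) > 0   # outbreak yes/no

rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
keep = np.argsort(rf.feature_importances_)[-3:]     # top-3 ranked inputs

X_tr, X_te, y_tr, y_te = train_test_split(X[:, keep], y, random_state=0)
ann = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000,
                    random_state=0).fit(X_tr, y_tr)
print(f"selected inputs: {sorted(keep.tolist())}, "
      f"test accuracy: {ann.score(X_te, y_te):.2f}")
```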
Thrust Chamber Modeling Using Navier-Stokes Equations: Code Documentation and Listings. Volume 2
NASA Technical Reports Server (NTRS)
Daley, P. L.; Owens, S. F.
1988-01-01
A copy of the PHOENICS input files and FORTRAN code developed for the modeling of thrust chambers is given. These copies are contained in the Appendices. The listings are contained in Appendices A through E. Appendix A describes the input statements relevant to thrust chamber modeling as well as the FORTRAN code developed for the Satellite program. Appendix B describes the FORTRAN code developed for the Ground program. Appendices C through E contain copies of the Q1 (input) file, the Satellite program, and the Ground program respectively.
Soft sensor modeling based on variable partition ensemble method for nonlinear batch processes
NASA Astrophysics Data System (ADS)
Wang, Li; Chen, Xiangguang; Yang, Kai; Jin, Huaiping
2017-01-01
Batch processes are always characterized by nonlinear and uncertain properties; therefore, a conventional single model may be ill-suited. A local learning soft sensor based on a variable partition ensemble method is developed for the quality prediction of nonlinear and non-Gaussian batch processes. A set of input variable subsets is obtained by bootstrapping and the PMI criterion. Then, multiple local GPR models are developed, one for each local input variable set. When a new test sample arrives, the posterior probability of each best-performing local model is estimated based on Bayesian inference and used to combine these local GPR models into the final prediction result. The proposed soft sensor is demonstrated by application to an industrial fed-batch chlortetracycline fermentation process.
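A simplified sketch of such a variable-partition ensemble is shown below; the certainty-based weighting stands in for the paper's Bayesian posterior estimation, and the data and variable subsets are invented.

```python
# Simplified local-ensemble soft sensor: several local GPR models, each on
# a different input-variable subset, combined with weights acting as
# posterior model probabilities (here approximated by predictive certainty).
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(120, 2))
y = np.sin(X[:, 0]) + 0.3 * X[:, 1] + 0.1 * rng.normal(size=120)

subsets = [[0], [1], [0, 1]]                       # illustrative partitions
models = [GaussianProcessRegressor().fit(X[:, s], y) for s in subsets]

def predict(x_new):
    mus, sigmas = zip(*(m.predict(x_new[:, s], return_std=True)
                        for m, s in zip(models, subsets)))
    # higher predictive certainty -> higher combination weight
    w = np.array([1.0 / (sig ** 2 + 1e-9) for sig in sigmas])
    w /= w.sum(axis=0)
    return (w * np.array(mus)).sum(axis=0)

print(predict(np.array([[0.5, -1.0]])))
```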
Implicit and explicit forgetting: when is gist remembered?
Dorfman, J; Mandler, G
1994-08-01
Recognition (YES/NO) and stem completion (cued: complete with a word from the list; and uncued: complete with the first word that comes to mind) were tested following either semantic or non-semantic processing of a categorized input list. Item/instance information was tested by contrasting target items from the input list with new items that were categorically related to them; gist/categorical information was tested by comparing target items semantically related to the input items with unrelated new items. For both recognition and stem completion, regardless of initial processing condition, item information decayed rapidly over a period of one week. Gist information was maintained over the same period when initial processing was semantic but only in the cued condition for completion. These results are discussed in terms of dual process theory, which postulates activation/integration of a representation as primarily relevant to implicit item information and elaboration of a representation as mainly relevant to semantic (i.e. categorical) information.
Wu, Xianhua; Wei, Guo; Yang, Lingjuan; Guo, Ji; Lu, Huaguo; Chen, Yunfeng; Sun, Jian
2014-01-01
Concentrating on the consuming coefficient, partition coefficient, and Leontief inverse matrix, relevant concepts and algorithms are developed for estimating the impact of meteorological services, including the associated (indirect, complete) economic effects. Subsequently, quantitative estimations are obtained for the meteorological services in Jiangxi province by utilizing the input-output method. It is found that economic losses are noticeably reduced by the preventive strategies developed from both the meteorological information and the internal relevance (interdependency) in the industrial economic system. Another finding is that the input ratio in the complete economic effect of meteorological services ranges from about 1:108.27 to 1:183.06, remarkably different from a previous estimation based on the Delphi method (1:30 to 1:51). In particular, the economic effects of meteorological services are higher for nontraditional users in manufacturing, wholesale and retail trades, the services sector, tourism, and culture and art, and lower for traditional users in agriculture, forestry, livestock, fishery, and construction industries.
Astrobiological complexity with probabilistic cellular automata.
Vukotić, Branislav; Ćirković, Milan M
2012-08-01
The search for extraterrestrial life and intelligence constitutes one of the major endeavors in science, but it has so far been quantitatively modeled only rarely, and then in a cursory and superficial fashion. We argue that probabilistic cellular automata (PCA) represent the best quantitative framework for modeling the astrobiological history of the Milky Way and its Galactic Habitable Zone. The relevant astrobiological parameters are to be modeled as the elements of the input probability matrix for the PCA kernel. With the underlying simplicity of the cellular automata constructs, this approach enables a quick analysis of the large and ambiguous space of input parameters. We perform a simple clustering analysis of typical astrobiological histories with a "Copernican" choice of input parameters and discuss the relevant boundary conditions of practical importance for planning and guiding empirical astrobiological and SETI projects. In addition to showing how the present framework is adaptable to more complex situations and updated observational databases from current and near-future space missions, we demonstrate how numerical results could offer a cautious rationale for the continuation of practical SETI searches.
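A toy version of such a probabilistic cellular automaton is sketched below; the four states and the input probability matrix are invented for illustration, and neighbour interactions, which a full astrobiological kernel would include, are omitted.

```python
# Toy PCA in the spirit described above: each Galactic site is
# 0 = dead, 1 = simple life, 2 = complex life, 3 = technological
# civilization, and evolves stochastically via an input probability matrix.
import numpy as np

rng = np.random.default_rng(42)
P = np.array([[0.990, 0.010, 0.000, 0.000],   # row = current state,
              [0.001, 0.989, 0.010, 0.000],   # column = next state;
              [0.001, 0.000, 0.989, 0.010],   # each row sums to 1
              [0.002, 0.000, 0.000, 0.998]])

grid = np.zeros((100, 100), dtype=int)        # initially lifeless Galaxy

for step in range(1000):
    flat = grid.ravel()
    u = rng.random(flat.size)
    # inverse-CDF sampling of the next state from the row given by each cell
    cum = P[flat].cumsum(axis=1)
    grid = (u[:, None] < cum).argmax(axis=1).reshape(grid.shape)

print("final state counts:", np.bincount(grid.ravel(), minlength=4))
```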
Balderas-Díaz, Sara; Martínez, M Pilar; Guerrero-Contreras, Gabriel; Miró, Elena; Benghazi, Kawtar; Sánchez, Ana I; Garrido, José Luis; Prados, Germán
2017-03-23
Although sleep alterations can be an important factor contributing to the clinical state of Systemic Lupus Erythematosus (SLE), there are no studies that adequately assess sleep quality in this type of disease. The aim of this work is to analyse the sleep quality of SLE patients based on more objective information provided by actigraphy and mobile systems. The idea is to carry out a comprehensive study by analysing how environmental conditions and factors can affect sleep quality. In traditional methods, the information for assessing sleep quality is obtained through questionnaires. In this work, a novel method is proposed that combines these questionnaires, which provide valuable but subjective information, with actigraphy and a mobile system that collects more objective information about the patient and their environment. The method provides mechanisms to detect how sleep hygiene could be associated directly with the sleep quality of the subjects, in order to provide a custom intervention to SLE patients. Moreover, this alternative provides ease of use and non-intrusive ICT (Information and Communication Technology) through a wristband and an mHealth system. The mHealth system has been developed for sensing environmental conditions. It consists of a mobile device with built-in sensors providing input data about the bedroom environment during sleep, and a set of services of the Environmental Monitoring System for properly managing the configuration, registration and fusion of those input data. In previous studies, this information has never been taken into account, although it could be relevant in the case of SLE patients. The sample is composed of 9 women with SLE and 11 matched controls, with mean ages of 35.78 and 32.18, respectively. Demographic and clinical variables between SLE patients and healthy controls are compared using the Fisher exact test and the Mann-Whitney U test. Relationships between psychological variables, actigraphy measures, and variables related to environmental conditions are analysed with Spearman's rank correlation coefficient. The SLE group showed poorer sleep quality, and more pain intensity, fatigue and depression than the healthy controls. No significant differences between SLE women and healthy controls were found in the actigraphy measures. However, the fusion of the environmental-condition measures collected by the mobile system with actigraphy showed that light and, more specifically, temperature have a direct relation with several actigraphy measures related to sleep quality. This result should be emphasized because sleep problems are usually assessed through self-reported measures, which had not revealed this association. Moreover, there are no previous studies that analyse these aspects in the bedroom environments of SLE patients directly from objective measures. The results indicate the need to complement the subjective evaluation of sleep with objective measures. The use of actigraphy in combination with a new mHealth system provides a complete assessment, especially relevant to chronic conditions such as SLE. Both systems incorporate this objective information directly from objective measures in a non-intrusive way. Moreover, the measures of bedroom environmental variables provide useful and relevant clinical information to assess what is happening daily and not just occasionally. This could lead to more customized interventions and allow the treatment to be adapted to each individual.
Two SPSS programs for interpreting multiple regression results.
Lorenzo-Seva, Urbano; Ferrando, Pere J; Chico, Eliseo
2010-02-01
When multiple regression is used in explanation-oriented designs, it is very important to determine both the usefulness of the predictor variables and their relative importance. Standardized regression coefficients are routinely provided by commercial programs. However, they generally function rather poorly as indicators of relative importance, especially in the presence of substantially correlated predictors. We provide two user-friendly SPSS programs that implement currently recommended techniques and recent developments for assessing the relevance of the predictors. The programs also allow the user to take into account the effects of measurement error. The first program, MIMR-Corr.sps, uses a correlation matrix as input, whereas the second program, MIMR-Raw.sps, uses the raw data and computes bootstrap confidence intervals of different statistics. The SPSS syntax, a short manual, and data files related to this article are available as supplemental materials from http://brm.psychonomic-journals.org/content/supplemental.
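The programs themselves are SPSS syntax, but the kind of output MIMR-Raw.sps produces can be approximated in Python as below (a hedged analogue, not the authors' code): standardized coefficients computed from raw data with bootstrap percentile confidence intervals.

```python
# Rough Python analogue of bootstrap CIs for standardized regression
# coefficients (illustrative data with substantially correlated predictors).
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 300
X = rng.normal(size=(n, 3))
X[:, 1] += 0.7 * X[:, 0]                            # correlated predictors
y = X @ np.array([0.5, 0.3, 0.0]) + rng.normal(size=n)

def std_betas(X, y):
    # z-score predictors and criterion, then fit OLS: coefficients are betas
    Xz = (X - X.mean(0)) / X.std(0)
    yz = (y - y.mean()) / y.std()
    return LinearRegression().fit(Xz, yz).coef_

boot = np.array([std_betas(X[idx], y[idx])
                 for idx in (rng.integers(0, n, n) for _ in range(2000))])
lo, hi = np.percentile(boot, [2.5, 97.5], axis=0)
for j, (b, l, h) in enumerate(zip(std_betas(X, y), lo, hi)):
    print(f"beta_{j}: {b:+.3f}  95% CI [{l:+.3f}, {h:+.3f}]")
```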
A quantum causal discovery algorithm
NASA Astrophysics Data System (ADS)
Giarmatzi, Christina; Costa, Fabio
2018-03-01
Finding a causal model for a set of classical variables is now a well-established task—but what about the quantum equivalent? Even the notion of a quantum causal model is controversial. Here, we present a causal discovery algorithm for quantum systems. The input to the algorithm is a process matrix describing correlations between quantum events. Its output consists of different levels of information about the underlying causal model. Our algorithm determines whether the process is causally ordered by grouping the events into causally ordered non-signaling sets. It detects if all relevant common causes are included in the process, which we label Markovian, or alternatively if some causal relations are mediated through some external memory. For a Markovian process, it outputs a causal model, namely the causal relations and the corresponding mechanisms, represented as quantum states and channels. Our algorithm opens the route to more general quantum causal discovery methods.
Hammerstrom, Donald J.
2013-10-15
A method for managing the charging and discharging of batteries wherein at least one battery is connected to a battery charger, and the battery charger is connected to a power supply. A plurality of controllers in communication with one another are provided, each of the controllers monitoring a subset of input variables. A set of charging constraints may then be generated for each controller as a function of its subset of input variables. A set of objectives for each controller may also be generated. A preferred charge rate for each controller is generated as a function of the set of objectives, the charging constraints, or both. An actual charge rate is then determined using an algorithm that accounts for each of the controllers' preferred charge rates and/or does not violate any of the charging constraints. Current flow between the battery and the battery charger is then provided at the actual charge rate.
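A minimal sketch of this coordination logic follows; the controller names, the averaging of preferences, and the most-restrictive-cap rule are illustrative assumptions, since the patent abstract does not fix the algorithm.

```python
# Hedged sketch: each controller proposes a preferred charge rate from its
# own inputs and constraints; the actual rate accounts for all preferences
# while honouring every constraint (here: the most restrictive cap).
from dataclasses import dataclass

@dataclass
class Controller:
    name: str
    preferred_rate_a: float   # preferred charge rate, amperes
    max_rate_a: float         # charging constraint, amperes

controllers = [
    Controller("battery-health", preferred_rate_a=6.0, max_rate_a=10.0),
    Controller("grid-load",      preferred_rate_a=4.0, max_rate_a=8.0),
    Controller("thermal",        preferred_rate_a=5.0, max_rate_a=5.5),
]

preferred = sum(c.preferred_rate_a for c in controllers) / len(controllers)
actual_rate = min(preferred, *(c.max_rate_a for c in controllers))
print(f"actual charge rate: {actual_rate:.1f} A")
```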
Tøndel, Kristin; Indahl, Ulf G; Gjuvsland, Arne B; Vik, Jon Olav; Hunter, Peter; Omholt, Stig W; Martens, Harald
2011-06-01
Deterministic dynamic models of complex biological systems contain a large number of parameters and state variables, related through nonlinear differential equations with various types of feedback. A metamodel of such a dynamic model is a statistical approximation model that maps variation in parameters and initial conditions (inputs) to variation in features of the trajectories of the state variables (outputs) throughout the entire biologically relevant input space. A sufficiently accurate mapping can be exploited both instrumentally and epistemically. Multivariate regression methodology is a commonly used approach for emulating dynamic models. However, when the input-output relations are highly nonlinear or non-monotone, a standard linear regression approach is prone to give suboptimal results. We therefore hypothesised that a more accurate mapping can be obtained by locally linear or locally polynomial regression. We present here a new method for local regression modelling, Hierarchical Cluster-based PLS regression (HC-PLSR), where fuzzy C-means clustering is used to separate the data set into parts according to the structure of the response surface. We compare the metamodelling performance of HC-PLSR with polynomial partial least squares regression (PLSR) and ordinary least squares (OLS) regression on various systems: six different gene regulatory network models with various types of feedback, a deterministic mathematical model of the mammalian circadian clock and a model of the mouse ventricular myocyte function. Our results indicate that multivariate regression is well suited for emulating dynamic models in systems biology. The hierarchical approach turned out to be superior to both polynomial PLSR and OLS regression in all three test cases. The advantage, in terms of explained variance and prediction accuracy, was largest in systems with highly nonlinear functional relationships and in systems with positive feedback loops. HC-PLSR is a promising approach for metamodelling in systems biology, especially for highly nonlinear or non-monotone parameter to phenotype maps. The algorithm can be flexibly adjusted to suit the complexity of the dynamic model behaviour, inviting automation in the metamodelling of complex systems.
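A simplified sketch of the cluster-then-regress idea follows; KMeans stands in for the fuzzy C-means clustering used in HC-PLSR, and the data are synthetic.

```python
# Simplified HC-PLSR-style metamodel: partition the input space into
# clusters, fit one local PLS regression per cluster, and route new
# samples to the local model of their nearest cluster.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(400, 3))
# non-monotone response surface, the regime where local models help
y = np.where(X[:, 0] > 0, X[:, 0] ** 2, -X[:, 1]) + 0.1 * rng.normal(size=400)

km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
local = {k: PLSRegression(n_components=2).fit(X[km.labels_ == k],
                                              y[km.labels_ == k])
         for k in range(3)}

def predict(X_new):
    labels = km.predict(X_new)
    return np.array([local[k].predict(x[None, :])[0, 0]
                     for k, x in zip(labels, X_new)])

print(predict(np.array([[1.0, 0.5, -0.3], [-1.0, 1.2, 0.0]])))
```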
Modelling of Cosmic Molecular Masers: Introduction to a Computation Cookbook
NASA Astrophysics Data System (ADS)
Sobolev, Andrej M.; Gray, Malcolm D.
2012-07-01
Numerical modeling of molecular masers is necessary in order to understand their nature and diagnostic capabilities. Model construction requires elaboration of a basic description which allows computation, that is, a definition of the parameter space and the basic physical relations. Usually, this requires additional thorough studies that can consist of the following stages/parts: relevant molecular spectroscopy and collisional rate coefficients; conditions in and around the masing region (that part of space where population inversion is realized); geometry and size of the masing region (including the question of whether maser spots are discrete clumps or line-of-sight correlations in a much bigger region); and propagation of maser radiation. The output of maser computer modeling can take the following forms: exploration of parameter space (where do inversions appear in particular maser transitions and their combinations, which parameter values describe a 'typical' source, and so on); modeling of individual sources (line flux ratios, spectra, images and their variability); analysis of the pumping mechanism; and predictions (new maser transitions, correlations in the variability of different maser transitions, and the like). The schemes described here (constituents and hierarchy) for the model input and output are based mainly on the experience of the authors and make no claim to be dogmatic.
Long-Term Prediction of the Arctic Ionospheric TEC Based on Time-Varying Periodograms
Liu, Jingbin; Chen, Ruizhi; Wang, Zemin; An, Jiachun; Hyyppä, Juha
2014-01-01
Knowledge of the polar ionospheric total electron content (TEC) and its future variations is of scientific and engineering relevance. In this study, a new method is developed to predict the Arctic mean TEC on the scale of a solar cycle using previous data covering 14 years. The Arctic TEC is derived from global positioning system measurements using the spherical cap harmonic analysis mapping method. The study indicates that the variability of the Arctic TEC results in highly time-varying periodograms, which are utilized for prediction in the proposed method. The TEC time series is divided into two components: periodic oscillations and the average TEC. The newly developed method of TEC prediction is based on an extrapolation method that requires no input of physical observations over the time interval of prediction, and it is performed in both temporally backward and forward directions by summing the extrapolations of the two components. The backward prediction indicates that the Arctic TEC variability includes a 9-year period for the study duration, in addition to the well-established periods. The long-term prediction has an uncertainty of 4.8–5.6 TECU for different period sets. PMID:25369066
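The extrapolation idea can be sketched as below; the chosen periods, the daily cadence and the data are invented stand-ins, and the real method works with time-varying periodograms rather than a single global fit.

```python
# Schematic periodic-component extrapolation: fit sinusoids at detected
# periods plus a mean level by least squares, then evaluate the fitted
# model outside the data window (forward prediction).
import numpy as np

t = np.arange(14 * 365.25)                          # ~14 years, daily samples
periods = [365.25, 27.0, 9 * 365.25]                # assumed detected periods (days)
true = (12 + 5 * np.sin(2 * np.pi * t / 365.25)
        + 2 * np.sin(2 * np.pi * t / (9 * 365.25)))
tec = true + np.random.default_rng(0).normal(0, 1, t.size)  # "observed" TEC

def design(t):
    cols = [np.ones_like(t)]                        # average-TEC component
    for p in periods:                               # periodic components
        cols += [np.sin(2 * np.pi * t / p), np.cos(2 * np.pi * t / p)]
    return np.column_stack(cols)

coef, *_ = np.linalg.lstsq(design(t), tec, rcond=None)

t_future = np.arange(t[-1] + 1, t[-1] + 3 * 365)    # extrapolate ~3 years
tec_pred = design(t_future) @ coef
print(f"predicted mean TEC over next ~3 yr: {tec_pred.mean():.1f} TECU")
```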
Improving permafrost distribution modelling using feature selection algorithms
NASA Astrophysics Data System (ADS)
Deluigi, Nicola; Lambiel, Christophe; Kanevski, Mikhail
2016-04-01
The availability of an increasing number of spatial data on the occurrence of mountain permafrost allows the employment of machine learning (ML) classification algorithms for modelling the distribution of the phenomenon. One of the major problems when dealing with a high-dimensional dataset is the number of input features (variables) involved. Application of ML classification algorithms to this large number of variables leads to the risk of overfitting, with the consequence of poor generalization/prediction. For this reason, applying feature selection (FS) techniques helps simplify the number of factors required and improves the knowledge of the adopted features and their relation to the studied phenomenon. Moreover, removing irrelevant or redundant variables from the dataset effectively improves the quality of the ML prediction. This research deals with a comparative analysis of permafrost distribution models supported by FS variable importance assessment. The input dataset (dimension = 20-25, 10 m spatial resolution) was constructed using landcover maps, climate data and DEM-derived variables (altitude, aspect, slope, terrain curvature, solar radiation, etc.). It was completed with permafrost evidence (geophysical and thermal data and rock glacier inventories) that serves as training data. The FS algorithms used provided information about variables that appeared less statistically important for permafrost presence/absence. Three different algorithms were compared: Information Gain (IG), Correlation-based Feature Selection (CFS) and Random Forest (RF). IG is a filter technique that evaluates the worth of a predictor by measuring the information gain with respect to permafrost presence/absence. Conversely, CFS is a wrapper technique that evaluates the worth of a subset of predictors by considering the individual predictive ability of each variable along with the degree of redundancy between them. Finally, RF is a ML algorithm that performs FS as part of its overall operation. It operates by constructing a large collection of decorrelated classification trees, and then predicts the permafrost occurrence through a majority vote. With the so-called out-of-bag (OOB) error estimate, the classification of permafrost data can be validated and the contribution of each predictor can be assessed. The performance of the compared permafrost distribution models (computed on independent testing sets) increased with the application of FS algorithms on the original dataset, as irrelevant or redundant variables were removed. As a consequence, the process provided faster and more cost-effective predictors and a better understanding of the underlying structures residing in the permafrost data. Our work demonstrates the usefulness of a feature selection step prior to applying a machine learning algorithm. In fact, permafrost predictors could be ranked not only based on their heuristic and subjective importance (expert knowledge), but also based on their statistical relevance in relation to the permafrost distribution.
Ambrose, Sophie E; Walker, Elizabeth A; Unflat-Berry, Lauren M; Oleson, Jacob J; Moeller, Mary Pat
2015-01-01
The primary objective of this study was to examine the quantity and quality of caregiver talk directed to children who are hard of hearing (CHH) compared with children with normal hearing (CNH). For the CHH only, the study explored how caregiver input changed as a function of child age (18 months versus 3 years), which child and family factors contributed to variance in caregiver linguistic input at 18 months and 3 years, and how caregiver talk at 18 months related to child language outcomes at 3 years. Participants were 59 CNH and 156 children with bilateral, mild-to-severe hearing loss. When children were approximately 18 months and/or 3 years of age, caregivers and children participated in a 5-min semistructured, conversational interaction. Interactions were transcribed and coded for two features of caregiver input representing quantity (number of total utterances and number of total words) and four features representing quality (number of different words, mean length of utterance in morphemes, proportion of utterances that were high level, and proportion of utterances that were directing). In addition, at the 18-month visit, parents completed a standardized questionnaire regarding their child's communication development. At the 3-year visit, a clinician administered a standardized language measure. At the 18-month visit, the CHH were exposed to a greater proportion of directing utterances than the CNH. At the 3-year visit, there were significant differences between the CNH and CHH for number of total words and all four of the quality variables, with the CHH being exposed to fewer words and lower quality input. Caregivers generally provided higher quality input to CHH at the 3-year visit compared with the 18-month visit. At the 18-month visit, quantity variables, but not quality variables, were related to several child and family factors. At the 3-year visit, the variable most strongly related to caregiver input was child language. Longitudinal analyses indicated that quality, but not quantity, of caregiver linguistic input at 18 months was related to child language abilities at 3 years, with directing utterances accounting for significant unique variance in child language outcomes. Although caregivers of CHH increased their use of quality features of linguistic input over time, the differences when compared with CNH suggest that some caregivers may need additional support to provide their children with optimal language learning environments. This is particularly important given the relationships that were identified between quality features of caregivers' linguistic input and children's language abilities. Family supports should include a focus on developing a style that is conversational eliciting as opposed to directive.
A Framework to Guide the Assessment of Human-Machine Systems.
Stowers, Kimberly; Oglesby, James; Sonesh, Shirley; Leyva, Kevin; Iwig, Chelsea; Salas, Eduardo
2017-03-01
We have developed a framework for guiding measurement in human-machine systems. The assessment of safety and performance in human-machine systems often relies on direct measurement, such as tracking reaction time and accidents. However, safety and performance emerge from the combination of several variables. The assessment of precursors to safety and performance are thus an important part of predicting and improving outcomes in human-machine systems. As part of an in-depth literature analysis involving peer-reviewed, empirical articles, we located and classified variables important to human-machine systems, giving a snapshot of the state of science on human-machine system safety and performance. Using this information, we created a framework of safety and performance in human-machine systems. This framework details several inputs and processes that collectively influence safety and performance. Inputs are divided according to human, machine, and environmental inputs. Processes are divided into attitudes, behaviors, and cognitive variables. Each class of inputs influences the processes and, subsequently, outcomes that emerge in human-machine systems. This framework offers a useful starting point for understanding the current state of the science and measuring many of the complex variables relating to safety and performance in human-machine systems. This framework can be applied to the design, development, and implementation of automated machines in spaceflight, military, and health care settings. We present a hypothetical example in our write-up of how it can be used to aid in project success.
Creating a non-linear total sediment load formula using polynomial best subset regression model
NASA Astrophysics Data System (ADS)
Okcu, Davut; Pektas, Ali Osman; Uyumaz, Ali
2016-08-01
The aim of this study is to derive a new total sediment load formula which is more accurate and has fewer application constraints than the well-known formulae of the literature. The five best-known stream power concept sediment formulae, which are approved by ASCE, are used for benchmarking on a wide range of datasets that includes both field and flume (lab) observations. The dimensionless parameters of these widely used formulae are used as inputs in a new regression approach, called Polynomial Best Subset Regression (PBSR) analysis. The aim of the PBSR analysis is fitting and testing all possible combinations of the input variables and selecting the best subset. All the input variables, together with their second and third powers, are included in the regression to test the possible relations between the explanatory variables and the dependent variable. While selecting the best subset, a multistep approach is used that depends on the significance values and also on the degree of multicollinearity of the inputs. The new formula is compared to the others on a holdout dataset, and detailed performance investigations are conducted for the field and lab subsets within this holdout data. Different goodness-of-fit statistics are used as they represent different perspectives on model accuracy. After the detailed comparisons were carried out, we identified the most accurate equation, which is also applicable to both flume and river data. In particular, on the field dataset the prediction performance of the proposed formula outperformed the benchmark formulations.
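A stripped-down version of the best-subset search is sketched below; adjusted R² replaces the paper's multistep significance and multicollinearity screening, and the data are synthetic.

```python
# Illustrative PBSR-style search: build first-, second- and third-power
# terms of each input, then exhaustively score every subset up to a size
# limit and keep the best one.
import numpy as np
from itertools import combinations
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
X = rng.uniform(0.5, 2.0, size=(150, 3))            # dimensionless inputs
y = 2 * X[:, 0] ** 2 + 0.5 * X[:, 1] + 0.05 * rng.normal(size=150)

# candidate terms: x_j, x_j^2, x_j^3 for each input variable
terms = {f"x{j+1}^{p}": X[:, j] ** p for j in range(3) for p in (1, 2, 3)}
names = list(terms)

def adj_r2(subset):
    Z = np.column_stack([terms[n] for n in subset])
    r2 = LinearRegression().fit(Z, y).score(Z, y)
    n_obs, k = len(y), len(subset)
    return 1 - (1 - r2) * (n_obs - 1) / (n_obs - k - 1)

best = max((s for k in range(1, 4) for s in combinations(names, k)),
           key=adj_r2)
print("best subset:", best, f"adj R2 = {adj_r2(best):.3f}")
```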
NASA Astrophysics Data System (ADS)
Pamplona, Fábio Campos; Paes, Eduardo Tavares; Nepomuceno, Aguinaldo
2013-11-01
The temporal and spatial variability of dissolved inorganic nutrients (NO3-, NO2-, NH4+, PO43- and DSi), total nitrogen (TN), total phosphorus (TP), nutrient ratios, suspended particulate matter (SPM) and Chlorophyll-a (Chl-a) were evaluated for the macrotidal estuarine mangrove system of the Quatipuru river (QUATIES), east Amazon coast, North Brazil. Temporal variability was assessed by fortnightly sampling at a fixed station within the middle portion of the estuary, from November 2009 to November 2010. Spatial variability was investigated from two field surveys conducted in November 2009 (dry season) and May 2010 (rainy season), along the salinity gradient of the system. The average DIN (NO3- + NO2- + NH4+) concentration of 9 μM in the dry season was approximately threefold that of the rainy season. NH4+ was the main form of DIN in the dry season, while NO3- predominated in the rainy season. The NH4+ concentrations in the water column during the dry season are largely attributed to release by tidal wash-out of the anoxic interstitial waters of the surficial mangrove sediments. On the other hand, the higher NO3- levels during the wet season suggested that both freshwater inputs and nitrification processes in the water column acted in concert. The river PO43- concentrations (DIP < 1 μM) were low and similar throughout the year. DIN was thus responsible for the major temporal and spatial variability of the dissolved DIN:DIP (N:P) molar ratios, and nitrogen corresponded, in general, to the prime limiting nutrient for the sustenance of phytoplankton biomass in the estuary. During the dry season, P-limitation was detected in the upper estuary. PO43- adsorption to SPM was detected during the rainy season and desorption during the dry season along the salinity gradient. In general, the average Chl-a level (14.8 μg L-1) was 2.5 times higher in the rainy season than in the dry season (5.9 μg L-1). On average, Chl-a levels reached maxima at about 14 km from the estuary's mouth, but the maximum Chl-a zone was subject to dynamic displacement influenced by the tidal regime and the seasonality of freshwater input. Our results showed that the potential phytoplankton productivity in QUATIES was subject to temporal and spatial variability between N and P limitation. The mangrove forests also played a relevant role as a nutrient source, as established by the high variability of nutrient behaviour along the estuarine gradient, consequently affecting the productivity in QUATIES.
Sobol' sensitivity analysis for stressor impacts on honeybee ...
We employ Monte Carlo simulation and nonlinear sensitivity analysis techniques to describe the dynamics of a bee exposure model, VarroaPop. Daily simulations are performed of hive population trajectories, taking into account queen strength, foraging success, mite impacts, weather, colony resources, population structure, and other important variables. This allows us to test the effects of defined pesticide exposure scenarios versus controlled simulations that lack pesticide exposure. The daily resolution of the model also allows us to conditionally identify sensitivity metrics. We use the variance-based global decomposition sensitivity analysis method, Sobol', to assess first- and second-order parameter sensitivities within VarroaPop, allowing us to determine how variance in the output is attributed to each of the input variables across different exposure scenarios. Simulations with VarroaPop indicate queen strength, forager life span and pesticide toxicity parameters are consistent, critical inputs for colony dynamics. Further analysis also reveals that the relative importance of these parameters fluctuates throughout the simulation period according to the status of other inputs. Our preliminary results show that model variability is conditional and can be attributed to different parameters depending on different timescales. By using sensitivity analysis to assess model output and variability, calibrations of simulation models can be better informed to yield more
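For readers who want to reproduce this style of analysis, the standard variance-based Sobol' workflow is available in the SALib Python package. The sketch below substitutes a toy colony-response function for VarroaPop (which is not scriptable here); the parameter names and bounds are illustrative assumptions only.

```python
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

# Toy stand-in for a VarroaPop-style response (hypothetical, not the EPA model).
def colony_size(queen_strength, forager_lifespan, pesticide_tox):
    return 1e4 * queen_strength * forager_lifespan / (1.0 + 5.0 * pesticide_tox)

problem = {
    "num_vars": 3,
    "names": ["queen_strength", "forager_lifespan", "pesticide_tox"],
    "bounds": [[1.0, 5.0], [4.0, 16.0], [0.0, 1.0]],
}

X = saltelli.sample(problem, 1024)               # Saltelli sampling design
Y = np.array([colony_size(*row) for row in X])   # model evaluations
Si = sobol.analyze(problem, Y)                   # Sobol' decomposition
print(dict(zip(problem["names"], Si["S1"])))     # first-order sensitivities
print(dict(zip(problem["names"], Si["ST"])))     # total-order sensitivities
```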
Social work perspectives on human behavior.
Wodarski, J S
1993-01-01
This manuscript addresses recent developments in human behavior research that are relevant to social work practice. Specific items addressed are biological aspects of behavior, life span development, cognitive variables, the self-efficacy learning process, the perceptual process, the exchange model, group level variables, macro level variables, and gender and ethnic-racial variables. Where relevant, specific applications to social work practice are provided.
Alvarez-Meza, Andres M.; Orozco-Gutierrez, Alvaro; Castellanos-Dominguez, German
2017-01-01
We introduce Enhanced Kernel-based Relevance Analysis (EKRA), which aims to support the automatic identification of brain activity patterns using electroencephalographic recordings. EKRA is a data-driven strategy that incorporates two kernel functions to take advantage of the available joint information, associating neural responses with a given stimulus condition. A Centered Kernel Alignment functional is adjusted to learn the linear projection that best discriminates the input feature set, optimizing the required free parameters automatically. Our approach is carried out in two scenarios: (i) feature selection, by computing a relevance vector from extracted neural features to facilitate the physiological interpretation of a given brain activity task, and (ii) enhanced feature selection, which performs an additional transformation of relevant features to improve the overall identification accuracy. Accordingly, we provide an alternative feature relevance analysis strategy that improves system performance while favoring data interpretability. For validation, EKRA is tested on two well-known brain activity tasks: motor imagery discrimination and epileptic seizure detection. The obtained results show that the EKRA approach estimates a relevant representation space extracted from the provided supervised information, emphasizing the salient input features. As a result, our proposal outperforms state-of-the-art methods regarding brain activity discrimination accuracy, with the benefit of enhanced physiological interpretation of the task at hand. PMID:29056897
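The core of the approach is the Centered Kernel Alignment (CKA) functional. A minimal sketch of per-feature relevance scoring with CKA follows; it uses linear kernels and synthetic data, and illustrates the functional rather than the EKRA pipeline itself.

```python
import numpy as np

def center(K):
    """Double-center a kernel matrix."""
    n = K.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n
    return H @ K @ H

def cka(K, L):
    """Centered Kernel Alignment between two kernel matrices."""
    Kc, Lc = center(K), center(L)
    return np.sum(Kc * Lc) / (np.linalg.norm(Kc) * np.linalg.norm(Lc))

def feature_relevance(X, y):
    """Score each feature by alignment of its kernel with the label kernel."""
    L = np.equal.outer(y, y).astype(float)       # ideal kernel from class labels
    scores = []
    for j in range(X.shape[1]):
        xj = X[:, [j]]
        K = xj @ xj.T                            # linear kernel on one feature
        scores.append(cka(K, L))
    return np.array(scores)

rng = np.random.default_rng(1)
y = rng.integers(0, 2, 200)
X = rng.normal(size=(200, 5))
X[:, 0] += 2.0 * y                               # make feature 0 discriminative
print(feature_relevance(X, y).round(3))          # feature 0 should score highest
```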
NASA Astrophysics Data System (ADS)
Rahmati, Mehdi
2017-08-01
Developing accurate and reliable pedo-transfer functions (PTFs) to predict soil non-readily available characteristics is one of the topics of greatest concern in soil science, and selecting appropriate predictors is a crucial factor in PTF development. The group method of data handling (GMDH), which finds an approximate relationship between a set of input and output variables, not only provides an explicit procedure for selecting the most essential PTF input variables, but also yields more accurate and reliable estimates than other commonly applied methodologies. The current research therefore applied GMDH, in comparison with multivariate linear regression (MLR) and artificial neural networks (ANN), to develop several PTFs to predict soil cumulative infiltration on a point basis at specific times (0.5-45 min) using soil readily available characteristics (RACs). Soil infiltration curves, as well as several soil RACs including soil primary particles (clay (CC), silt (Si), and sand (Sa)), saturated hydraulic conductivity (Ks), bulk (Db) and particle (Dp) densities, organic carbon (OC), wet-aggregate stability (WAS), electrical conductivity (EC), and soil antecedent (θi) and field-saturated (θfs) water contents, were measured at 134 different points in the Lighvan watershed, northwest Iran. Applying the GMDH, MLR, and ANN methodologies, several PTFs were developed to predict cumulative infiltration using two sets of selected soil RACs, including and excluding Ks. According to the test data, the PTFs developed by the GMDH and MLR procedures using all soil RACs including Ks gave more accurate (E values of 0.673-0.963) and reliable (CV values lower than 11 percent) predictions of cumulative infiltration at the specific time steps. In contrast, the ANN procedure had lower accuracy (E values of 0.356-0.890) and reliability (CV values up to 50 percent) than GMDH and MLR. The results also revealed that excluding Ks from the input variable list caused around a 30 percent decrease in PTF accuracy for all applied procedures. However, Ks exclusion resulted in more practical PTFs, especially for the GMDH network, since the remaining input variables are less time-consuming to measure than Ks. In general, it is concluded that GMDH provides more accurate and reliable estimates of cumulative infiltration (a non-readily available soil characteristic) with a minimum set of input variables (2-4 input variables) and can be a promising strategy for modeling soil infiltration, combining the advantages of the ANN and MLR methodologies.
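GMDH builds successive layers of pairwise quadratic polynomial neurons and retains only those that generalize best on a validation split, which is what gives it the built-in input selection described above. A one-layer sketch follows, with stand-in data in place of the measured RACs; names and sizes are illustrative.

```python
import itertools
import numpy as np

def fit_pair(xi, xj, y):
    """Fit the classic GMDH quadratic: y ~ a0+a1*xi+a2*xj+a3*xi^2+a4*xj^2+a5*xi*xj."""
    A = np.column_stack([np.ones_like(xi), xi, xj, xi**2, xj**2, xi*xj])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef

def gmdh_layer(X_tr, y_tr, X_va, y_va, keep=4):
    """Evaluate every input pair; keep the neurons with lowest validation error."""
    cands = []
    for i, j in itertools.combinations(range(X_tr.shape[1]), 2):
        c = fit_pair(X_tr[:, i], X_tr[:, j], y_tr)
        A_va = np.column_stack([np.ones(len(y_va)), X_va[:, i], X_va[:, j],
                                X_va[:, i]**2, X_va[:, j]**2, X_va[:, i]*X_va[:, j]])
        err = np.mean((y_va - A_va @ c) ** 2)
        cands.append((err, (i, j), c))
    cands.sort(key=lambda t: t[0])
    return cands[:keep]   # surviving neurons; their pairs are the selected inputs

rng = np.random.default_rng(2)
X = rng.uniform(size=(120, 6))          # stand-ins for RACs (clay, sand, Db, ...)
y = 3*X[:, 0]*X[:, 1] + X[:, 3]**2 + rng.normal(0, 0.02, 120)
survivors = gmdh_layer(X[:80], y[:80], X[80:], y[80:])
print([pair for _, pair, _ in survivors])   # most relevant input pairs first
```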
Pujol, Laure; Johnson, Nicholas Brian; Magras, Catherine; Albert, Isabelle; Membré, Jeanne-Marie
2015-10-15
In a previous study, a quantitative microbial exposure assessment (QMEA) model applied to an aseptic-UHT food process was developed [Pujol, L., Albert, I., Magras, C., Johnson, N. B., Membré, J. M. Probabilistic exposure assessment model to estimate aseptic UHT product failure rate. 2015 International Journal of Food Microbiology. 192, 124-141]. It quantified the Sterility Failure Rate (SFR) associated with Bacillus cereus and Geobacillus stearothermophilus per process module (nine modules in total, from raw material reception to end-product storage). Previously, the probabilistic model inputs were set by experts (using knowledge and in-house data), but only the variability dimension was taken into account. The model was then improved using expert elicitation knowledge in two ways. First, the model was refined by adding the uncertainty dimension to the probabilistic inputs, enabling a second-order Monte Carlo analysis. The eight following inputs, and their impact on SFR, are presented in detail in this study: the D-value for each bacterium of interest (B. cereus and G. stearothermophilus) associated with the inactivation model for the UHT treatment step, i.e., two inputs; the log reduction (decimal reduction) number associated with the inactivation model for the packaging sterilization step for each bacterium and each part of the packaging (product container and sealing component), i.e., four inputs; and the bacterial spore air load of the aseptic tank and the filler cabinet rooms, i.e., two inputs. Second, the model was improved by leveraging expert knowledge to develop the existing model further. The proportion of bacteria in the product which settled on the surface of pipes (between the UHT treatment and the aseptic tank on one hand, and between the aseptic tank and the filler cabinet on the other hand), leading to possible biofilm formation for each bacterium, was better characterized. It was modeled as a function of the hygienic design level of the aseptic-UHT line: the experts provided the model structure and most of the model parameter values. The mean SFR was estimated at 10×10^-8 (95% Confidence Interval=[0×10^-8; 350×10^-8]) and 570×10^-8 (95% CI=[380×10^-8; 820×10^-8]) for B. cereus and G. stearothermophilus, respectively. These estimations were more accurate (since the confidence interval was provided) than those given by the model with only variability (for which the estimates were 15×10^-8 and 580×10^-8 for B. cereus and G. stearothermophilus, respectively). The updated model outputs were also compared with those obtained when inputs were described by a generic distribution, without specific information related to the case study. Results showed that using a generic distribution can lead to unrealistic estimations (e.g., 3,181,000 product units contaminated by G. stearothermophilus among 10^8 product units produced) and emphasized the added value of eliciting information from experts with the relevant specialist field knowledge. Copyright © 2015 Elsevier B.V. All rights reserved.
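The second-order Monte Carlo structure described above separates uncertainty (an outer loop over expert-elicited hyperparameters) from variability (an inner loop over product units). A toy sketch of that nesting follows; the distributions and numbers are invented and do not reproduce the published QMEA model.

```python
import numpy as np

rng = np.random.default_rng(3)
N_UNC, N_VAR = 200, 5000      # outer (uncertainty) and inner (variability) draws

sfr_means = []
for _ in range(N_UNC):
    # Outer loop: draw uncertain hyperparameters (expert-elicited in the study;
    # the values below are purely illustrative).
    log_reduction_mu = rng.normal(6.0, 0.5)     # mean log reduction at UHT step
    spore_load_mu = rng.normal(2.0, 0.3)        # mean log10 spores per unit

    # Inner loop: unit-to-unit variability given those hyperparameters.
    log_red = rng.normal(log_reduction_mu, 0.8, N_VAR)
    load = 10 ** rng.normal(spore_load_mu, 0.5, N_VAR)
    survivors = load * 10.0 ** (-log_red)
    # A unit fails sterility if at least one spore survives (Poisson chance).
    p_fail = 1.0 - np.exp(-survivors)
    sfr_means.append(p_fail.mean())

sfr_means = np.array(sfr_means)
print("mean SFR:", sfr_means.mean())
print("95% CI over uncertainty:", np.percentile(sfr_means, [2.5, 97.5]))
```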
Response variables for evaluation of the effectiveness of conservation corridors.
Gregory, Andrew J; Beier, Paul
2014-06-01
Many studies have evaluated effectiveness of corridors by measuring species presence in and movement through small structural corridors. However, few studies have assessed whether these response variables are adequate for assessing whether the conservation goals of the corridors have been achieved or considered the costs or lag times involved in measuring the response variables. We examined 4 response variables (presence of the focal species in the corridor, interpatch movement via the corridor, gene flow, and patch occupancy) with respect to 3 criteria (relevance to conservation goals; lag time, i.e., the fewest generations at which a positive response to the corridor might be evident with a particular variable; and the cost of a study when applying a particular variable). The presence variable had the least relevance to conservation goals, no lag time advantage compared with interpatch movement, and only a moderate cost advantage over interpatch movement or gene flow. Movement of individual animals between patches was the most appropriate response variable for a corridor intended to provide seasonal migration, but it was not an appropriate response variable for corridor dwellers, and for passage species it was only moderately relevant to the goals of gene flow, demographic rescue, and recolonization. Response variables related to gene flow provided a good trade-off among cost, relevance to conservation goals, and lag time. Nonetheless, the lag time of 10-20 generations means that evaluation of conservation corridors cannot occur until a few decades after a corridor has been established. Response variables related to occupancy were most relevant to conservation goals, but the lag time and costs to detect corridor effects on occupancy were much greater than the lag time and costs to detect corridor effects on gene flow. © 2014 Society for Conservation Biology.
How model and input uncertainty impact maize yield simulations in West Africa
NASA Astrophysics Data System (ADS)
Waha, Katharina; Huth, Neil; Carberry, Peter; Wang, Enli
2015-02-01
Crop models are common tools for simulating crop yields and crop production in studies on food security and global change. Various uncertainties, however, exist not only in the model design and model parameters, but also, and perhaps even more importantly, in the soil, climate and management input data. We analyze the performance of the point-scale crop model APSIM and the global-scale crop model LPJmL with different climate and soil conditions under different agricultural management in the low-input maize-growing areas of Burkina Faso, West Africa. We test the models' response to different levels of input information, from little to detailed information on soil, climate (1961-2000) and agricultural management, and compare the models' ability to represent the observed spatial (between locations) and temporal (between years) variability in crop yields. We found that the resolution of soil, climate and management information influences the simulated crop yields in both models. However, the difference between models is larger than between input datasets, and larger between simulations with different climate and management information than between simulations with different soil information. The observed spatial variability can be represented well by both models, even with little information on soils and management, but APSIM simulates a higher variation between single locations than LPJmL. The agreement of simulated and observed temporal variability is lower due to non-climatic factors, e.g., investment in agricultural research and development between 1987 and 1991 in Burkina Faso, which resulted in a doubling of maize yields. The findings of our study highlight the importance of scale and model choice and show that the most detailed input data do not necessarily improve model performance.
Konstantinidis, Spyridon; Titchener-Hooker, Nigel; Velayudhan, Ajoy
2017-08-01
Bioprocess development studies often involve the investigation of numerical and categorical inputs via the adoption of Design of Experiments (DoE) techniques. An attractive alternative is the deployment of a grid-compatible Simplex variant which has been shown to yield optima rapidly and consistently. In this work, the method is combined with dummy variables and it is deployed in three case studies wherein the search spaces are composed of both categorical and numerical inputs, a situation intractable by traditional Simplex methods. The first study employs in silico data and lays out the dummy variable methodology. The latter two employ experimental data from chromatography-based studies performed with the filter-plate and miniature column High Throughput (HT) techniques. The solute of interest in the former case study was a monoclonal antibody, whereas the latter dealt with the separation of a binary system of model proteins. The implemented approach prevented the stranding of the Simplex method at local optima, due to the arbitrary handling of the categorical inputs, and allowed for the concurrent optimization of numerical and categorical, multilevel and/or dichotomous, inputs. The deployment of the Simplex method, combined with dummy variables, was therefore entirely successful in identifying and characterizing global optima in all three case studies. The Simplex-based method was further shown to be of equivalent efficiency to a DoE-based approach, represented here by D-Optimal designs. Such an approach failed, however, to both capture trends and identify optima, and led to poor operating conditions. It is suggested that the Simplex variant is suited to development activities involving numerical and categorical inputs in early bioprocess development. © 2017 The Authors. Biotechnology Journal published by WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
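The dummy-variable idea can be illustrated by relaxing a categorical input to continuous coordinates that are decoded by argmax at evaluation time. The sketch below drives this with SciPy's ordinary Nelder-Mead simplex as a stand-in for the paper's grid-compatible variant; note that this naive handling is exactly what can strand a simplex at local optima, which the authors' method guards against. The response function and factor names are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize

RESINS = ["resin_A", "resin_B", "resin_C"]     # hypothetical categorical input

def yield_model(ph, salt_mM, resin):
    """Toy chromatography response surface; not experimental data."""
    base = {"resin_A": 0.70, "resin_B": 0.85, "resin_C": 0.60}[resin]
    return base - 0.02 * (ph - 7.0) ** 2 - 1e-5 * (salt_mM - 150.0) ** 2

def objective(z):
    # z = [ph, salt, d1, d2, d3]; dummy coordinates decode to a category.
    ph, salt = z[0], z[1]
    resin = RESINS[int(np.argmax(z[2:]))]
    return -yield_model(ph, salt, resin)        # minimize negative yield

z0 = np.array([6.0, 100.0, 1.0, 0.0, 0.0])
res = minimize(objective, z0, method="Nelder-Mead",
               options={"xatol": 1e-3, "fatol": 1e-6, "maxiter": 2000})
best_resin = RESINS[int(np.argmax(res.x[2:]))]
print(best_resin, res.x[:2], -res.fun)
```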
Hipshman, L
1999-08-01
This study explored the attitudes of biomedical science students (medical students) in a non-Western setting towards three medical ethics concepts that are based on fundamental Western-culture ethical principles. A dichotomous (agree/disagree) response questionnaire was constructed using Western ethnocentric culture (WEC) based perspectives of informed consent, confidentiality, and substitute decision-making. Hypothesized WEC-biased responses were assigned to the questionnaire's questions or propositions. A number of useful responses (169) were obtained from a large, cross-sectional, convenience sample of the MBChB students at the University of Zimbabwe Medical School. Statistical analysis described the differences between the students' response patterns and the hypothesized WEC-biased responses. The effect of the nine independent variables on selected dependent variables (responses to certain questionnaire questions) was analyzed by stepwise logistic regression. Students concurred with the hypothesized WEC-biased responses for two-thirds of the questionnaire items. This agreement included support for the role of legal advocacy in the substitute decision-making process. The students disagreed with the hypothesized WEC-biased responses in several important medical ethics aspects. Most notably, the students indicated that persons with mental dysfunctions, as a class, were properly considered incompetent to make treatment decisions. None of the studied independent variables was often associated with students' responses, but training year was more frequently implicated than either ethnicity or gender. In order to develop internationally and culturally relevant medical ethics standards, non-Western perspectives ought to be acknowledged and incorporated. Two main areas for further effort include: curriculum development in ethics reasoning and related clinical (medico-legal) decision-making processes that would be relevant to medical students from various cultures; and the testing of models that could increase legal-system input in the clinical process in societies with limited jurisprudence resources.
Sympathovagal imbalance in hyperthyroidism.
Burggraaf, J; Tulen, J H; Lalezari, S; Schoemaker, R C; De Meyer, P H; Meinders, A E; Cohen, A F; Pijl, H
2001-07-01
We assessed sympathovagal balance in thyrotoxicosis. Fourteen patients with Graves' hyperthyroidism were studied before and after 7 days of treatment with propranolol (40 mg 3 times a day) and in the euthyroid state. Data were compared with those obtained in a group of age-, sex-, and weight-matched controls. Autonomic inputs to the heart were assessed by power spectral analysis of heart rate variability. Systemic exposure to sympathetic neurohormones was estimated on the basis of 24-h urinary catecholamine excretion. The spectral power in the high-frequency domain was considerably reduced in hyperthyroid patients, indicating diminished vagal inputs to the heart. Increased heart rate and mid-frequency/high-frequency power ratio in the presence of reduced total spectral power and increased urinary catecholamine excretion strongly suggest enhanced sympathetic inputs in thyrotoxicosis. All abnormal features of autonomic balance were completely restored to normal in the euthyroid state. beta-Adrenoceptor antagonism reduced heart rate in hyperthyroid patients but did not significantly affect heart rate variability or catecholamine excretion. This is in keeping with the concept of a joint disruption of sympathetic and vagal inputs to the heart underlying changes in heart rate variability. Thus thyrotoxicosis is characterized by profound sympathovagal imbalance, brought about by increased sympathetic activity in the presence of diminished vagal tone.
Attention-driven auditory cortex short-term plasticity helps segregate relevant sounds from noise.
Ahveninen, Jyrki; Hämäläinen, Matti; Jääskeläinen, Iiro P; Ahlfors, Seppo P; Huang, Samantha; Lin, Fa-Hsuan; Raij, Tommi; Sams, Mikko; Vasios, Christos E; Belliveau, John W
2011-03-08
How can we concentrate on relevant sounds in noisy environments? A "gain model" suggests that auditory attention simply amplifies relevant and suppresses irrelevant afferent inputs. However, it is unclear whether this suffices when attended and ignored features overlap to stimulate the same neuronal receptive fields. A "tuning model" suggests that, in addition to gain, attention modulates feature selectivity of auditory neurons. We recorded magnetoencephalography, EEG, and functional MRI (fMRI) while subjects attended to tones delivered to one ear and ignored opposite-ear inputs. The attended ear was switched every 30 s to quantify how quickly the effects evolve. To produce overlapping inputs, the tones were presented alone vs. during white-noise masking notch-filtered ±1/6 octaves around the tone center frequencies. Amplitude modulation (39 vs. 41 Hz in opposite ears) was applied for "frequency tagging" of attention effects on maskers. Noise masking reduced early (50-150 ms; N1) auditory responses to unattended tones. In support of the tuning model, selective attention canceled out this attenuating effect but did not modulate the gain of 50-150 ms activity to nonmasked tones or steady-state responses to the maskers themselves. These tuning effects originated at nonprimary auditory cortices, purportedly occupied by neurons that, without attention, have wider frequency tuning than ±1/6 octaves. The attentional tuning evolved rapidly, during the first few seconds after attention switching, and correlated with behavioral discrimination performance. In conclusion, a simple gain model alone cannot explain auditory selective attention. In nonprimary auditory cortices, attention-driven short-term plasticity retunes neurons to segregate relevant sounds from noise.
Perceptual Plasticity for Auditory Object Recognition
Heald, Shannon L. M.; Van Hedger, Stephen C.; Nusbaum, Howard C.
2017-01-01
In our auditory environment, we rarely experience the exact acoustic waveform twice. This is especially true for communicative signals that have meaning for listeners. In speech and music, the acoustic signal changes as a function of the talker (or instrument), speaking (or playing) rate, and room acoustics, to name a few factors. Yet, despite this acoustic variability, we are able to recognize a sentence or melody as the same across various kinds of acoustic inputs and determine meaning based on listening goals, expectations, context, and experience. The recognition process relates acoustic signals to prior experience despite variability in signal-relevant and signal-irrelevant acoustic properties, some of which could be considered as “noise” in service of a recognition goal. However, some acoustic variability, if systematic, is lawful and can be exploited by listeners to aid in recognition. Perceivable changes in systematic variability can herald a need for listeners to reorganize perception and reorient their attention to more immediately signal-relevant cues. This view is not incorporated currently in many extant theories of auditory perception, which traditionally reduce psychological or neural representations of perceptual objects and the processes that act on them to static entities. While this reduction is likely done for the sake of empirical tractability, such a reduction may seriously distort the perceptual process to be modeled. We argue that perceptual representations, as well as the processes underlying perception, are dynamically determined by an interaction between the uncertainty of the auditory signal and constraints of context. This suggests that the process of auditory recognition is highly context-dependent in that the identity of a given auditory object may be intrinsically tied to its preceding context. To argue for the flexible neural and psychological updating of sound-to-meaning mappings across speech and music, we draw upon examples of perceptual categories that are thought to be highly stable. This framework suggests that the process of auditory recognition cannot be divorced from the short-term context in which an auditory object is presented. Implications for auditory category acquisition and extant models of auditory perception, both cognitive and neural, are discussed. PMID:28588524
Assessment of input uncertainty by seasonally categorized latent variables using SWAT
USDA-ARS?s Scientific Manuscript database
Watershed processes have been explored with sophisticated simulation models for the past few decades. It has been stated that uncertainty attributed to alternative sources such as model parameters, forcing inputs, and measured data should be incorporated during the simulation process. Among varyin...
Speaker Invariance for Phonetic Information: an fMRI Investigation
Salvata, Caden; Blumstein, Sheila E.; Myers, Emily B.
2012-01-01
The current study explored how listeners map the variable acoustic input onto a common sound structure representation while being able to retain phonetic detail to distinguish among the identity of talkers. An adaptation paradigm was utilized to examine areas which showed an equal neural response (equal release from adaptation) to phonetic change when spoken by the same speaker and when spoken by two different speakers, and insensitivity (failure to show release from adaptation) when the same phonetic input was spoken by a different speaker. Neural areas which showed speaker invariance were located in the anterior portion of the middle superior temporal gyrus bilaterally. These findings provide support for the view that speaker normalization processes allow for the translation of a variable speech input to a common abstract sound structure. That this process appears to occur early in the processing stream, recruiting temporal structures, suggests that this mapping takes place prelexically, before sound structure input is mapped on to lexical representations. PMID:23264714
The input and output management of solid waste using DEA models: A case study at Jengka, Pahang
NASA Astrophysics Data System (ADS)
Mohamed, Siti Rosiah; Ghazali, Nur Fadzrina Mohd; Mohd, Ainun Hafizah
2017-08-01
Data Envelopment Analysis (DEA), as a tool for obtaining performance indices, has been used extensively across many organizational sectors. Improving the efficiency of Decision Making Units (DMUs) can be impractical because some inputs and outputs are uncontrollable, and in certain situations they produce weak efficiency scores that often reflect the impact of the operating environment. Based on data from Alam Flora Sdn. Bhd. Jengka, this study determines the efficiency of solid waste management (SWM) in the town of Jengka, Pahang, using the input-oriented (CCR-I) and output-oriented (CCR-O) CCR models of DEA and the duality formulation with average input and output vectors. Three input variables (length of collection in meters, frequency time per week in hours, and number of garbage trucks) and two output variables (frequency of collection and total solid waste collected in kilograms) are analyzed. In conclusion, only 3 of the 23 roads are efficient, achieving an efficiency score of 1, while the management of the other 20 roads is inefficient.
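The input-oriented CCR score used in such studies solves one small linear program per DMU: minimize the radial input contraction θ subject to a composite peer producing at least the DMU's outputs. A sketch with SciPy's linprog and invented road data follows.

```python
import numpy as np
from scipy.optimize import linprog

def ccr_input_efficiency(X, Y, k):
    """Input-oriented CCR efficiency of DMU k via the envelopment LP.
    X: (n_dmus, n_inputs), Y: (n_dmus, n_outputs)."""
    n, m = X.shape
    s = Y.shape[1]
    # Decision vector: [theta, lambda_1..lambda_n]; minimize theta.
    c = np.r_[1.0, np.zeros(n)]
    # Inputs: sum_j lambda_j * x_ij - theta * x_ik <= 0
    A_in = np.c_[-X[k].reshape(m, 1), X.T]
    # Outputs: -sum_j lambda_j * y_rj <= -y_rk (outputs at least those of k)
    A_out = np.c_[np.zeros((s, 1)), -Y.T]
    res = linprog(c, A_ub=np.r_[A_in, A_out],
                  b_ub=np.r_[np.zeros(m), -Y[k]],
                  bounds=[(0, None)] * (n + 1), method="highs")
    return res.fun   # efficiency score; 1.0 means efficient

# Toy data: 5 roads, 3 inputs (length, hours/week, trucks), 2 outputs.
X = np.array([[1200, 6, 2], [800, 4, 1], [1500, 8, 3],
              [900, 6, 2], [700, 3, 1]], float)
Y = np.array([[3, 5400], [2, 4100], [3, 6000], [2, 3900], [2, 4000]], float)
print([round(ccr_input_efficiency(X, Y, k), 3) for k in range(len(X))])
```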
Data-driven process decomposition and robust online distributed modelling for large-scale processes
NASA Astrophysics Data System (ADS)
Shu, Zhang; Lijuan, Li; Lijuan, Yao; Shipin, Yang; Tao, Zou
2018-02-01
With increasing attention to networked control, system decomposition and distributed models are of significant importance in the implementation of model-based control strategies. In this paper, a data-driven system decomposition and online distributed subsystem modelling algorithm is proposed for large-scale chemical processes. The key controlled variables are first partitioned into several clusters by the affinity propagation clustering algorithm; each cluster can be regarded as a subsystem. The inputs of each subsystem are then selected by offline canonical correlation analysis between all process variables and the subsystem's controlled variables. Process decomposition is thus realised after the screening of input and output variables. Once the system decomposition is finished, online subsystem modelling can be carried out by recursively renewing the samples block-wise. The proposed algorithm was applied to the Tennessee Eastman process and its validity was verified.
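Both building blocks of the decomposition, affinity propagation clustering and canonical correlation analysis, are available in scikit-learn. The sketch below clusters controlled variables by their correlation similarity and then ranks candidate inputs per cluster by CCA weights; the data are synthetic and the scoring rule is a simplification of the paper's offline selection step.

```python
import numpy as np
from sklearn.cluster import AffinityPropagation
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(4)
T = 500
U = rng.normal(size=(T, 6))                     # candidate manipulated inputs
# Two groups of controlled variables, each driven by different inputs.
Yv = np.column_stack([U[:, 0] + 0.5 * U[:, 1] + rng.normal(0, .1, T),
                      U[:, 0] - U[:, 1] + rng.normal(0, .1, T),
                      2 * U[:, 4] + rng.normal(0, .1, T),
                      U[:, 4] + U[:, 5] + rng.normal(0, .1, T)])

# 1) Partition controlled variables by affinity propagation on similarity.
S = np.corrcoef(Yv.T)                           # variable-by-variable affinity
labels = AffinityPropagation(affinity="precomputed", random_state=0).fit(S).labels_

# 2) For each cluster (subsystem), rank inputs by canonical weights.
for c in np.unique(labels):
    Yc = Yv[:, labels == c]
    cca = CCA(n_components=1).fit(U, Yc)
    loading = np.abs(cca.x_weights_[:, 0])      # input relevance to subsystem
    print("subsystem", c, "-> top inputs:", np.argsort(loading)[::-1][:2])
```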
NASA Technical Reports Server (NTRS)
Fortenbaugh, R. L.
1980-01-01
Equations incorporated in a VATOL six degree of freedom off-line digital simulation program and data for the Vought SF-121 VATOL aircraft concept which served as the baseline for the development of this program are presented. The equations and data are intended to facilitate the development of a piloted VATOL simulation. The equation presentation format is to state the equations which define a particular model segment. Listings of constants required to quantify the model segment, input variables required to exercise the model segment, and output variables required by other model segments are included. In several instances a series of input or output variables are followed by a section number in parentheses which identifies the model segment of origination or termination of those variables.
Mathematical models of the simplest fuzzy PI/PD controllers with skewed input and output fuzzy sets.
Mohan, B M; Sinha, Arpita
2008-07-01
This paper unveils mathematical models for fuzzy PI/PD controllers which employ two skewed fuzzy sets for each of the two input variables and three skewed fuzzy sets for the output variable. The basic constituents of these models are Gamma-type and L-type membership functions for each input, trapezoidal/triangular membership functions for the output, intersection/algebraic product triangular norm, maximum/drastic sum triangular conorm, Mamdani minimum/Larsen product/drastic product inference method, and the center-of-sums defuzzification method. The existing simplest fuzzy PI/PD controller structures derived via symmetrical fuzzy sets become special cases of the mathematical models revealed in this paper. Finally, a numerical example along with its simulation results is included to demonstrate the effectiveness of the simplest fuzzy PI controllers.
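A minimal numerical illustration of such a controller is given below: two skewed (Gamma-type and L-type) sets per input, three output sets, Mamdani minimum inference, and a center-of-sums style defuzzification. The set parameters are invented; the paper's contribution is the closed-form models of such controllers, which this sketch does not derive.

```python
import numpy as np

def gamma_mf(x, a, b):
    """Gamma-type (right-shoulder) membership: 0 below a, ramps to 1 at b."""
    return np.clip((x - a) / (b - a), 0.0, 1.0)

def l_mf(x, a, b):
    """L-type (left-shoulder) membership: 1 below a, ramps to 0 at b."""
    return np.clip((b - x) / (b - a), 0.0, 1.0)

def fuzzy_pi_increment(e, de, gain=1.0):
    """One Mamdani-min inference step with center-of-sums defuzzification.
    Skew comes from the asymmetric breakpoints and output centers below."""
    e_neg, e_pos = l_mf(e, -0.2, 0.6), gamma_mf(e, -0.6, 0.2)
    de_neg, de_pos = l_mf(de, -0.2, 0.6), gamma_mf(de, -0.6, 0.2)
    out_centers = {"N": -1.0, "Z": 0.1, "P": 1.0}   # skewed output sets
    out_areas = {"N": 1.0, "Z": 0.8, "P": 1.0}      # relative set areas
    # Rule base: (e, de) -> output set, fired with the min t-norm.
    fire = {"N": min(e_neg, de_neg),
            "Z": max(min(e_neg, de_pos), min(e_pos, de_neg)),
            "P": min(e_pos, de_pos)}
    num = sum(fire[s] * out_areas[s] * out_centers[s] for s in fire)
    den = sum(fire[s] * out_areas[s] for s in fire) or 1e-9
    return gain * num / den                         # incremental control action

print(fuzzy_pi_increment(0.4, -0.1))
```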
DOE Office of Scientific and Technical Information (OSTI.GOV)
Behrang, M.A.; Assareh, E.; Ghanbarzadeh, A.
2010-08-15
The main objective of the present study is to predict daily global solar radiation (GSR) on a horizontal surface, based on meteorological variables, using different artificial neural network (ANN) techniques. Daily mean air temperature, relative humidity, sunshine hours, evaporation, and wind speed values between 2002 and 2006 for Dezful city in Iran (32°16'N, 48°25'E) are used in this study. In order to consider the effect of each meteorological variable on daily GSR prediction, the six following combinations of input variables are considered: (I) day of the year, daily mean air temperature and relative humidity as inputs and daily GSR as output; (II) day of the year, daily mean air temperature and sunshine hours as inputs and daily GSR as output; (III) day of the year, daily mean air temperature, relative humidity and sunshine hours as inputs and daily GSR as output; (IV) day of the year, daily mean air temperature, relative humidity, sunshine hours and evaporation as inputs and daily GSR as output; (V) day of the year, daily mean air temperature, relative humidity, sunshine hours and wind speed as inputs and daily GSR as output; (VI) day of the year, daily mean air temperature, relative humidity, sunshine hours, evaporation and wind speed as inputs and daily GSR as output. Multi-layer perceptron (MLP) and radial basis function (RBF) neural networks are applied for daily GSR modeling based on the six proposed combinations. The measured data between 2002 and 2005 are used to train the neural networks, while the data for 214 days from 2006 are used as testing data. The comparison of the results obtained from the ANNs and different conventional GSR prediction (CGSRP) models shows very good improvements (i.e., the predicted values of the best ANN model (MLP-V) have a mean absolute percentage error (MAPE) of about 5.21%, versus 10.02% for the best CGSRP model (CGSRP 5)).
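Combination (V) is easy to reproduce in outline with scikit-learn's MLP regressor. The sketch below trains on synthetic weather-like data (the Dezful measurements are not reproduced here), so the printed MAPE is illustrative only.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(5)
n = 1500
day = rng.integers(1, 366, n)
t_mean = 25 + 10 * np.sin(2 * np.pi * day / 365) + rng.normal(0, 2, n)
rel_hum = np.clip(rng.normal(40, 15, n), 5, 100)
sunshine = np.clip(rng.normal(9, 2, n), 0, 14)
wind = np.abs(rng.normal(3, 1.5, n))
# Synthetic GSR target (MJ/m^2/day); a physics-free stand-in kept positive.
gsr = 5.0 + 0.9 * sunshine + 0.1 * t_mean - 0.03 * rel_hum + rng.normal(0, 1, n)
gsr = np.maximum(gsr, 1.0)

X = np.column_stack([day, t_mean, rel_hum, sunshine, wind])   # combination (V)
model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(20,), max_iter=2000,
                                   random_state=0))
model.fit(X[:1000], gsr[:1000])
pred = model.predict(X[1000:])
mape = 100 * np.mean(np.abs((gsr[1000:] - pred) / gsr[1000:]))
print(f"MAPE: {mape:.1f}%")
```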
Fuzzy Neuron: Method and Hardware Realization
NASA Technical Reports Server (NTRS)
Krasowski, Michael J.; Prokop, Norman F.
2014-01-01
This innovation represents a method by which single-to-multi-input, single-to-many-output system transfer functions can be estimated from input/output data sets. This innovation can be run in the background while a system is operating under other means (e.g., through human operator effort), or may be utilized offline using data sets created from observations of the estimated system. It utilizes a set of fuzzy membership functions spanning the input space for each input variable. Linear combiners associated with combinations of input membership functions are used to create the output(s) of the estimator. Coefficients are adjusted online through the use of learning algorithms.
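A compact sketch of this idea: normalized triangular memberships span the input range, and a least-mean-squares (LMS) rule adapts the linear-combiner weights online. This illustrates the fuzzy-basis/LMS concept in one dimension, not NASA's implementation; centers, learning rate, and the target function are invented.

```python
import numpy as np

class FuzzyNeuron:
    """Fuzzy-basis estimator: triangular memberships span the input range;
    a linear combiner over firing strengths forms the output, trained online."""

    def __init__(self, centers, lr=0.2):
        self.centers = centers                  # uniformly spaced 1-D centers
        self.w = np.zeros(len(centers))
        self.lr = lr

    def phi(self, x):
        width = self.centers[1] - self.centers[0]
        m = np.clip(1.0 - np.abs(x - self.centers) / width, 0.0, None)
        return m / (m.sum() or 1.0)             # normalized firing strengths

    def predict(self, x):
        return self.w @ self.phi(x)

    def update(self, x, y):
        p = self.phi(x)
        self.w += self.lr * (y - self.w @ p) * p   # LMS on combiner weights

est = FuzzyNeuron(np.linspace(-1, 1, 9))
rng = np.random.default_rng(6)
for _ in range(5000):                            # learn y = sin(pi*x) online
    x = rng.uniform(-1, 1)
    est.update(x, np.sin(np.pi * x))
print(round(float(est.predict(0.5)), 3))         # approx sin(pi/2) = 1
```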
Group interaction and flight crew performance
NASA Technical Reports Server (NTRS)
Foushee, H. Clayton; Helmreich, Robert L.
1988-01-01
The application of human-factors analysis to the performance of aircraft-operation tasks by the crew as a group is discussed in an introductory review and illustrated with anecdotal material. Topics addressed include the function of a group in the operational environment, the classification of group performance factors (input, process, and output parameters), input variables and the flight crew process, and the effect of process variables on performance. Consideration is given to aviation safety issues, techniques for altering group norms, ways of increasing crew effort and coordination, and the optimization of group composition.
Comparison and Enumeration of Chemical Graphs
Akutsu, Tatsuya; Nagamochi, Hiroshi
2013-01-01
Chemical compounds are usually represented as graph structured data in computers. In this review article, we overview several graph classes relevant to chemical compounds and the computational complexities of several fundamental problems for these graph classes. In particular, we consider the following problems: determining whether two chemical graphs are identical, determining whether one input chemical graph is a part of the other input chemical graph, finding a maximum common part of two input graphs, finding a reaction atom mapping, enumerating possible chemical graphs, and enumerating stereoisomers. We also discuss the relationship between the fifth problem and kernel functions for chemical compounds. PMID:24688697
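The first two problems listed (graph identity and subgraph containment) can be tried directly with networkx on element-labeled graphs. The toy below compares heavy-atom skeletons of ethanol and dimethyl ether (hydrogens omitted); it is an illustration of the problem statements, not of the specialized algorithms the review surveys.

```python
import networkx as nx
from networkx.algorithms import isomorphism

def mol_graph(bonds, elements):
    """Build a labeled graph: nodes carry element symbols, edges are bonds."""
    g = nx.Graph()
    for i, el in enumerate(elements):
        g.add_node(i, element=el)
    g.add_edges_from(bonds)
    return g

# Ethanol (C-C-O) vs. dimethyl ether (C-O-C): same atoms, different structure.
ethanol = mol_graph([(0, 1), (1, 2)], ["C", "C", "O"])
dme = mol_graph([(0, 1), (1, 2)], ["C", "O", "C"])

nm = isomorphism.categorical_node_match("element", None)
print(nx.is_isomorphic(ethanol, dme, node_match=nm))   # False: not identical
# Subgraph containment (is one graph a part of the other) via GraphMatcher:
part = mol_graph([(0, 1)], ["C", "O"])
gm = isomorphism.GraphMatcher(ethanol, part, node_match=nm)
print(gm.subgraph_is_isomorphic())                     # True: C-O is a part
```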
Stochastic analysis of multiphase flow in porous media: II. Numerical simulations
NASA Astrophysics Data System (ADS)
Abin, A.; Kalurachchi, J. J.; Kemblowski, M. W.; Chang, C.-M.
1996-08-01
The first paper (Chang et al., 1995b) of this two-part series described the stochastic analysis using the spectral/perturbation approach to analyze steady-state two-phase (water and oil) flow in a liquid-unsaturated, three-fluid-phase porous medium. In this paper, the results of numerical simulations are compared with the closed-form expressions obtained using the perturbation approach. We present the solution to the one-dimensional, steady-state oil and water flow equations. The stochastic input processes are the spatially correlated log k, where k is the intrinsic permeability, and the soil retention parameter, α. These solutions are subsequently used in the numerical simulations to estimate the statistical properties of the key output processes. The comparison between the results of the perturbation analysis and the numerical simulations showed a good agreement between the two methods over a wide range of log k variability with three different combinations of input stochastic processes of log k and the soil parameter α. The results clearly demonstrated the importance of considering the spatial variability of key subsurface properties under a variety of physical scenarios. The variability of both capillary pressure and saturation is affected by the type of input stochastic process used to represent the spatial variability. The results also demonstrated the applicability of perturbation theory in predicting the system variability and defining effective fluid properties through the ergodic assumption.
Simulating maize yield and biomass with spatial variability of soil field capacity
USDA-ARS?s Scientific Manuscript database
Spatial variability in field soil water and other properties is a challenge for system modelers who use only representative values for model inputs, rather than their distributions. In this study, we compared simulation results from a calibrated model with spatial variability of soil field capacity ...
Overview and Results of ISS Space Medicine Operations Team (SMOT) Activities
NASA Technical Reports Server (NTRS)
Johnson, H. Magee; Sargsyan, Ashot E.; Armstrong, Cheryl; McDonald, P. Vernon; Duncan, James M.; Bogomolov, V. V.
2007-01-01
The Space Medicine Operations Team (SMOT) was created to integrate International Space Station (ISS) Medical Operations, promote awareness of all Partners, provide emergency response capability and management, provide operational input from all Partners for medically relevant concerns, and provide a source of medical input to ISS Mission Management. The viewgraph presentation provides an overview of educational objectives, purpose, operations, products, statistics, and its use in off-nominal situations.
Simulated lumped-parameter system reduced-order adaptive control studies
NASA Technical Reports Server (NTRS)
Johnson, C. R., Jr.; Lawrence, D. A.; Taylor, T.; Malakooti, M. V.
1981-01-01
Two methods of interpreting the misbehavior of reduced order adaptive controllers are discussed. The first method is based on system input-output description and the second is based on state variable description. The implementation of the single input, single output, autoregressive, moving average system is considered.
Sanders, Michael J.; Markstrom, Steven L.; Regan, R. Steven; Atkinson, R. Dwight
2017-09-15
A module for simulation of daily mean water temperature in a network of stream segments has been developed as an enhancement to the U.S. Geological Survey Precipitation Runoff Modeling System (PRMS). This new module is based on the U.S. Fish and Wildlife Service Stream Network Temperature model, a mechanistic, one-dimensional heat transport model. The new module is integrated in PRMS. Stream-water temperature simulation is activated by selection of the appropriate input flags in the PRMS Control File and by providing the necessary additional inputs in standard PRMS input files. This report includes a comprehensive discussion of the methods relevant to the stream temperature calculations and detailed instructions for model input preparation.
A new polytopic approach for the unknown input functional observer design
NASA Astrophysics Data System (ADS)
Bezzaoucha, Souad; Voos, Holger; Darouach, Mohamed
2018-03-01
In this paper, a constructive procedure to design Functional Unknown Input Observers for nonlinear continuous-time systems is proposed under the polytopic Takagi-Sugeno framework. An equivalent representation for the nonlinear model is achieved using the sector nonlinearity transformation. Applying Lyapunov theory and L2 attenuation, linear matrix inequality conditions are deduced and solved for feasibility to obtain the observer design matrices. To cope with the effect of unknown inputs, the classical approach of decoupling the unknown input for the linear case is used. Both algebraic and solver-based solutions are proposed (relaxed conditions). Necessary and sufficient conditions for the existence of the functional polytopic observer are given. For both approaches, the general and particular cases (measurable premise variables, full state estimation with full- and reduced-order cases) are considered, and it is shown that the proposed conditions correspond to those presented for the standard linear case. To illustrate the proposed theoretical results, detailed numerical simulations are presented for a quadrotor aerial robot landing and a wastewater treatment plant. Both systems are highly nonlinear and represented in a T-S polytopic form with unmeasurable premise variables and unknown inputs.
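As a simplified linear analogue of the LMI step, the classical observer-gain feasibility problem can be posed in cvxpy: find P positive definite and Y such that A'P + PA - C'Y' - YC is negative definite, then recover L = inv(P) Y so that e' = (A - LC)e is stable. The polytopic T-S case in the paper adds one such constraint per vertex plus unknown-input decoupling; the system matrices below are invented for illustration.

```python
import numpy as np
import cvxpy as cp

A = np.array([[0.0, 1.0], [-2.0, -0.5]])   # toy system matrix
C = np.array([[1.0, 0.0]])                 # toy output matrix
n = A.shape[0]

P = cp.Variable((n, n), symmetric=True)
Y = cp.Variable((n, C.shape[0]))
eps = 1e-3
# Lyapunov LMI for the observer error dynamics, with Y = P @ L.
lmi = A.T @ P + P @ A - C.T @ Y.T - Y @ C
prob = cp.Problem(cp.Minimize(0),
                  [P >> eps * np.eye(n), lmi << -eps * np.eye(n)])
prob.solve()

L = np.linalg.inv(P.value) @ Y.value
print("observer gain L:", L.ravel())
print("error-dynamics eigenvalues:", np.linalg.eigvals(A - L @ C))
```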
NASA Astrophysics Data System (ADS)
Romanelli, Fabio; Vaccari, Franco; Altin, Giorgio; Panza, Giuliano
2016-04-01
The procedure we developed, and applied to a few relevant cases, leads to the seismic verification of a building by: a) use of a scenario-based neodeterministic approach (NDSHA) for the calculation of the seismic input, and b) control of the numerical modeling of an existing building, using free-vibration measurements of the real structure. The key point of this approach is the strict collaboration of the seismologist and the civil engineer, from the definition of the seismic input to the monitoring of the response of the building in the calculation phase. The vibrometry study allows the engineer to adjust the computational model in the direction suggested by the experimental result of a physical measurement. Once the model has been calibrated by vibrometric analysis, one can select in the design spectrum the proper range of periods of interest for the structure. Then, the realistic values of spectral acceleration, which include the appropriate amplification obtained through the modeling of a "scenario" input applied to the final model, can be selected. Generally, but not necessarily, the "scenario" spectra lead to higher accelerations than those deduced from the spectra of the national codes (i.e., NTC 2008 for Italy). The task of the verifying engineer is to ensure that the outcome of the verification is conservative and realistic. We show examples of the application of the procedure to relevant buildings (e.g., schools) in the Province of Trieste. The adoption of the scenario input has, in most cases, increased the number of critical elements that must be taken into account in the design of reinforcements. However, the higher cost associated with the increase in elements to reinforce is reasonable, especially considering the important reduction in the risk level.
NASA Astrophysics Data System (ADS)
Zhuo, L.; Mekonnen, M. M.; Hoekstra, A. Y.
2014-06-01
Water Footprint Assessment is a fast-growing field of research, but as yet little attention has been paid to the uncertainties involved. This study investigates the sensitivity of and uncertainty in crop water footprint (in m3 t-1) estimates related to uncertainties in important input variables. The study focuses on the green (from rainfall) and blue (from irrigation) water footprint of producing maize, soybean, rice, and wheat at the scale of the Yellow River basin in the period 1996-2005. A grid-based daily water balance model at a 5 by 5 arcmin resolution was applied to compute green and blue water footprints of the four crops in the Yellow River basin in the period considered. The one-at-a-time method was carried out to analyse the sensitivity of the crop water footprint to fractional changes of seven individual input variables and parameters: precipitation (PR), reference evapotranspiration (ET0), crop coefficient (Kc), crop calendar (planting date with constant growing degree days), soil water content at field capacity (Smax), yield response factor (Ky) and maximum yield (Ym). Uncertainties in crop water footprint estimates related to uncertainties in four key input variables: PR, ET0, Kc, and crop calendar were quantified through Monte Carlo simulations. The results show that the sensitivities and uncertainties differ across crop types. In general, the water footprint of crops is most sensitive to ET0 and Kc, followed by the crop calendar. Blue water footprints were more sensitive to input variability than green water footprints. The smaller the annual blue water footprint is, the higher its sensitivity to changes in PR, ET0, and Kc. The uncertainties in the total water footprint of a crop due to combined uncertainties in climatic inputs (PR and ET0) were about ±20% (at 95% confidence interval). The effect of uncertainties in ET0 was dominant compared to that of PR. The uncertainties in the total water footprint of a crop as a result of combined key input uncertainties were on average ±30% (at 95% confidence level).
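The one-at-a-time (OAT) procedure is simple to express in code: perturb each input by a fixed fraction while holding the others at baseline, and record the relative change in the water footprint. The sketch below uses a toy water-balance stand-in, not the grid-based model of the study; all names and numbers are invented.

```python
import numpy as np

def crop_water_footprint(pr, et0, kc, y_max):
    """Toy stand-in for the daily water balance: water footprint in m3/t.
    Green ET is capped by rainfall; the blue share is irrigated on top."""
    et_crop = kc * et0                      # crop evapotranspiration (mm)
    green = min(pr, et_crop)
    blue = et_crop - green
    yield_t = y_max * min(1.0, 0.5 + 0.5 * green / et_crop)
    return 10.0 * (green + blue) / yield_t  # 10 m3/ha per mm, per tonne

base = dict(pr=450.0, et0=700.0, kc=1.05, y_max=6.0)
wf0 = crop_water_footprint(**base)

# One-at-a-time sensitivity: perturb each input by +/-10%, others fixed.
for name in base:
    for frac in (-0.1, +0.1):
        p = dict(base)
        p[name] = base[name] * (1 + frac)
        change = 100 * (crop_water_footprint(**p) - wf0) / wf0
        print(f"{name} {frac:+.0%}: WF changes {change:+.1f}%")
```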
NASA Astrophysics Data System (ADS)
Paiewonsky, Pablo; Elison Timm, Oliver
2018-03-01
In this paper, we present a simple dynamic global vegetation model whose primary intended use is auxiliary to the land-atmosphere coupling scheme of a climate model, particularly one of intermediate complexity. The model simulates and provides important ecological-only variables but also some hydrological and surface energy variables that are typically either simulated by land surface schemes or else used as boundary data input for these schemes. The model formulations and their derivations are presented here, in detail. The model includes some realistic and useful features for its level of complexity, including a photosynthetic dependency on light, full coupling of photosynthesis and transpiration through an interactive canopy resistance, and a soil organic carbon dependence for bare-soil albedo. We evaluate the model's performance by running it as part of a simple land surface scheme that is driven by reanalysis data. The evaluation against observational data includes net primary productivity, leaf area index, surface albedo, and diagnosed variables relevant for the closure of the hydrological cycle. In this setup, we find that the model gives an adequate to good simulation of basic large-scale ecological and hydrological variables. Of the variables analyzed in this paper, gross primary productivity is particularly well simulated. The results also reveal the current limitations of the model. The most significant deficiency is the excessive simulation of evapotranspiration in mid- to high northern latitudes during their winter to spring transition. The model has a relative advantage in situations that require some combination of computational efficiency, model transparency and tractability, and the simulation of the large-scale vegetation and land surface characteristics under non-present-day conditions.
Gravity dependence of the effect of optokinetic stimulation on the subjective visual vertical.
Ward, Bryan K; Bockisch, Christopher J; Caramia, Nicoletta; Bertolini, Giovanni; Tarnutzer, Alexander Andrea
2017-05-01
Accurate and precise estimates of direction of gravity are essential for spatial orientation. According to Bayesian theory, multisensory vestibular, visual, and proprioceptive input is centrally integrated in a weighted fashion based on the reliability of the component sensory signals. For otolithic input, a decreasing signal-to-noise ratio was demonstrated with increasing roll angle. We hypothesized that the weights of vestibular (otolithic) and extravestibular (visual/proprioceptive) sensors are roll-angle dependent and predicted an increased weight of extravestibular cues with increasing roll angle, potentially following the Bayesian hypothesis. To probe this concept, the subjective visual vertical (SVV) was assessed in different roll positions (≤ ± 120°, steps = 30°, n = 10) with/without presenting an optokinetic stimulus (velocity = ± 60°/s). The optokinetic stimulus biased the SVV toward the direction of stimulus rotation for roll angles ≥ ± 30° (P < 0.005). Offsets grew from 3.9 ± 1.8° (upright) to 22.1 ± 11.8° (±120° roll tilt, P < 0.001). Trial-to-trial variability increased with roll angle, demonstrating a nonsignificant increase when providing optokinetic stimulation. Variability and optokinetic bias were correlated (R2 = 0.71, slope = 0.71, 95% confidence interval = 0.57-0.86). An optimal-observer model combining an optokinetic bias with vestibular input reproduced measured errors closely. These findings support the hypothesis of a weighted multisensory integration when estimating direction of gravity with optokinetic stimulation. Visual input was weighted more when vestibular input became less reliable, i.e., at larger roll-tilt angles. However, according to Bayesian theory, the variability of combined cues is always lower than the variability of each source cue. If the observed increase in variability, although nonsignificant, is true, either it must depend on an additional source of variability, added after SVV computation, or it would conflict with the Bayesian hypothesis. NEW & NOTEWORTHY Applying a rotating optokinetic stimulus while recording the subjective visual vertical in different whole body roll angles, we noted the optokinetic-induced bias to correlate with the roll angle. These findings allow the hypothesis that the established optimal weighting of single-sensory cues depending on their reliability to estimate direction of gravity could be extended to a bias caused by visual self-motion stimuli. Copyright © 2017 the American Physiological Society.
Stochastic empirical loading and dilution model (SELDM) version 1.0.0
Granato, Gregory E.
2013-01-01
The Stochastic Empirical Loading and Dilution Model (SELDM) is designed to transform complex scientific data into meaningful information about the risk of adverse effects of runoff on receiving waters, the potential need for mitigation measures, and the potential effectiveness of such management measures for reducing these risks. The U.S. Geological Survey developed SELDM in cooperation with the Federal Highway Administration to help develop planning-level estimates of event mean concentrations, flows, and loads in stormwater from a site of interest and from an upstream basin. Planning-level estimates are defined as the results of analyses used to evaluate alternative management measures; planning-level estimates are recognized to include substantial uncertainties (commonly orders of magnitude). SELDM uses information about a highway site, the associated receiving-water basin, precipitation events, stormflow, water quality, and the performance of mitigation measures to produce a stochastic population of runoff-quality variables. SELDM provides input statistics for precipitation, prestorm flow, runoff coefficients, and concentrations of selected water-quality constituents from National datasets. Input statistics may be selected on the basis of the latitude, longitude, and physical characteristics of the site of interest and the upstream basin. The user also may derive and input statistics for each variable that are specific to a given site of interest or a given area. SELDM is a stochastic model because it uses Monte Carlo methods to produce the random combinations of input variable values needed to generate the stochastic population of values for each component variable. SELDM calculates the dilution of runoff in the receiving waters and the resulting downstream event mean concentrations and annual average lake concentrations. Results are ranked, and plotting positions are calculated, to indicate the level of risk of adverse effects caused by runoff concentrations, flows, and loads on receiving waters by storm and by year. Unlike deterministic hydrologic models, SELDM is not calibrated by changing values of input variables to match a historical record of values. Instead, input values for SELDM are based on site characteristics and representative statistics for each hydrologic variable. Thus, SELDM is an empirical model based on data and statistics rather than theoretical physiochemical equations. SELDM is a lumped parameter model because the highway site, the upstream basin, and the lake basin each are represented as a single homogeneous unit. Each of these source areas is represented by average basin properties, and results from SELDM are calculated as point estimates for the site of interest. Use of the lumped parameter approach facilitates rapid specification of model parameters to develop planning-level estimates with available data. The approach allows for parsimony in the required inputs to and outputs from the model and flexibility in the use of the model. For example, SELDM can be used to model runoff from various land covers or land uses by using the highway-site definition as long as representative water quality and impervious-fraction data are available.
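The stochastic mass-balance dilution at the heart of this kind of model can be sketched in a few lines: draw event flows and event mean concentrations (EMCs) for the highway site and the upstream basin, mix them, and read risk off the resulting population. The lognormal parameters and criterion below are invented, not SELDM's national statistics.

```python
import numpy as np

rng = np.random.default_rng(7)
N = 100_000                                   # storm events (Monte Carlo draws)

# Lognormal stand-ins for event volumes (m3) and concentrations (mg/L).
q_hwy = rng.lognormal(mean=4.0, sigma=0.8, size=N)     # highway runoff volume
q_up = rng.lognormal(mean=6.5, sigma=0.7, size=N)      # upstream flow volume
c_hwy = rng.lognormal(mean=1.2, sigma=0.9, size=N)     # runoff EMC
c_up = rng.lognormal(mean=-0.7, sigma=0.5, size=N)     # upstream EMC

# Mass-balance dilution: downstream event mean concentration.
c_down = (c_hwy * q_hwy + c_up * q_up) / (q_hwy + q_up)

threshold = 2.0                               # hypothetical quality criterion
exceed = np.mean(c_down > threshold)
print(f"risk of exceeding {threshold} mg/L in any storm: {100*exceed:.2f}%")
print("90th-percentile downstream EMC:", round(np.percentile(c_down, 90), 2))
```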
Sleep and memory in healthy children and adolescents - a critical review.
Kopasz, Marta; Loessl, Barbara; Hornyak, Magdolna; Riemann, Dieter; Nissen, Christoph; Piosczyk, Hannah; Voderholzer, Ulrich
2010-06-01
There is mounting evidence that sleep is important for learning, memory and the underlying neural plasticity. This article aims to review published studies that evaluate the association between sleep, its distinct stages and memory systems in healthy children and adolescents. Furthermore it intends to suggest directions for future research. A computerised search of the literature for relevant articles published between 1966 and March 2008 was performed using the keywords "sleep", "memory", "learn", "child", "adolescents", "adolescence" and "teenager". Fifteen studies met the inclusion criteria. Published studies focused on the impact of sleep on working memory and memory consolidation. In summary, most studies support the hypothesis that sleep facilitates working memory as well as memory consolidation in children and adolescents. There is evidence that performance in abstract and complex tasks involving higher brain functions declines more strongly after sleep deprivation than the performance in simple memory tasks. Future studies are needed to better understand the impact of a variety of variables potentially modulating the interplay between sleep and memory, such as developmental stage, socioeconomic burden, circadian factors, or the level of post-learning sensory and motor activity (interference). This line of research can provide valuable input relevant to teaching, learning and public health policy. Copyright 2009 Elsevier Ltd. All rights reserved.
Evaluating variable rate fungicide applications for control of Sclerotinia
USDA-ARS?s Scientific Manuscript database
Oklahoma peanut growers continue to try to increase yields and reduce input costs. Perhaps the largest input in a peanut crop is fungicide applications. This is especially true for areas in the state that have high disease pressure from Sclerotinia. On average, a single fungicide application cost...
Human encroachment on the coastal zone has led to a rise in the delivery of nitrogen (N) to estuarine and near-shore waters. Potential routes of anthropogenic N inputs include export from estuaries, atmospheric deposition, and dissolved N inputs from groundwater outflow. Stable...
Learning a Novel Pattern through Balanced and Skewed Input
ERIC Educational Resources Information Center
McDonough, Kim; Trofimovich, Pavel
2013-01-01
This study compared the effectiveness of balanced and skewed input at facilitating the acquisition of the transitive construction in Esperanto, characterized by the accusative suffix "-n" and variable word order (SVO, OVS). Thai university students (N = 98) listened to 24 sentences under skewed (one noun with high token frequency) or…
Code of Federal Regulations, 2014 CFR
2014-10-01
... must be considered as essential variables: Number of passes; thickness of plate; heat input per pass... not be used. The number of passes, thickness of plate, and heat input per pass may not vary more than... machine heat processes, provided such surfaces are remelted in the subsequent welding process. Where there...
Code of Federal Regulations, 2013 CFR
2013-10-01
... must be considered as essential variables: Number of passes; thickness of plate; heat input per pass... not be used. The number of passes, thickness of plate, and heat input per pass may not vary more than... machine heat processes, provided such surfaces are remelted in the subsequent welding process. Where there...
Code of Federal Regulations, 2012 CFR
2012-10-01
... must be considered as essential variables: Number of passes; thickness of plate; heat input per pass... not be used. The number of passes, thickness of plate, and heat input per pass may not vary more than... machine heat processes, provided such surfaces are remelted in the subsequent welding process. Where there...
Code of Federal Regulations, 2011 CFR
2011-10-01
... must be considered as essential variables: Number of passes; thickness of plate; heat input per pass... not be used. The number of passes, thickness of plate, and heat input per pass may not vary more than... machine heat processes, provided such surfaces are remelted in the subsequent welding process. Where there...
Mu, Zhijian; Huang, Aiying; Ni, Jiupai; Xie, Deti
2014-01-01
Organic soils are an important source of N2O, but global estimates of these fluxes remain uncertain because measurements are sparse. We tested the hypothesis that N2O fluxes can be predicted from estimates of mineral nitrogen input, calculated from readily-available measurements of CO2 flux and soil C/N ratio. From studies of organic soils throughout the world, we compiled a data set of annual CO2 and N2O fluxes which were measured concurrently. The input of soil mineral nitrogen in these studies was estimated from applied fertilizer nitrogen and organic nitrogen mineralization. The latter was calculated by dividing the rate of soil heterotrophic respiration by soil C/N ratio. This index of mineral nitrogen input explained up to 69% of the overall variability of N2O fluxes, whereas CO2 flux or soil C/N ratio alone explained only 49% and 36% of the variability, respectively. Including water table level in the model, along with mineral nitrogen input, further improved the model with the explanatory proportion of variability in N2O flux increasing to 75%. Unlike grassland or cropland soils, forest soils were evidently nitrogen-limited, so water table level had no significant effect on N2O flux. Our proposed approach, which uses the product of soil-derived CO2 flux and the inverse of soil C/N ratio as a proxy for nitrogen mineralization, shows promise for estimating regional or global N2O fluxes from organic soils, although some further enhancements may be warranted.
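A minimal sketch of the paper's proxy, on synthetic numbers rather than the compiled flux data set: mineral N input is taken as fertilizer N plus heterotrophic respiration divided by the soil C/N ratio, and a one-variable regression reports the variance it explains.

```python
import numpy as np

# Sketch of the proxy described above: mineral N input = fertilizer N plus
# organic N mineralization, the latter estimated as heterotrophic
# respiration / (C/N). All values below are synthetic placeholders.
rng = np.random.default_rng(0)
n_sites = 60
fert_n = rng.uniform(0, 200, n_sites)        # kg N/ha/yr applied fertilizer
co2_flux = rng.uniform(500, 5000, n_sites)   # kg C/ha/yr heterotrophic respiration
cn_ratio = rng.uniform(12, 30, n_sites)      # soil C/N ratio

n_input = fert_n + co2_flux / cn_ratio       # proxy for mineral N input

# Simulate N2O responding to the proxy, then fit a one-variable model and
# report the fraction of variance it explains (the paper found up to 69%).
n2o = 0.01 * n_input * rng.lognormal(0, 0.4, n_sites)
slope, intercept = np.polyfit(n_input, n2o, 1)
pred = slope * n_input + intercept
r2 = 1 - np.sum((n2o - pred) ** 2) / np.sum((n2o - n2o.mean()) ** 2)
print(f"R^2 of the mineral-N-input proxy: {r2:.2f}")
```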
Straver, J M; Janssen, A F W; Linnemann, A R; van Boekel, M A J S; Beumer, R R; Zwietering, M H
2007-09-01
This study aimed to characterize the number of Salmonella on chicken breast filet at the retail level and to evaluate if this number affects the risk of salmonellosis. From October to December 2005, 220 chilled raw filets (without skin) were collected from five local retail outlets in The Netherlands. Filet rinses that were positive after enrichment were enumerated with a three-tube most-probable-number (MPN) assay. Nineteen filets (8.6%) were contaminated above the detection limit of the MPN method (10 Salmonella per filet). The number of Salmonella on positive filets varied from 1 to 3.81 log MPN per filet. The obtained enumeration data were applied in a risk assessment model. The model considered possible growth during domestic storage, cross-contamination from filet via a cutting board to lettuce, and possible illness due to consumption of the prepared lettuce. A screening analysis with expected-case and worst-case estimates for the input values of the model showed that variability in the inputs was relevant. Therefore, a Monte Carlo simulation with probability distributions for the inputs was carried out to predict the annual number of illnesses. Remarkably, over two-thirds of annual predicted illnesses were caused by the small fraction of filets containing more than 3 log Salmonella at retail (0.8% of all filets). The enumeration results can be used to confirm this hypothesis in a more elaborate risk assessment. Modeling of the supply chain can provide insight into possible intervention strategies to reduce the incidence of rare but extreme contamination levels. Reduction seems feasible within current practices, because the retail market study indicated a significant difference between suppliers.
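The Monte Carlo chain described (retail contamination, growth in storage, cross-contamination to lettuce, dose-response) can be sketched as below; all distributions and the dose-response slope are illustrative assumptions, not the study's fitted inputs.

```python
import numpy as np

# Hedged sketch of a Monte Carlo exposure chain like the one described:
# retail contamination -> growth in domestic storage -> cross-contamination
# to lettuce -> dose-response. All parameter values are illustrative.
rng = np.random.default_rng(1)
n = 100_000

log_retail = rng.normal(0.5, 1.0, n)          # log MPN Salmonella per filet
growth = rng.uniform(0.0, 1.5, n)             # log growth during storage
transfer = rng.beta(1, 20, n)                 # fraction moved to the lettuce
dose = 10 ** (log_retail + growth) * transfer # cells ingested with the salad

r = 2e-4                                      # hypothetical dose-response slope
p_ill = 1 - np.exp(-r * dose)
print("mean P(illness) per serving:", p_ill.mean())

# Share of predicted illnesses due to heavily contaminated filets (>3 log):
heavy = log_retail > 3
print("share from >3 log filets:", p_ill[heavy].sum() / p_ill.sum())
```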
Sensitivity analysis of a sound absorption model with correlated inputs
NASA Astrophysics Data System (ADS)
Chai, W.; Christen, J.-L.; Zine, A.-M.; Ichchou, M.
2017-04-01
Sound absorption in porous media is a complex phenomenon, which is usually addressed with homogenized models depending on macroscopic parameters. Since these parameters emerge from the structure at microscopic scale, they may be correlated. This paper deals with sensitivity analysis methods for a sound absorption model with correlated inputs. Specifically, the Johnson-Champoux-Allard (JCA) model is chosen as the objective model, with correlation effects generated by a secondary micro-macro semi-empirical model. To deal with this case, a relatively new sensitivity analysis method, the Fourier Amplitude Sensitivity Test with Correlation design (FASTC), based on Iman's transform, is applied. This method requires a priori information such as the variables' marginal distribution functions and their correlation matrix. The results are compared to the Correlation Ratio Method (CRM) for reference and validation. The distribution of the macroscopic variables arising from the microstructure, as well as their correlation matrix, are studied. Finally, the test results show that correlation has a very important impact on the outcome of sensitivity analysis. The influence of the correlation strength among input variables on the sensitivity analysis is also assessed.
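Iman's transform, on which FASTC rests, can be sketched in a few lines: draw independent samples with the desired marginals, then copy the rank structure of a correlated Gaussian sample onto them. The marginals and the target correlation below are illustrative stand-ins for JCA-type inputs, and this simplified version omits Iman and Conover's variance-correction step.

```python
import numpy as np

# Minimal sketch of Iman & Conover's rank-based transform: impose a target
# correlation on samples drawn from arbitrary marginal distributions.
rng = np.random.default_rng(7)
n = 5000
target_corr = np.array([[1.0, 0.6],
                        [0.6, 1.0]])

# Independent samples with the desired marginals (illustrative JCA-type inputs).
x = np.column_stack([rng.lognormal(0, 0.5, n),    # e.g. flow resistivity
                     rng.uniform(0.9, 1.0, n)])   # e.g. porosity

# Draw a correlated Gaussian "score" sample and copy its rank structure.
z = rng.multivariate_normal(np.zeros(2), target_corr, size=n)
for j in range(x.shape[1]):
    order = np.argsort(np.argsort(z[:, j]))       # ranks of the Gaussian column
    x[:, j] = np.sort(x[:, j])[order]             # reorder marginal to match

print("achieved correlation:\n", np.corrcoef(x, rowvar=False))
```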
Code of Federal Regulations, 2010 CFR
2010-07-01
..., Demand Side Variability, and Network Variability studies, including input data, processing programs, and... should include the product or product groups carried under each listed contract; (k) Spreadsheets and...
Neural architecture design based on extreme learning machine.
Bueno-Crespo, Andrés; García-Laencina, Pedro J; Sancho-Gómez, José-Luis
2013-12-01
Selection of the optimal neural architecture to solve a pattern classification problem entails choosing the relevant input units, the number of hidden neurons and their corresponding interconnection weights. This problem has been widely studied in many research works, but existing solutions usually involve excessive computational cost and do not provide a unique solution. This paper proposes a new technique to efficiently design the MultiLayer Perceptron (MLP) architecture for classification using the Extreme Learning Machine (ELM) algorithm. The proposed method provides a high generalization capability and a unique solution for the architecture design. Moreover, the selected final network only retains those input connections that are relevant for the classification task. Experimental results show these advantages. Copyright © 2013 Elsevier Ltd. All rights reserved.
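For context, the core ELM algorithm the paper builds on fits in a few lines: random input weights and biases, a nonlinear hidden layer, and output weights obtained in closed form by least squares. This is the generic algorithm, not the paper's architecture-selection procedure; the toy problem is invented.

```python
import numpy as np

# Bare-bones Extreme Learning Machine: random hidden layer, closed-form
# least-squares output weights.
rng = np.random.default_rng(3)

def elm_train(X, y, n_hidden=50):
    W = rng.normal(size=(X.shape[1], n_hidden))   # random input weights
    b = rng.normal(size=n_hidden)                 # random hidden biases
    H = np.tanh(X @ W + b)                        # hidden activations
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)  # analytic output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Toy two-class problem: labels in {-1, +1}, threshold the real output.
X = rng.normal(size=(200, 4))
y = np.sign(X[:, 0] * X[:, 1] + 0.1 * rng.normal(size=200))
W, b, beta = elm_train(X, y)
acc = np.mean(np.sign(elm_predict(X, W, b, beta)) == y)
print("training accuracy:", acc)
```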
The Comparison of Visual Working Memory Representations with Perceptual Inputs
Hyun, Joo-seok; Woodman, Geoffrey F.; Vogel, Edward K.; Hollingworth, Andrew
2008-01-01
The human visual system can notice differences between memories of previous visual inputs and perceptions of new visual inputs, but the comparison process that detects these differences has not been well characterized. This study tests the hypothesis that differences between the memory of a stimulus array and the perception of a new array are detected in a manner that is analogous to the detection of simple features in visual search tasks. That is, just as the presence of a task-relevant feature in visual search can be detected in parallel, triggering a rapid shift of attention to the object containing the feature, the presence of a memory-percept difference along a task-relevant dimension can be detected in parallel, triggering a rapid shift of attention to the changed object. Supporting evidence was obtained in a series of experiments that examined manual reaction times, saccadic reaction times, and event-related potential latencies. However, these experiments also demonstrated that a slow, limited-capacity process must occur before the observer can make a manual change-detection response. PMID:19653755
DOE Office of Scientific and Technical Information (OSTI.GOV)
Na, Man Gyun; Oh, Seungrohk
A neuro-fuzzy inference system combined with the wavelet denoising, principal component analysis (PCA), and sequential probability ratio test (SPRT) methods has been developed to monitor the relevant sensor using the information of other sensors. The parameters of the neuro-fuzzy inference system that estimates the relevant sensor signal are optimized by a genetic algorithm and a least-squares algorithm. The wavelet denoising technique was applied to remove noise components from the input signals to the neuro-fuzzy system. By reducing the dimension of the input space of the neuro-fuzzy system without losing a significant amount of information, the PCA was used to reduce the time necessary to train the neuro-fuzzy system, simplify the structure of the neuro-fuzzy inference system, and ease the selection of the input signals to the neuro-fuzzy system. Using the residual signals between the estimated signals and the measured signals, the SPRT is applied to detect whether the sensors are degraded or not. The proposed sensor-monitoring algorithm was verified through applications to the pressurizer water level, the pressurizer pressure, and the hot-leg temperature sensors in pressurized water reactors.
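The SPRT step alone can be sketched as follows: accumulate the log-likelihood ratio of "drifted" versus "healthy" Gaussian residuals and compare it against Wald's classic thresholds. All parameter values are illustrative.

```python
import numpy as np

# Sketch of the SPRT step only: decide from residuals (measured - estimated)
# whether a sensor has drifted. Gaussian likelihoods with a hypothesized
# drift m; thresholds follow Wald's bounds.
rng = np.random.default_rng(5)
sigma, m = 0.1, 0.3                      # residual noise std, drift to detect
alpha, beta = 0.01, 0.01                 # false/missed alarm probabilities
upper = np.log((1 - beta) / alpha)       # accept "drifted"
lower = np.log(beta / (1 - alpha))       # accept "healthy"

residuals = rng.normal(0.0, sigma, 200)  # healthy sensor stream
residuals[100:] += m                     # drift begins halfway through

llr = 0.0
for t, r in enumerate(residuals):
    # log-likelihood ratio of N(m, sigma^2) vs N(0, sigma^2) for one sample
    llr += (m / sigma**2) * (r - m / 2)
    if llr >= upper:
        print("drift declared at sample", t); break
    if llr <= lower:
        llr = 0.0                        # restart the test after acceptance
```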
Investigation of energy management strategies for photovoltaic systems - An analysis technique
NASA Technical Reports Server (NTRS)
Cull, R. C.; Eltimsahy, A. H.
1982-01-01
Progress is reported in formulating energy management strategies for stand-alone PV systems, developing an analytical tool that can be used to investigate these strategies, applying this tool to determine the proper control algorithms and control variables (controller inputs and outputs) for a range of applications, and quantifying the relative performance and economics when compared to systems that do not apply energy management. The analysis technique developed may be broadly applied to a variety of systems to determine the most appropriate energy management strategies, control variables and algorithms. The only inputs required are statistical distributions for stochastic energy inputs and outputs of the system and the system's device characteristics (efficiency and ratings). Although the formulation was originally driven by stand-alone PV system needs, the techniques are also applicable to hybrid and grid connected systems.
Troyer, T W; Miller, K D
1997-07-01
To understand the interspike interval (ISI) variability displayed by visual cortical neurons (Softky & Koch, 1993), it is critical to examine the dynamics of their neuronal integration, as well as the variability in their synaptic input current. Most previous models have focused on the latter factor. We match a simple integrate-and-fire model to the experimentally measured integrative properties of cortical regular spiking cells (McCormick, Connors, Lighthall, & Prince, 1985). After setting RC parameters, the post-spike voltage reset is set to match experimental measurements of neuronal gain (obtained from in vitro plots of firing frequency versus injected current). Examination of the resulting model leads to an intuitive picture of neuronal integration that unifies the seemingly contradictory 1/√N and random walk pictures that have previously been proposed. When ISIs are dominated by postspike recovery, 1/√N arguments hold and spiking is regular; after the "memory" of the last spike becomes negligible, spike threshold crossing is caused by input variance around a steady state and spiking is Poisson. In integrate-and-fire neurons matched to cortical cell physiology, steady-state behavior is predominant, and ISIs are highly variable at all physiological firing rates and for a wide range of inhibitory and excitatory inputs.
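A minimal leaky integrate-and-fire sketch in the spirit of this model (parameters are illustrative, not the fitted cortical values): RC dynamics with a fixed threshold and post-spike reset, driven by a mean input plus Gaussian fluctuations, from which the ISI coefficient of variation is measured.

```python
import numpy as np

# Leaky integrate-and-fire with a post-spike voltage reset, driven by a mean
# depolarization plus white-noise voltage fluctuations. Parameter values are
# illustrative; the point is the ISI coefficient of variation (CV).
rng = np.random.default_rng(11)
dt, tau = 1e-4, 10e-3                            # time step, membrane tau (s)
v_rest, v_th, v_reset = -70e-3, -54e-3, -60e-3   # volts
drive = 17e-3                                    # mean depolarization R*I (V)
sigma = 0.06                                     # noise intensity (V/sqrt(s))

v, t, spikes = v_rest, 0.0, []
for _ in range(int(5.0 / dt)):                   # simulate 5 s
    v += (dt / tau) * (v_rest - v + drive) + sigma * np.sqrt(dt) * rng.normal()
    t += dt
    if v >= v_th:
        spikes.append(t)
        v = v_reset                              # post-spike reset

isi = np.diff(spikes)
print(f"{len(spikes)} spikes, ISI CV = {isi.std() / isi.mean():.2f}")
```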
Analysis on electronic control unit of continuously variable transmission
NASA Astrophysics Data System (ADS)
Cao, Shuanggui
Continuously variable transmission (CVT) systems can ensure that the engine works along the line of best fuel economy, improving fuel economy, saving fuel and reducing harmful gas emissions. At the same time, a continuously variable transmission makes the vehicle speed smoother and improves ride comfort. Although CVT technology has developed considerably, it still has many shortcomings. Current CVT systems in ordinary vehicles still suffer from low efficiency, poor starting performance, limited transmitted power, imperfect control and high cost. Therefore, many scholars have begun to study new types of continuously variable transmission. A transmission with electronic control can automate power transmission and exploit the engine characteristics to achieve optimal powertrain control, so that the vehicle always operates near its best condition. The electronic control unit is composed of a core processor, input and output circuit modules and other auxiliary circuit modules. The input module collects and conditions the many signals sent by sensors, such as throttle angle, brake signal, engine speed, speed of the transmission input and output shafts, manual shift signals, mode selection signals, gear position and speed ratio, so as to provide the corresponding processing for the controller core.
Simple Sensitivity Analysis for Orion GNC
NASA Technical Reports Server (NTRS)
Pressburger, Tom; Hoelscher, Brian; Martin, Rodney; Sricharan, Kumar
2013-01-01
The performance of Orion flight software, especially its GNC software, is being analyzed by running Monte Carlo simulations of Orion spacecraft flights. The simulated performance is analyzed for conformance with flight requirements, expressed as performance constraints. Flight requirements include guidance (e.g., touchdown distance from target) and control (e.g., control saturation) as well as performance (e.g., heat load constraints). The Monte Carlo simulations disperse hundreds of simulation input variables, for everything from mass properties to date of launch. We describe in this paper a sensitivity analysis tool (Critical Factors Tool or CFT) developed to find the input variables or pairs of variables which by themselves significantly influence satisfaction of requirements or significantly affect key performance metrics (e.g., touchdown distance from target). Knowing these factors can inform robustness analysis, can inform where engineering resources are most needed, and could even affect operations. The contributions of this paper include the introduction of novel sensitivity measures, such as estimating success probability, and a technique for determining whether pairs of factors are interacting dependently or independently. Input variables such as moments, mass, thrust dispersions, and date of launch were found to be significant factors for success of various requirements. Examples are shown in this paper as well as a summary and physics discussion of EFT-1 driving factors that the tool found.
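One of the simplest sensitivity screens in this spirit, estimating success probability within quantile bins of a single dispersed input, can be sketched as follows; the toy model and requirement threshold are invented, and this is not the NASA tool itself.

```python
import numpy as np

# Binned success-probability screen: if P(success) varies strongly across
# bins of one dispersed input, that input is a candidate driving factor.
rng = np.random.default_rng(2)
n = 20_000
mass = rng.normal(1.0, 0.05, n)              # hypothetical dispersed input
thrust = rng.normal(1.0, 0.08, n)
miss = (500 * np.abs(mass - 1) + 300 * np.abs(thrust - 1)
        + np.abs(rng.normal(0, 10, n)))      # toy touchdown miss distance
success = miss < 40.0                        # invented requirement

bins = np.quantile(mass, np.linspace(0, 1, 6))
for lo, hi in zip(bins[:-1], bins[1:]):
    sel = (mass >= lo) & (mass < hi)
    print(f"mass in [{lo:.3f}, {hi:.3f}):  P(success) = {success[sel].mean():.2f}")
```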
Guzmán-Bárcenas, José; Hernández, José Alfredo; Arias-Martínez, Joel; Baptista-González, Héctor; Ceballos-Reyes, Guillermo; Irles, Claudine
2016-07-21
Leptin and insulin levels are key factors regulating fetal and neonatal energy homeostasis, development and growth. Both biomarkers are used as predictors of weight gain and obesity during infancy. There are currently no prediction algorithms for umbilical cord blood (UCB) hormone levels using Artificial Neural Networks (ANN) that have been directly trained with anthropometric maternal and neonatal data from neonates exposed to distinct metabolic environments during pregnancy (obese with or without gestational diabetes mellitus, or lean women). The aims were: (1) to develop ANN models that simulate leptin and insulin concentrations in UCB based on maternal and neonatal data (ANN perinatal model) or from only maternal data during early gestation (ANN prenatal model); (2) to evaluate the biological relevance of each parameter (maternal and neonatal anthropometric variables). We collected maternal and neonatal anthropometric data (n = 49) from normoglycemic healthy lean women and obese women with or without gestational diabetes mellitus, and determined UCB leptin and insulin concentrations by ELISA. The ANN perinatal model consisted of an input layer of 12 variables (maternal and neonatal anthropometric and biochemical data from early gestation and at term) while the ANN prenatal model used only 6 variables (maternal anthropometrics from early gestation) in the input layer. For both networks, the output layer contained 1 variable, the UCB leptin or the UCB insulin concentration. The best architectures for the ANN perinatal models estimating leptin and insulin were 12-5-1, while for the ANN prenatal models 6-5-1 and 6-4-1 were found for leptin and insulin, respectively. The ANN models showed excellent agreement between experimental and simulated values. Interestingly, the use of only prenatal maternal anthropometric data was sufficient to estimate UCB leptin and insulin values. Maternal BMI, weight and age as well as neonatal birth weight were the most influential parameters for leptin, while maternal morbidity was the most significant factor for insulin prediction. The low error percentage and short computing time make these ANN models interesting in a translational research setting, to be applied for the prediction of neonatal leptin and insulin values from maternal anthropometric data, and possibly for on-line estimation during pregnancy.
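A sketch of the prenatal 6-5-1 configuration (6 maternal inputs, 5 hidden neurons, 1 output) is shown below on synthetic data; the variable list and the generating relationship are assumptions for illustration, not the study's data.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

# 6-5-1 feed-forward network on invented maternal covariates and a toy
# leptin target; only the architecture mirrors the abstract.
rng = np.random.default_rng(4)
n = 49
X = np.column_stack([
    rng.normal(28, 5, n),       # maternal age (yr) - assumed variable
    rng.normal(65, 12, n),      # weight (kg)
    rng.normal(1.60, 0.06, n),  # height (m)
    rng.normal(26, 5, n),       # BMI
    rng.normal(110, 10, n),     # systolic BP
    rng.integers(0, 2, n),      # morbidity flag
]).astype(float)
leptin = 0.4 * X[:, 3] + 0.05 * X[:, 1] + rng.normal(0, 1, n)  # toy target

Xs = StandardScaler().fit_transform(X)
model = MLPRegressor(hidden_layer_sizes=(5,), max_iter=5000, random_state=0)
model.fit(Xs, leptin)
print("R^2 on the training sample:", round(model.score(Xs, leptin), 2))
```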
Spatial patterns of throughfall isotopic composition at the event and seasonal timescales
Scott T. Allen; Richard F. Keim; Jeffrey J. McDonnell
2015-01-01
Spatial variability of throughfall isotopic composition in forests is indicative of complex processes occurring in the canopy and remains insufficiently understood to properly characterize precipitation inputs to the catchment water balance. Here we investigate variability of throughfall isotopic composition with the objectives: (1) to quantify the spatial variability...
NASA Astrophysics Data System (ADS)
Garousi Nejad, I.; He, S.; Tang, Q.; Ogden, F. L.; Steinke, R. C.; Frazier, N.; Tarboton, D. G.; Ohara, N.; Lin, H.
2017-12-01
Spatial scale is one of the main considerations in hydrological modeling of snowmelt in mountainous areas. The size of model elements controls the degree to which variability can be explicitly represented, versus what needs to be parameterized using effective properties such as averages or other subgrid variability parameterizations that may degrade the quality of model simulations. For snowmelt modeling, terrain parameters such as slope, aspect, vegetation and elevation play an important role in the timing and quantity of snowmelt that serves as an input to hydrologic runoff generation processes. In general, higher resolution enhances the accuracy of the simulation, since fine meshes represent and preserve the spatial variability of atmospheric and surface characteristics better than coarse resolution. However, this increases computational cost, and there may be a scale beyond which the model response does not improve due to diminishing sensitivity to variability and irreducible uncertainty associated with the spatial interpolation of inputs. This paper examines the influence of spatial resolution on the snowmelt process using simulations of and data from the Animas River watershed, an alpine mountainous area in Colorado, USA, with an unstructured distributed physically based hydrological model developed for a parallel computing environment, ADHydro. Five spatial resolutions (30 m, 100 m, 250 m, 500 m, and 1 km) were used to investigate the variations in hydrologic response. This study demonstrated the importance of choosing the appropriate spatial scale in the implementation of ADHydro to obtain a balance between representing spatial variability and the computational cost. According to the results, variation in the input variables and parameters due to using different spatial resolutions resulted in changes in the obtained hydrological variables, especially snowmelt, both at the basin scale and distributed across the model mesh.
NASA Astrophysics Data System (ADS)
Srinivas, Kadivendi; Vundavilli, Pandu R.; Manzoor Hussain, M.; Saiteja, M.
2016-09-01
Welding input parameters such as current, gas flow rate and torch angle play a significant role in determining the qualitative mechanical properties of a weld joint. Traditionally, it is necessary to determine the weld input parameters for every new welded product to obtain a quality weld joint, which is time consuming. In the present work, the effect of plasma arc welding parameters on mild steel was studied using a neural network approach. To obtain a response equation that governs the input-output relationships, conventional regression analysis was also performed. The experimental data were constructed based on a Taguchi design, and the training data required for the neural networks were randomly generated by varying the input variables within their respective ranges. The responses were calculated for each combination of input variables by using the response equations obtained through the conventional regression analysis. The performance of a Levenberg-Marquardt back-propagation neural network and a radial basis neural network (RBNN) was compared on various randomly generated test cases, which are different from the training cases. From the results, it is interesting to note that for these test cases the RBNN analysis gave improved results compared to the feed-forward back-propagation neural network analysis. Also, the RBNN analysis showed a pattern of increasing performance as the data points moved away from the initial input values.
Baken, Stijn; Degryse, Fien; Verheyen, Liesbeth; Merckx, Roel; Smolders, Erik
2011-04-01
Dissolved organic matter (DOM) in surface waters affects the fate and environmental effects of trace metals. We measured variability in the Cd, Cu, Ni, and Zn affinity of 23 DOM samples isolated by reverse osmosis from freshwaters in natural, agricultural, and urban areas. Affinities at uniform pH and ionic composition were assayed at low, environmentally relevant free Cd, Cu, Ni, and Zn activities. The C-normalized metal binding of DOM varied 4-fold (Cu) or about 10-fold (Cd, Ni, Zn) among samples. The dissolved organic carbon concentration ranged only 9-fold in the waters, illustrating that DOM quality is an equally important parameter for metal complexation as DOM quantity. The UV-absorbance of DOM explained metal affinity only for waters receiving few urban inputs, indicating that in those waters, aromatic humic substances are the dominant metal chelators. Larger metal affinities were found for DOM from waters with urban inputs. Aminopolycarboxylate ligands (mainly EDTA) were detected at concentrations up to 0.14 μM and partly explained the larger metal affinity. Nickel concentrations in these surface waters are strongly related to EDTA concentrations (R² = 0.96) and this is underpinned by speciation calculations. It is concluded that metal complexation in waters with anthropogenic discharges is larger than that estimated with models that only take into account binding on humic substances.
Application of uncertainty and sensitivity analysis to the air quality SHERPA modelling tool
NASA Astrophysics Data System (ADS)
Pisoni, E.; Albrecht, D.; Mara, T. A.; Rosati, R.; Tarantola, S.; Thunis, P.
2018-06-01
Air quality has significantly improved in Europe over the past few decades. Nonetheless we still find high concentrations in measurements, mainly in specific regions or cities. This dimensional shift, from EU-wide to hot-spot exceedances, calls for a novel approach to regional air quality management (to complement existing EU-wide policies). The SHERPA (Screening for High Emission Reduction Potentials on Air quality) modelling tool was developed in this context. It provides an additional tool to be used in support of regional/local decision makers responsible for the design of air quality plans. It is therefore important to evaluate the quality of the SHERPA model, and its behavior in the face of various kinds of uncertainty. Uncertainty and sensitivity analysis techniques can be used for this purpose. They both reveal the links between assumptions and forecasts, help in model simplification and may highlight unexpected relationships between inputs and outputs. Thus, a policy-steered SHERPA module - predicting air quality improvement linked to emission reduction scenarios - was evaluated by means of (1) uncertainty analysis (UA) to quantify uncertainty in the model output, and (2) sensitivity analysis (SA) to identify the most influential input sources of this uncertainty. The results of this study provide relevant information about the key variables driving the SHERPA output uncertainty, and advise policy-makers and modellers where to place their efforts for an improved decision-making process.
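A generic version of the UA/SA pair on a stand-in model (not SHERPA's source-receptor relationship) might look like this: Monte Carlo propagation gives the output uncertainty, and a binned correlation-ratio estimate approximates each input's first-order sensitivity index.

```python
import numpy as np

# Monte Carlo UA plus a binned first-order SA estimate; the model y = x1*x2 + x3
# is an illustrative stand-in with invented input distributions.
rng = np.random.default_rng(8)
n, n_bins = 50_000, 40
x1 = rng.normal(1.0, 0.2, n)      # e.g. uncertain emission-reduction factor
x2 = rng.uniform(0.5, 1.5, n)     # e.g. uncertain transfer coefficient
x3 = rng.normal(0.0, 0.1, n)      # e.g. background term
y = x1 * x2 + x3                  # stand-in model output

print("UA:  mean =", y.mean().round(3), " std =", y.std().round(3))

def first_order_index(x, y, n_bins):
    # Var(E[y|x]) / Var(y), estimated by averaging y within quantile bins of x.
    edges = np.quantile(x, np.linspace(0, 1, n_bins + 1))
    idx = np.clip(np.searchsorted(edges, x, side="right") - 1, 0, n_bins - 1)
    cond_means = np.array([y[idx == b].mean() for b in range(n_bins)])
    counts = np.bincount(idx, minlength=n_bins)
    return np.sum(counts * (cond_means - y.mean()) ** 2) / (n * y.var())

for name, x in [("x1", x1), ("x2", x2), ("x3", x3)]:
    print(f"SA:  S_{name} ~ {first_order_index(x, y, n_bins):.2f}")
```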
Kamiyama, Akikazu; Fujita, Kazuhisa; Kashimori, Yoshiki
2016-12-01
Visual recognition involves bidirectional information flow, which consists of bottom-up information coding from the retina and top-down information coding from higher visual areas. Recent studies have demonstrated the involvement of early visual areas such as the primary visual area (V1) in recognition and memory formation. V1 neurons are not passive transformers of sensory inputs but work as adaptive processors, changing their function according to behavioral context. Top-down signals affect the tuning properties of V1 neurons and contribute to the gating of sensory information relevant to behavior. However, little is known about the neuronal mechanism underlying the gating of task-relevant information in V1. To address this issue, we focus on task-dependent tuning modulations of V1 neurons in two tasks of perceptual learning. We develop a model of V1, which receives feedforward input from the lateral geniculate nucleus and top-down input from a higher visual area. We show here that a change in the balance between excitation and inhibition in V1 connectivity is necessary for gating task-relevant information in V1. The balance change accounts well for the modulations of the tuning characteristics and temporal properties of V1 neuronal responses. We also show that the balance change of V1 connectivity is shaped by top-down signals with temporal correlations reflecting the perceptual strategies of the two tasks. We propose a learning mechanism by which the synaptic balance is modulated. To conclude, top-down signals change the synaptic balance between excitation and inhibition in V1 connectivity, enabling an early visual area such as V1 to gate context-dependent information under multiple task performances. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Attention-driven auditory cortex short-term plasticity helps segregate relevant sounds from noise
Ahveninen, Jyrki; Hämäläinen, Matti; Jääskeläinen, Iiro P.; Ahlfors, Seppo P.; Huang, Samantha; Raij, Tommi; Sams, Mikko; Vasios, Christos E.; Belliveau, John W.
2011-01-01
How can we concentrate on relevant sounds in noisy environments? A “gain model” suggests that auditory attention simply amplifies relevant and suppresses irrelevant afferent inputs. However, it is unclear whether this suffices when attended and ignored features overlap to stimulate the same neuronal receptive fields. A “tuning model” suggests that, in addition to gain, attention modulates feature selectivity of auditory neurons. We recorded magnetoencephalography, EEG, and functional MRI (fMRI) while subjects attended to tones delivered to one ear and ignored opposite-ear inputs. The attended ear was switched every 30 s to quantify how quickly the effects evolve. To produce overlapping inputs, the tones were presented alone vs. during white-noise masking notch-filtered ±1/6 octaves around the tone center frequencies. Amplitude modulation (39 vs. 41 Hz in opposite ears) was applied for “frequency tagging” of attention effects on maskers. Noise masking reduced early (50–150 ms; N1) auditory responses to unattended tones. In support of the tuning model, selective attention canceled out this attenuating effect but did not modulate the gain of 50–150 ms activity to nonmasked tones or steady-state responses to the maskers themselves. These tuning effects originated at nonprimary auditory cortices, purportedly occupied by neurons that, without attention, have wider frequency tuning than ±1/6 octaves. The attentional tuning evolved rapidly, during the first few seconds after attention switching, and correlated with behavioral discrimination performance. In conclusion, a simple gain model alone cannot explain auditory selective attention. In nonprimary auditory cortices, attention-driven short-term plasticity retunes neurons to segregate relevant sounds from noise. PMID:21368107
MODELING OF HUMAN EXPOSURE TO IN-VEHICLE PM2.5 FROM ENVIRONMENTAL TOBACCO SMOKE
Cao, Ye; Frey, H. Christopher
2012-01-01
Environmental tobacco smoke (ETS) is estimated to be a significant contributor to in-vehicle human exposure to fine particulate matter of 2.5 µm or smaller (PM2.5). A critical assessment was conducted of a mass balance model for estimating PM2.5 concentration with smoking in a motor vehicle. Recommendations for the range of inputs to the mass-balance model are given based on literature review. Sensitivity analysis was used to determine which inputs should be prioritized for data collection. Air exchange rate (ACH) and the deposition rate have wider relative ranges of variation than other inputs, representing inter-individual variability in operations, and inter-vehicle variability in performance, respectively. Cigarette smoking and emission rates, and vehicle interior volume, are also key inputs. The in-vehicle ETS mass balance model was incorporated into the Stochastic Human Exposure and Dose Simulation for Particulate Matter (SHEDS-PM) model to quantify the potential magnitude and variability of in-vehicle exposures to ETS. The in-vehicle exposure also takes into account near-road incremental PM2.5 concentration from on-road emissions. Results of the probabilistic study indicate that ETS is a key contributor to the in-vehicle average and high-end exposure. Factors that mitigate in-vehicle ambient PM2.5 exposure lead to higher in-vehicle ETS exposure, and vice versa. PMID:23060732
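The single-zone mass balance underlying such a model, dC/dt = E/V − (ACH + k_dep)·C, can be integrated directly; the parameter values below are illustrative mid-range choices, not the paper's recommendations.

```python
import numpy as np

# Euler integration of a single-zone in-cabin mass balance:
# dC/dt = E/V - (ACH + k_dep) * C, with C the PM2.5 concentration.
V = 3.0          # cabin volume (m^3) - illustrative
ACH = 10.0       # air exchange rate (1/h)
k_dep = 1.0      # deposition rate (1/h)
E = 120.0        # emission rate while a cigarette burns (mg/h)

dt, T = 1 / 3600, 0.5                 # 1-s steps, half an hour
t_grid = np.arange(0, T, dt)
C = np.zeros_like(t_grid)             # mg/m^3
for i in range(1, len(t_grid)):
    smoking = t_grid[i] < 5 / 60      # one 5-minute cigarette at the start
    source = E / V if smoking else 0.0
    C[i] = C[i - 1] + dt * (source - (ACH + k_dep) * C[i - 1])

print("peak PM2.5: %.0f ug/m^3" % (C.max() * 1000))
```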
Howard, Daniel M.; Wylie, Bruce K.; Tieszen, Larry L.
2012-01-01
With an ever expanding population, potential climate variability and an increasing demand for agriculture-based alternative fuels, accurate agricultural land-cover classification for specific crops and their spatial distributions are becoming critical to researchers, policymakers, land managers and farmers. It is important to ensure the sustainability of these and other land uses and to quantify the net impacts that certain management practices have on the environment. Although other quality crop classification products are often available, temporal and spatial coverage gaps can create complications for certain regional or time-specific applications. Our goal was to develop a model capable of classifying major crops in the Greater Platte River Basin (GPRB) for the post-2000 era to supplement existing crop classification products. This study identifies annual spatial distributions and area totals of corn, soybeans, wheat and other crops across the GPRB from 2000 to 2009. We developed a regression tree classification model based on 2.5 million training data points derived from the National Agricultural Statistics Service (NASS) Cropland Data Layer (CDL) in relation to a variety of other relevant input environmental variables. The primary input variables included the weekly 250 m US Geological Survey Earth Observing System Moderate Resolution Imaging Spectroradiometer normalized difference vegetation index, average long-term growing season temperature, average long-term growing season precipitation and yearly start of growing season. An overall model accuracy rating of 78% was achieved for a test sample of roughly 215 000 independent points that were withheld from model training. Ten 250 m resolution annual crop classification maps were produced and evaluated for the GPRB region, one for each year from 2000 to 2009. In addition to the model accuracy assessment, our validation focused on spatial distribution and county-level crop area totals in comparison with the NASS CDL and county statistics from the US Department of Agriculture (USDA) Census of Agriculture. The results showed that our model produced crop classification maps that closely resembled the spatial distribution trends observed in the NASS CDL and exhibited a close linear agreement with county-by-county crop area totals from USDA census data (R² = 0.90).
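A toy stand-in for this kind of tree-based crop classifier, with synthetic features, invented labeling rules, and scikit-learn in place of the study's regression tree software:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split

# Synthetic NDVI-like and climate features predicting a crop label; feature
# names and generating rules are assumptions for illustration.
rng = np.random.default_rng(9)
n = 5000
ndvi_peak = rng.uniform(0.3, 0.9, n)      # season-peak NDVI
season_temp = rng.normal(20, 3, n)        # growing-season temperature (C)
season_precip = rng.normal(450, 100, n)   # growing-season precipitation (mm)
X = np.column_stack([ndvi_peak, season_temp, season_precip])

# Simple synthetic labels: 0 = wheat, 1 = corn, 2 = soybeans.
y = np.where(ndvi_peak < 0.5, 0, np.where(season_precip > 450, 1, 2))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
tree = DecisionTreeClassifier(max_depth=6, random_state=0).fit(X_tr, y_tr)
print("held-out accuracy:", round(tree.score(X_te, y_te), 2))
```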
Propagation of variability in railway dynamic simulations: application to virtual homologation
NASA Astrophysics Data System (ADS)
Funfschilling, Christine; Perrin, Guillaume; Kraft, Sönke
2012-01-01
Railway dynamic simulations are increasingly used to predict and analyse the behaviour of the vehicle and of the track during their whole life cycle. Up to now, however, no simulation has been used in the certification procedure, even though the expected benefits are important: cheaper and shorter procedures, more objectivity, better knowledge of the behaviour around critical situations. Deterministic simulations are nevertheless too poor to represent the whole physics of the track/vehicle system, which contains several sources of variability: variability of the mechanical parameters of a train among a class of vehicles (mass, stiffness and damping of different suspensions), variability of the contact parameters (friction coefficient, wheel and rail profiles) and variability of the track design and quality. This variability plays an important role in the safety and the ride quality, and thus in the certification criteria. When using simulation for certification purposes, it seems therefore crucial to take into account the variability of the different inputs. The main goal of this article is thus to propose a method to introduce variability in railway dynamics. A four-step method is described, namely the definition of the stochastic problem, the modelling of the input variability, the propagation and the analysis of the output. Each step is illustrated with railway examples.
African crop yield reductions due to increasingly unbalanced Nitrogen and Phosphorus consumption
NASA Astrophysics Data System (ADS)
van der Velde, Marijn; Folberth, Christian; Balkovič, Juraj; Ciais, Philippe; Fritz, Steffen; Janssens, Ivan A.; Obersteiner, Michael; See, Linda; Skalský, Rastislav; Xiong, Wei; Peñuealas, Josep
2014-05-01
The impact of soil nutrient depletion on crop production has been known for decades, but robust assessments of the impact of increasingly unbalanced nitrogen (N) and phosphorus (P) application rates on crop production are lacking. Here, we use crop response functions based on 741 FAO maize crop trials and EPIC crop modeling across Africa to examine maize yield deficits resulting from unbalanced N:P applications under low, medium, and high input scenarios, for past (1975), current, and future N:P mass ratios of, respectively, 1:0.29, 1:0.15, and 1:0.05. At low N inputs (10 kg/ha), current yield deficits amount to 10% but will increase up to 27% under the assumed future N:P ratio, while at medium N inputs (50 kg N/ha), future yield losses could amount to over 40%. The EPIC crop model was then used to simulate maize yields across Africa. The model results showed relative median future yield reductions at low N inputs of 40%, and 50% at medium and high inputs, albeit with large spatial variability. Dominant low-quality soils such as Ferralsols, which strongly adsorb P, and Arenosols, with a low nutrient retention capacity, are associated with a strong yield decline, although Arenosols show very variable crop yield losses at low inputs. Optimal N:P ratios, i.e. those where the lowest amount of applied P produces the highest yield (given N input), were calculated with EPIC to be as low as 1:0.5. Finally, we estimated the additional P required given current N inputs, and given N inputs that would allow Africa to close yield gaps (ca. 70%). At current N inputs, P consumption would have to increase 2.3-fold to be optimal, and 11.7-fold to close yield gaps. The P demand to overcome these yield deficits would put significant additional pressure on current global extraction of P resources.
Thomas E. Dilts; Peter J. Weisberg; Camie M. Dencker; Jeanne C. Chambers
2015-01-01
We have three goals. (1) To develop a suite of functionally relevant climate variables for modelling vegetation distribution on arid and semi-arid landscapes of the Great Basin, USA. (2) To compare the predictive power of vegetation distribution models based on mechanistically proximate factors (water deficit variables) and factors that are more mechanistically removed...
Application of neural networks and sensitivity analysis to improved prediction of trauma survival.
Hunter, A; Kennedy, L; Henry, J; Ferguson, I
2000-05-01
The performance of trauma departments is widely audited by applying predictive models that assess probability of survival, and examining the rate of unexpected survivals and deaths. Although the TRISS methodology, a logistic regression modelling technique, is still the de facto standard, it is known that neural network models perform better. A key issue when applying neural network models is the selection of input variables. This paper proposes a novel form of sensitivity analysis, which is simpler to apply than existing techniques, and can be used for both numeric and nominal input variables. The technique is applied to the audit survival problem, and used to analyse the TRISS variables. The conclusions discuss the implications for the design of further improved scoring schemes and predictive models.
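A generic input-sensitivity screen of this flavor, shown here as a permutation test rather than the paper's specific measure, scrambles one input at a time and records the accuracy drop; it treats numeric and integer-coded nominal variables identically, the property the paper emphasizes.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Permutation-style sensitivity for a trained network on synthetic data:
# a large accuracy drop after scrambling a column marks a relevant input.
rng = np.random.default_rng(6)
n = 2000
X = rng.normal(size=(n, 5))
X[:, 3] = rng.integers(0, 3, n)             # a nominal input, integer-coded
y = ((X[:, 0] + 0.5 * X[:, 3] + 0.2 * rng.normal(size=n)) > 0).astype(int)

net = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
net.fit(X, y)
base = net.score(X, y)
for j in range(X.shape[1]):
    Xp = X.copy()
    Xp[:, j] = rng.permutation(Xp[:, j])
    print(f"input {j}: accuracy drop = {base - net.score(Xp, y):.3f}")
```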
USDA-ARS?s Scientific Manuscript database
Precipitation patterns and nutrient inputs impact transport of nitrate (NO3-N) and phosphorus (TP) from Midwest watersheds. Nutrient concentrations and yields from two subsurface-drained watersheds, the Little Cobb River (LCR) in southern Minnesota and the South Fork Iowa River (SFIR) in northern Io...
Software development guidelines
NASA Technical Reports Server (NTRS)
Kovalevsky, N.; Underwood, J. M.
1979-01-01
Analysis, modularization, flowcharting, existing programs and subroutines, compatibility, input and output data, adaptability to checkout, and general-purpose subroutines are summarized. Statement ordering and numbering, specification statements, variable names, arrays, arithmetical expressions and statements, control statements, input/output, and subroutines are outlined. Intermediate results, desk checking, checkout data, dumps, storage maps, diagnostics, and program timing are reviewed.
Testing an Instructional Model in a University Educational Setting from the Student's Perspective
ERIC Educational Resources Information Center
Betoret, Fernando Domenech
2006-01-01
We tested a theoretical model that hypothesized relationships between several variables from input, process and product in an educational setting, from the university student's perspective, using structural equation modeling. In order to carry out the analysis, we measured in sequential order the input (referring to students' personal…
Estuaries in the Pacific Northwest have major intraannual and within estuary variation in sources and magnitudes of nutrient inputs. To develop an approach for setting nutrient criteria for these systems, we conducted a case study for Yaquina Bay, OR based on a synthesis of resea...
NASA Technical Reports Server (NTRS)
Dash, S. M.; Pergament, H. S.
1978-01-01
The basic code structure is discussed, including the overall program flow and a brief description of all subroutines. Instructions on the preparation of input data, definitions of key FORTRAN variables, sample input and output, and a complete listing of the code are presented.
Model predictive controller design for boost DC-DC converter using T-S fuzzy cost function
NASA Astrophysics Data System (ADS)
Seo, Sang-Wha; Kim, Yong; Choi, Han Ho
2017-11-01
This paper proposes a Takagi-Sugeno (T-S) fuzzy method to select cost function weights of finite control set model predictive DC-DC converter control algorithms. The proposed method updates the cost function weights at every sample time by using T-S type fuzzy rules derived from the common optimal control engineering knowledge that a state or input variable with an excessively large magnitude can be penalised by increasing the weight corresponding to the variable. The best control input is determined via the online optimisation of the T-S fuzzy cost function for all the possible control input sequences. This paper implements the proposed model predictive control algorithm in real time on a Texas Instruments TMS320F28335 floating-point Digital Signal Processor (DSP). Some experimental results are given to illuminate the practicality and effectiveness of the proposed control system under several operating conditions. The results verify that our method can yield not only good transient and steady-state responses (fast recovery time, small overshoot, zero steady-state error, etc.) but also insensitiveness to abrupt load or input voltage parameter variations.
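A compact sketch of a finite-control-set MPC loop for an ideal boost converter, with a crude two-rule T-S-style weight update in which the weight on the voltage error grows with the membership of "error is large"; circuit values and fuzzy parameters are illustrative, not the paper's design.

```python
import numpy as np

# Finite-control-set MPC: evaluate both switch states on a one-step model
# prediction and apply the one minimizing a fuzzy-weighted quadratic cost.
Vin, L, C, R = 12.0, 100e-6, 470e-6, 20.0   # illustrative circuit values
dt, v_ref = 1e-5, 24.0
iL, v = 0.0, 12.0

def step(iL, v, u):
    # u = 1: switch on (inductor charges); u = 0: diode conducts.
    iL_n = iL + dt / L * (Vin - (1 - u) * v)
    v_n = v + dt / C * ((1 - u) * iL - v / R)
    return iL_n, v_n

for k in range(20_000):
    e = abs(v - v_ref) / v_ref
    mu_large = min(1.0, e / 0.2)          # membership of "voltage error large"
    w_v = 1.0 + 9.0 * mu_large            # T-S blend between small/large rules
    i_ref = v_ref**2 / (R * Vin)          # steady-state inductor current target
    costs = []
    for u in (0, 1):
        iL_p, v_p = step(iL, v, u)
        costs.append(w_v * (v_p - v_ref) ** 2 + (iL_p - i_ref) ** 2)
    u_best = int(np.argmin(costs))
    iL, v = step(iL, v, u_best)

print(f"final output voltage: {v:.2f} V")
```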
NASA Technical Reports Server (NTRS)
Singhal, Surendra N.
2003-01-01
The SAE G-11 RMSL (Reliability, Maintainability, Supportability, and Logistics) Division activities include identification and fulfillment of joint industry, government, and academia needs for development and implementation of RMSL technologies. Four projects in the Probabilistic Methods area and two in the area of RMSL have been identified. These are: (1) Evaluation of Probabilistic Technology - progress has been made toward the selection of probabilistic application cases. Future effort will focus on assessment of multiple probabilistic software packages in solving selected engineering problems using probabilistic methods. Relevance to Industry & Government - Case studies of typical problems encountering uncertainties, results of solutions to these problems run by different codes, and recommendations on which code is applicable for what problems; (2) Probabilistic Input Preparation - progress has been made in identifying problem cases such as those with no data, little data and sufficient data. Future effort will focus on developing guidelines for preparing input for probabilistic analysis, especially with no or little data. Relevance to Industry & Government - Too often, we get bogged down thinking we need a lot of data before we can quantify uncertainties. Not true: there are ways to do credible probabilistic analysis with little data; (3) Probabilistic Reliability - a probabilistic reliability literature search has been completed, along with what differentiates it from statistical reliability. Work on computation of reliability based on quantification of uncertainties in primitive variables is in progress. Relevance to Industry & Government - Correct reliability computations both at the component and system level are needed so one can design an item based on its expected usage and life span; (4) Real World Applications of Probabilistic Methods (PM) - A draft of volume 1 comprising aerospace applications has been released. Volume 2, a compilation of real world applications of probabilistic methods with essential information demonstrating application type and time/cost savings by the use of probabilistic methods for generic applications, is in progress. Relevance to Industry & Government - Too often, we say, 'The proof is in the pudding.' With help from many contributors, we hope to produce such a document. The problem is that few people are coming forward, due to the proprietary nature of the material, so we are asking contributors to document only minimum information, including the problem description, what method was used, whether it resulted in any savings, and how much; (5) Software Reliability - software reliability concepts, programs, implementation, guidelines, and standards are being documented. Relevance to Industry & Government - Software reliability is a complex issue that must be understood and addressed in all facets of business in industry, government, and other institutions. We address issues, concepts, ways to implement solutions, and guidelines for maximizing software reliability; (6) Maintainability Standards - maintainability/serviceability industry standards/guidelines and industry best practices and methodologies used in performing maintainability/serviceability tasks are being documented. Relevance to Industry & Government - Any industry or government process, project, and/or tool must be maintained and serviced to realize the life and performance it was designed for. We address issues and develop guidelines for optimum performance and life.
Data driven model generation based on computational intelligence
NASA Astrophysics Data System (ADS)
Gemmar, Peter; Gronz, Oliver; Faust, Christophe; Casper, Markus
2010-05-01
The simulation of discharges at a local gauge or the modeling of large scale river catchments are effectively involved in estimation and decision tasks of hydrological research and practical applications like flood prediction or water resource management. However, modeling such processes using analytical or conceptual approaches is made difficult by both the complexity of process relations and the heterogeneity of processes. It has been shown many times that unknown or assumed process relations can in principle be described by computational methods, and that system models can automatically be derived from observed behavior or measured process data. This study describes the development of hydrological process models using computational methods including Fuzzy logic and artificial neural networks (ANN) in a comprehensive and automated manner.

Methods: We consider a closed concept for data driven development of hydrological models based on measured (experimental) data. The concept is centered on a Fuzzy system using rules of Takagi-Sugeno-Kang type, which formulate the input-output relation in a generic structure like R_i: IF q(t) is low AND ... THEN q(t+Δt) = a_i0 + a_i1·q(t) + a_i2·p(t−Δt_i1) + a_i3·p(t+Δt_i2) + ... The rule's premise part (IF) describes process states involving available process information, e.g. the actual outlet q(t) is low, where low is one of several Fuzzy sets defined over the variable q(t). The rule's conclusion (THEN) estimates the expected outlet q(t+Δt) by a linear function over selected system variables, e.g. the actual outlet q(t) and previous and/or forecasted precipitation p(t±Δt_ik). In the case of river catchment modeling we use head gauges and tributary and upriver gauges in the conclusion part as well. In addition, we consider temperature and temporal (season) information in the premise part. By creating a set of rules R = {R_i | i = 1, ..., N}, the space of process states can be covered as concisely as necessary. Model adaptation is achieved by finding an optimal set A = (a_ij) of conclusion parameters with respect to a defined rating function and experimental data. To find A, we use for example a linear equation solver and an RMSE rating function. In practical process models, the number of Fuzzy sets and the corresponding number of rules is fairly low. Nevertheless, creating the optimal model requires some experience. Therefore, we improved this development step by methods for the automatic generation of Fuzzy sets, rules, and conclusions. Basically, the model achievement depends to a great extent on the selection of the conclusion variables. The aim is that variables having the most influence on the system reaction are considered and superfluous ones are neglected. At first, we use Kohonen maps, a specialized ANN, to identify relevant input variables from the large set of available system variables. A greedy algorithm selects a comprehensive set of dominant and uncorrelated variables. Next, the premise variables are analyzed with clustering methods (e.g. Fuzzy C-means) and Fuzzy sets are then derived from cluster centers and outlines. The rule base is automatically constructed by permutation of the Fuzzy sets of the premise variables. Finally, the conclusion parameters are calculated and the total coverage of the input space is iteratively tested with experimental data; rarely firing rules are combined, and coarse coverage of sensitive process states results in refined Fuzzy sets and rules.

Results: The described methods were implemented and integrated in a development system for process models.
A series of models has already been built, e.g. for rainfall-runoff modeling or for flood prediction (up to 72 hours) in river catchments. The models required significantly less development effort and showed improved simulation results compared to conventional models. The models can be used operationally, and simulation takes only a few minutes on a standard PC, e.g. for a gauge forecast (up to 72 hours) for the whole Mosel (Germany) river catchment.
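A minimal Takagi-Sugeno-Kang inference step of the kind written in the rule R_i above can be sketched as follows; the membership shapes and conclusion coefficients a_ij are invented for illustration, whereas in the described system they come from clustering and least-squares fitting to gauge data.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with peak at b."""
    return max(0.0, min((x - a) / (b - a), (c - x) / (c - b)))

# Two rules: q is low / q is high, each with a linear conclusion a0+a1*q+a2*p.
rules = [
    (lambda q: tri(q, -50.0, 0.0, 50.0),  np.array([2.0, 0.90, 0.5])),  # low
    (lambda q: tri(q, 0.0, 50.0, 150.0),  np.array([5.0, 0.95, 1.2])),  # high
]

def predict(q, p):
    """Weighted average of the rule conclusions (TSK defuzzification)."""
    w = np.array([mu(q) for mu, _ in rules])
    out = np.array([a @ np.array([1.0, q, p]) for _, a in rules])
    return (w @ out) / w.sum()

print("q(t+dt) estimate:", round(predict(q=30.0, p=4.0), 2))
```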
Theofilatos, Athanasios; Yannis, George
2017-04-03
Understanding the various factors that affect accident risk is of particular concern to decision makers and researchers. The incorporation of real-time traffic and weather data constitutes a fruitful approach when analyzing accident risk. However, the vast majority of relevant research has no specific focus on vulnerable road users such as powered 2-wheelers (PTWs). Moreover, studies using data from urban roads and arterials are scarce. This study aims to add to the current knowledge by considering real-time traffic and weather data from 2 major urban arterials in the city of Athens, Greece, in order to estimate the effect of traffic, weather, and other characteristics on PTW accident involvement. Because of the high number of candidate variables, a random forest model was applied to reveal the most important variables. Then, the potentially significant variables were used as input to a Bayesian logistic regression model in order to reveal the magnitude of their effect on PTW accident involvement. The results of the analysis suggest that PTWs are more likely to be involved in multivehicle accidents than in single-vehicle accidents. It was also indicated that increased traffic flow and variations in speed have a significant influence on PTW accident involvement. On the other hand, weather characteristics were found to have no effect. The findings of this study can contribute to the understanding of accident mechanisms of PTWs and reduce PTW accident risk in urban arterials.
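The two-stage screen described can be sketched as below on synthetic data: random-forest importance shortlists candidate variables, then a logistic model is fit on the shortlist (a plain penalized logistic stands in for the paper's Bayesian version).

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

# Synthetic accident-involvement data; variable names and effect sizes are
# invented for illustration.
rng = np.random.default_rng(10)
n = 3000
flow = rng.normal(1200, 300, n)          # traffic flow (veh/h)
speed_var = rng.normal(8, 3, n)          # std of speeds (km/h)
rain = rng.binomial(1, 0.1, n)           # rain indicator
noise = rng.normal(size=(n, 4))          # irrelevant candidates
X = np.column_stack([flow, speed_var, rain, noise])
logit = 0.002 * (flow - 1200) + 0.15 * (speed_var - 8) - 0.5
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))

rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
top = np.argsort(rf.feature_importances_)[::-1][:3]
print("shortlisted columns:", top)

model = LogisticRegression(max_iter=1000).fit(X[:, top], y)
print("coefficients:", model.coef_.round(3))
```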
Learning from adaptive neural dynamic surface control of strict-feedback systems.
Wang, Min; Wang, Cong
2015-06-01
Learning plays an essential role in autonomous control systems. However, how to achieve learning in a nonstationary environment for nonlinear systems is a challenging problem. In this paper, we present a learning method for a class of nth-order strict-feedback systems by adaptive dynamic surface control (DSC) technology, which achieves the human-like ability of learning by doing and doing with learned knowledge. To achieve the learning, this paper first proposes a stable adaptive DSC with auxiliary first-order filters, which ensures the boundedness of all the signals in the closed-loop system and the convergence of tracking errors in a finite time. With the help of DSC, the derivative of the filter output variable is used as the neural network (NN) input instead of traditional intermediate variables. As a result, the proposed adaptive DSC method greatly reduces the dimension of NN inputs, especially for high-order systems. After the stable DSC design, we decompose the stable closed-loop system into a series of linear time-varying perturbed subsystems. Using a recursive design, the recurrent property of the NN input variables is easily verified since the complexity is overcome using DSC. Subsequently, the partial persistent excitation condition of the radial basis function NN is satisfied. By combining a state transformation, accurate approximations of the closed-loop system dynamics are recursively achieved in a local region along recurrent orbits. Then, a learning control method using the learned knowledge is proposed to achieve closed-loop stability and improved control performance. Simulation studies are performed to demonstrate that the proposed scheme can not only reuse the learned knowledge to achieve better control performance, with a faster tracking convergence rate and a smaller tracking error, but also greatly alleviate the computational burden by reducing the number and complexity of NN input variables.
NASA Astrophysics Data System (ADS)
Dumedah, Gift; Walker, Jeffrey P.
2017-03-01
The sources of uncertainty in land surface models are numerous and varied, from inaccuracies in forcing data to uncertainties in model structure and parameterizations. The majority of these uncertainties are strongly tied to the overall makeup of the model, but the input forcing data set is independent, with its accuracy usually defined by the monitoring or observation system. The impact of input forcing data on model estimation accuracy has been collectively acknowledged to be significant, yet its quantification and the level of uncertainty that is acceptable in the context of the land surface model to obtain a competitive estimation remain mostly unknown. A better understanding is needed of how models respond to input forcing data and what changes in these forcing variables can be accommodated without deteriorating the optimal estimation of the model. As a result, this study determines the level of forcing data uncertainty that is acceptable in the Joint UK Land Environment Simulator (JULES) to competitively estimate soil moisture in the Yanco area in south eastern Australia. The study employs hydro-genomic mapping to examine the temporal evolution of model decision variables from an archive of values obtained from soil moisture data assimilation. The data assimilation (DA) was undertaken using the advanced Evolutionary Data Assimilation. Our findings show that the input forcing data have a significant impact on model output: 35% in root mean square error (RMSE) for the 5 cm depth of soil moisture and 15% in RMSE for the 15 cm depth. This specific quantification is crucial to illustrate the significance of the spread in input forcing data. The acceptable uncertainty determined based on the dominant pathway has been validated and shown to be reliable for all forcing variables, so as to provide optimal soil moisture. These findings are crucial for DA in order to account for uncertainties that are meaningful from the model standpoint. Moreover, our results point to a proper treatment of input forcing data in general land surface and hydrological model estimation.
SAMBA: Sparse Approximation of Moment-Based Arbitrary Polynomial Chaos
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ahlfeld, R., E-mail: r.ahlfeld14@imperial.ac.uk; Belkouchi, B.; Montomoli, F.
2016-09-01
A new arbitrary Polynomial Chaos (aPC) method is presented for moderately high-dimensional problems characterised by limited input data availability. The proposed methodology improves the algorithm of aPC and extends the method, which was previously only introduced as a tensor product expansion, to moderately high-dimensional stochastic problems. The fundamental idea of aPC is to use the statistical moments of the input random variables to develop the polynomial chaos expansion. This approach provides the possibility to propagate continuous or discrete probability density functions and also histograms (data sets) as long as their moments exist, are finite and the determinant of the moment matrix is strictly positive. For cases with limited data availability, this approach avoids bias and fitting errors caused by wrong assumptions. In this work, an alternative way to calculate the aPC is suggested, which provides the optimal polynomials, Gaussian quadrature collocation points and weights from the moments using only a handful of matrix operations on the Hankel matrix of moments. It can therefore be implemented without requiring prior knowledge about statistical data analysis or a detailed understanding of the mathematics of polynomial chaos expansions. The extension to more input variables suggested in this work is an anisotropic and adaptive version of Smolyak's algorithm that is solely based on the moments of the input probability distributions. It is referred to as SAMBA (PC), which is short for Sparse Approximation of Moment-Based Arbitrary Polynomial Chaos. It is illustrated that for moderately high-dimensional problems (up to 20 different input variables or histograms) SAMBA can significantly simplify the calculation of sparse Gaussian quadrature rules. SAMBA's efficiency for multivariate functions with regard to data availability is further demonstrated by analysing higher order convergence and accuracy for a set of nonlinear test functions with 2, 5 and 10 different input distributions or histograms.
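The moment-to-quadrature step described in this abstract can be sketched in a few lines. The following is a minimal, illustrative Python version of the classical Hankel/Cholesky route from raw moments to a Gaussian rule (a Golub-Welsch-style construction, not necessarily the authors' exact implementation), for a single input variable:

import numpy as np

def gauss_from_moments(mu):
    """Gauss quadrature nodes and weights from raw moments mu_0..mu_{2n}.

    Builds the Hankel matrix of moments, Cholesky-factorises it to obtain
    the three-term recurrence coefficients, then solves the symmetric
    tridiagonal (Jacobi) eigenproblem.
    """
    n = (len(mu) - 1) // 2
    H = np.array([[mu[i + j] for j in range(n + 1)] for i in range(n + 1)])
    R = np.linalg.cholesky(H).T                  # upper-triangular factor
    alpha = [R[k, k + 1] / R[k, k]
             - (R[k - 1, k] / R[k - 1, k - 1] if k else 0.0)
             for k in range(n)]
    off = [R[k + 1, k + 1] / R[k, k] for k in range(n - 1)]
    J = np.diag(alpha) + np.diag(off, 1) + np.diag(off, -1)
    nodes, vecs = np.linalg.eigh(J)
    weights = mu[0] * vecs[0, :] ** 2
    return nodes, weights

# Sanity check: moments of Uniform(0, 1) are mu_k = 1/(k+1); a 3-point rule
# must integrate polynomials up to degree 5 exactly.
mu = [1.0 / (k + 1) for k in range(7)]
x, w = gauss_from_moments(mu)
print(np.sum(w * x ** 4), 1 / 5)   # both ~0.2

For a multivariate input, SAMBA combines such one-dimensional rules through a moment-based anisotropic Smolyak sparse grid; that combination step is omitted here.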
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cho, Daniel D; Wernicke, A Gabriella; Nori, Dattatreyudu
Purpose/Objective(s): The aim of this study is to build an estimator of toxicity using an artificial neural network (ANN) for head and neck cancer patients. Materials/Methods: An ANN can combine variables into a predictive model during training and consider all possible correlations of the variables. We constructed an ANN based on the data from 73 patients with advanced head and neck cancer treated with external beam radiotherapy and/or chemotherapy at our institution. For the toxicity estimator we defined input data including age, sex, site, stage, pathology, chemotherapy status, technique of external beam radiation therapy (EBRT), length of treatment, dose of EBRT, post-operation status, length of follow-up, and the status of local recurrences and distant metastasis. These data were digitized based on their significance and fed to the ANN as input nodes. We used 20 hidden nodes (for the 13 input nodes) to account for the correlations among input nodes. For training the ANN, we divided the data into three subsets: a training set, a validation set and a test set. Finally, we built the toxicity estimator from the ANN output. Results: We used 13 input variables, including the status of local recurrences and distant metastasis, and 20 hidden nodes for the correlations. We used 59 patients for the training set, 7 patients for the validation set and 7 patients for the test set, and fed the inputs to the Matlab neural network fitting tool. We trained the network to within 15% error on the outcome. The resulting toxicity estimator achieved 74% accuracy. Conclusion: We showed in principle that an ANN can be a very useful tool for predicting RT outcomes for high-risk head and neck patients. Currently we are improving the results using cross validation.
Bittner, J.W.; Biscardi, R.W.
1991-03-19
An electronic measurement circuit is disclosed for high speed comparison of the relative amplitudes of a predetermined number of electrical input signals independent of variations in the magnitude of the sum of the signals. The circuit includes a high speed electronic switch that is operably connected to receive on its respective input terminals one of said electrical input signals and to have its common terminal serve as an input for a variable-gain amplifier-detector circuit that is operably connected to feed its output to a common terminal of a second high speed electronic switch. The respective terminals of the second high speed electronic switch are operably connected to a plurality of integrating sample and hold circuits, which in turn have their outputs connected to a summing logic circuit that is operable to develop first, second and third output voltages, the first output voltage being proportional to a predetermined ratio of sums and differences between the compared input signals, the second output voltage being proportional to a second summed ratio of predetermined sums and differences between said input signals, and the third output voltage being proportional to the sum of signals to the summing logic circuit. A servo system is operably connected to receive said third output signal and compare it with a reference voltage to develop a slowly varying feedback voltage that controls the variable-gain amplifier in said common amplifier-detector circuit, in order to make said first and second output signals independent of variations in the magnitude of the sum of said input signals. 2 figures.
An exact algebraic solution of the infimum in H-infinity optimization with output feedback
NASA Technical Reports Server (NTRS)
Chen, Ben M.; Saberi, Ali; Ly, Uy-Loi
1991-01-01
This paper presents a simple and noniterative procedure for the computation of the exact value of the infimum in standard H-infinity optimal control with output feedback. The problem formulation is general and does not place any restrictions on the direct feedthrough terms between the control input and the controlled output variables, or between the disturbance input and the measurement output variables. The method is applicable to systems that satisfy: (1) the transfer function from the control input to the controlled output is right-invertible and has no invariant zeros on the jω axis; and (2) the transfer function from the disturbance to the measurement output is left-invertible and has no invariant zeros on the jω axis. A set of necessary and sufficient conditions for the solvability of the H-infinity almost disturbance decoupling problem via measurement feedback with internal stability is also given.
Scenario planning for water resource management in semi arid zone
NASA Astrophysics Data System (ADS)
Gupta, Rajiv; Kumar, Gaurav
2018-06-01
Scenario planning for water resource management in a semi-arid zone is performed using a systems input-output approach from time domain analysis. This approach derives the future weights of the input variables of the hydrological system from their precedent weights. The input variables considered here are precipitation, evaporation, population and crop irrigation. Ingles & De Souza's method and the Thornthwaite model have been used to estimate runoff and evaporation, respectively. The difference between precipitation inflow and the sum of runoff and evaporation has been approximated as groundwater recharge. Population and crop irrigation determine the total water demand. The extent to which groundwater recharge compensates the total water demand has been analyzed, and further compensation has been evaluated by proposing efficient methods of water conservation. The best water conservation measure to be adopted is suggested based on a cost-benefit analysis. A case study of nine villages in the Chirawa region of district Jhunjhunu, Rajasthan (India) validates the model.
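The water balance underlying this abstract reduces to simple bookkeeping per village and per year. A minimal sketch with made-up illustrative numbers (the runoff and evaporation terms would come from the Ingles & De Souza and Thornthwaite estimates, respectively):

# Annual water balance for one village (illustrative numbers; all terms must
# share one unit, e.g. mm over the catchment or million cubic metres).
precipitation = 450.0      # inflow from rainfall records (assumed value)
runoff        = 90.0       # Ingles & De Souza estimate (assumed value)
evaporation   = 280.0      # Thornthwaite estimate (assumed value)

# Groundwater recharge approximated as the residual of the balance
recharge = precipitation - (runoff + evaporation)

# Total demand driven by population and crop irrigation (assumed values)
domestic_demand   = 60.0
irrigation_demand = 110.0
total_demand = domestic_demand + irrigation_demand

deficit = total_demand - recharge
print(f"recharge = {recharge:.0f}, demand = {total_demand:.0f}, "
      f"deficit to cover by conservation = {max(deficit, 0):.0f}")

Any remaining deficit is the quantity against which the candidate conservation measures would be compared in the cost-benefit step.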
Automatic insulation resistance testing apparatus
Wyant, Francis J.; Nowlen, Steven P.; Luker, Spencer M.
2005-06-14
An apparatus and method for automatic measurement of insulation resistances of a multi-conductor cable. In one embodiment of the invention, the apparatus comprises a power supply source, an input measuring means, an output measuring means, a plurality of input relay controlled contacts, a plurality of output relay controlled contacts, a relay controller and a computer. In another embodiment of the invention the apparatus comprises a power supply source, an input measuring means, an output measuring means, an input switching unit, an output switching unit and a control unit/data logger. Embodiments of the apparatus of the invention may also incorporate cable fire testing means. The apparatus and methods of the present invention use either voltage or current for input and output measured variables.
A stochastic model of input effectiveness during irregular gamma rhythms.
Dumont, Grégory; Northoff, Georg; Longtin, André
2016-02-01
Gamma-band synchronization has been linked to attention and communication between brain regions, yet the underlying dynamical mechanisms are still unclear. How does the timing and amplitude of inputs to cells that generate an endogenously noisy gamma rhythm affect the network activity and rhythm? How does such "communication through coherence" (CTC) survive in the face of rhythm and input variability? We present a stochastic modelling approach to this question that yields a very fast computation of the effectiveness of inputs to cells involved in gamma rhythms. Our work is partly motivated by recent optogenetic experiments (Cardin et al. Nature, 459(7247), 663-667 2009) that tested the gamma phase-dependence of network responses by first stabilizing the rhythm with periodic light pulses to the interneurons (I). Our computationally efficient model E-I network of stochastic two-state neurons exhibits finite-size fluctuations. Using the Hilbert transform and Kuramoto index, we study how the stochastic phase of its gamma rhythm is entrained by external pulses. We then compute how this rhythmic inhibition controls the effectiveness of external input onto pyramidal (E) cells, and how variability shapes the window of firing opportunity. For transferring the time variations of an external input to the E cells, we find a tradeoff between the phase selectivity and depth of rate modulation. We also show that the CTC is sensitive to the jitter in the arrival times of spikes to the E cells, and to the degree of I-cell entrainment. We further find that CTC can occur even if the underlying deterministic system does not oscillate; quasicycle-type rhythms induced by the finite-size noise retain the basic CTC properties. Finally a resonance analysis confirms the relative importance of the I cell pacing for rhythm generation. Analysis of whole network behaviour, including computations of synchrony, phase and shifts in excitatory-inhibitory balance, can be further sped up by orders of magnitude using two coupled stochastic differential equations, one for each population. Our work thus yields a fast tool to numerically and analytically investigate CTC in a noisy context. It shows that CTC can be quite vulnerable to rhythm and input variability, which both decrease phase preference.
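The entrainment measurement described here (Hilbert phase plus Kuramoto index) is easy to reproduce on any population signal. A minimal sketch, with a noisy 40 Hz sinusoid standing in for the stochastic E-I network output (an assumption made for brevity; the paper uses a two-state neuron network):

import numpy as np
from scipy.signal import hilbert, butter, filtfilt

rng = np.random.default_rng(0)
fs, T = 1000.0, 10.0                       # sampling rate (Hz), duration (s)
t = np.arange(0, T, 1.0 / fs)

# Surrogate "population activity": noisy gamma rhythm
lfp = np.sin(2 * np.pi * 40 * t) + 0.8 * rng.standard_normal(t.size)

# Band-pass around gamma, then extract the instantaneous phase
b, a = butter(2, [30 / (fs / 2), 50 / (fs / 2)], btype="band")
phase = np.angle(hilbert(filtfilt(b, a, lfp)))

# Periodic pulse times at 40 Hz, as in the optogenetic stabilization protocol
pulse_idx = (np.arange(0, T, 1 / 40.0) * fs).astype(int)

# Kuramoto index: 1 = perfect phase locking of the rhythm to the pulses
R = np.abs(np.mean(np.exp(1j * phase[pulse_idx])))
print(f"entrainment index R = {R:.2f}")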
TTLEM: Open access tool for building numerically accurate landscape evolution models in MATLAB
NASA Astrophysics Data System (ADS)
Campforts, Benjamin; Schwanghart, Wolfgang; Govers, Gerard
2017-04-01
Despite a growing interest in LEMs, accuracy assessment of the numerical methods they are based on has received little attention. Here, we present TTLEM, an open-access landscape evolution package designed to develop and test your own scenarios and hypotheses. TTLEM uses a higher-order flux-limiting finite-volume method to simulate river incision and tectonic displacement. We show that this scheme significantly influences the evolution of simulated landscapes and the spatial and temporal variability of erosion rates. Moreover, it allows the simulation of lateral tectonic displacement on a fixed grid. Through the use of a simple GUI, the software produces visual output of the evolving landscape during the model run. In this contribution, we illustrate numerical landscape evolution through a set of movies spanning different spatial and temporal scales. We focus on the erosional domain and use both spatially constant and variable input values for uplift, lateral tectonic shortening, erodibility and precipitation. Moreover, we illustrate the relevance of a stochastic approach for realistic hillslope response modelling. TTLEM is a fully open-source software package, written in MATLAB and based on the TopoToolbox platform (topotoolbox.wordpress.com). Installation instructions can be found on this website and in the dedicated GitHub repository.
New Microwave-Based Missions Applications for Rainfed Crops Characterization
NASA Astrophysics Data System (ADS)
Sánchez, N.; Lopez-Sanchez, J. M.; Arias-Pérez, B.; Valcarce-Diñeiro, R.; Martínez-Fernández, J.; Calvo-Heras, J. M.; Camps, A.; González-Zamora, A.; Vicente-Guijalba, F.
2016-06-01
A multi-temporal/multi-sensor field experiment was conducted within the Soil Moisture Measurement Stations Network of the University of Salamanca (REMEDHUS) in Spain, in order to retrieve useful information from satellite Synthetic Aperture Radar (SAR) and upcoming Global Navigation Satellite Systems Reflectometry (GNSS-R) missions. The objective of the experiment was first to identify which radar observables are most sensitive to the development of crops, and then to define which crop parameters most affect the radar signal. A wide set of radar variables (backscattering coefficients and polarimetric indicators) acquired by Radarsat-2 were analyzed and then exploited to determine variables characterizing the crops. Field measurements were taken fortnightly at seven cereal plots between February and July 2015. This work also tried to optimize the crop characterization through Landsat-8 estimations, testing and validating parameters such as the leaf area index, the fraction of vegetation cover and the vegetation water content, among others. Some of these parameters showed significant and relevant correlation with the Landsat-derived Normalized Difference Vegetation Index (R>0.60). Regarding the radar observables, the parameters best characterized were biomass and height, which may be explored for inversion using SAR data as an input. Moreover, the differences in the correlations found for the different crop types under study suggest a feasible way to classify crops.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Feijoo, M.; Mestre, F.; Castagnaro, A.
This study evaluates the potential effect of climate change on dry bean production in Argentina, combining climate models, a crop productivity model and a yield response model estimating the effect of climate variables on crop yields. The study was carried out in the northern agricultural regions of Jujuy, Salta, Santiago del Estero and Tucuman, which include the largest areas of Argentina where dry beans are grown as a high-input crop. The paper combines the output from a crop model with different techniques of analysis. The scenarios used in this study were generated from the output of two General Circulation Models (GCMs): the Goddard Institute for Space Studies model (GISS) and the Canadian Climate Change Model (CCCM). The study also includes a preliminary evaluation of the potential changes in monetary returns taking into account the possible variability of yields and prices, using mean-Gini stochastic dominance (MGSD). The results suggest that large climate change may have a negative impact on the Argentine agricultural sector, due to the high relevance of this product in the export sector. The negative effect differs depending on the dry bean varieties and on the General Circulation Model scenarios considered for doubled levels of atmospheric carbon dioxide.
Distributed Monitoring of the R^2 Statistic for Linear Regression
NASA Technical Reports Server (NTRS)
Bhaduri, Kanishka; Das, Kamalika; Giannella, Chris R.
2011-01-01
The problem of monitoring a multivariate linear regression model is relevant in studying the evolving relationship between a set of input variables (features) and one or more dependent target variables. This problem becomes challenging for large-scale data in a distributed computing environment when only a subset of instances is available at individual nodes and the local data changes frequently. Data centralization and periodic model recomputation can add high overhead to tasks like anomaly detection in such dynamic settings. Therefore, the goal is to develop techniques for monitoring and updating the model over the union of all nodes' data in a communication-efficient fashion. Correctness guarantees on such techniques are also often highly desirable, especially in safety-critical application scenarios. In this paper we develop DReMo, a distributed algorithm with very low resource overhead, for monitoring the quality of a regression model in terms of its coefficient of determination (the R^2 statistic). When the nodes collectively determine that R^2 has dropped below a fixed threshold, the linear regression model is recomputed via a network-wide convergecast and the updated model is broadcast back to all nodes. We show empirically, using both synthetic and real data, that our proposed method is highly communication-efficient and scalable, and also provide theoretical guarantees on correctness.
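The quantity DReMo tracks is the ordinary coefficient of determination; the nontrivial part of the paper is the distributed thresholding protocol. A centralized stand-in sketch, where the least-squares refit plays the role of the network-wide convergecast and broadcast:

import numpy as np

def r_squared(X, y, beta):
    """Coefficient of determination of a fitted linear model."""
    resid = y - X @ beta
    return 1.0 - resid @ resid / np.sum((y - y.mean()) ** 2)

def monitor(X, y, beta, threshold=0.8):
    """Recompute the model only when R^2 drops below the threshold."""
    if r_squared(X, y, beta) < threshold:
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)   # "network-wide" refit
    return beta

# Toy drift scenario (illustrative data, no intercept for brevity)
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=500)
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
y_drift = X @ np.array([1.0, -2.0, 2.0]) + 0.1 * rng.normal(size=500)
beta = monitor(X, y_drift, beta)          # low R^2 triggers the refit
print(r_squared(X, y_drift, beta))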
Karmakar, Chandan; Udhayakumar, Radhagayathri K; Li, Peng; Venkatesh, Svetha; Palaniswami, Marimuthu
2017-01-01
Distribution entropy (DistEn) is a recently developed measure of complexity that is used to analyse heart rate variability (HRV) data. Its calculation requires two input parameters: the embedding dimension m, and the number of bins M, which replaces the tolerance parameter r used by the existing approximation entropy (ApEn) and sample entropy (SampEn) measures. The performance of DistEn can also be affected by the data length N. In our previous studies, we have analyzed the stability and performance of DistEn with respect to one parameter (m or M) or a combination of two parameters (N and M). However, the impact of varying all three input parameters on DistEn has not yet been studied. Since DistEn is predominantly aimed at analysing short-length heart rate variability (HRV) signals, it is important to comprehensively study the stability, consistency and performance of the measure using multiple case studies. In this study, we examined the impact of changing the input parameters on DistEn for synthetic and physiological signals. We also compared the variations of DistEn and its performance in distinguishing physiological (Elderly from Young) and pathological (Healthy from Arrhythmia) conditions with ApEn and SampEn. The results showed that DistEn values are minimally affected by variations of the input parameters compared to ApEn and SampEn. DistEn also showed the most consistent and the best performance in differentiating physiological and pathological conditions across input parameter settings among the reported complexity measures. In conclusion, DistEn is found to be the best measure for analysing short-length HRV time series.
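For readers unfamiliar with the measure, DistEn can be sketched compactly: embed the series in dimension m, histogram all pairwise Chebyshev distances into M bins, and take the normalised Shannon entropy of that histogram. A minimal Python version (illustrative, not the authors' reference code):

import numpy as np

def dist_en(x, m=2, M=500):
    """Distribution entropy (DistEn) of a 1-D series (sketch)."""
    x = np.asarray(x, dtype=float)
    emb = np.lib.stride_tricks.sliding_window_view(x, m)   # (N-m+1) vectors
    n = emb.shape[0]
    # Chebyshev distances between all distinct pairs of embedded vectors
    d = np.max(np.abs(emb[:, None, :] - emb[None, :, :]), axis=-1)
    d = d[np.triu_indices(n, k=1)]
    p, _ = np.histogram(d, bins=M)        # empirical distance distribution
    p = p / p.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p)) / np.log2(M)   # normalised entropy

print(dist_en(np.random.default_rng(1).standard_normal(300)))

The pairwise distance matrix makes memory grow quadratically with N, which is acceptable for the short HRV records the measure targets.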
Mu, Zhijian; Huang, Aiying; Ni, Jiupai; Xie, Deti
2014-01-01
Organic soils are an important source of N2O, but global estimates of these fluxes remain uncertain because measurements are sparse. We tested the hypothesis that N2O fluxes can be predicted from estimates of mineral nitrogen input, calculated from readily-available measurements of CO2 flux and soil C/N ratio. From studies of organic soils throughout the world, we compiled a data set of annual CO2 and N2O fluxes which were measured concurrently. The input of soil mineral nitrogen in these studies was estimated from applied fertilizer nitrogen and organic nitrogen mineralization. The latter was calculated by dividing the rate of soil heterotrophic respiration by soil C/N ratio. This index of mineral nitrogen input explained up to 69% of the overall variability of N2O fluxes, whereas CO2 flux or soil C/N ratio alone explained only 49% and 36% of the variability, respectively. Including water table level in the model, along with mineral nitrogen input, further improved the model with the explanatory proportion of variability in N2O flux increasing to 75%. Unlike grassland or cropland soils, forest soils were evidently nitrogen-limited, so water table level had no significant effect on N2O flux. Our proposed approach, which uses the product of soil-derived CO2 flux and the inverse of soil C/N ratio as a proxy for nitrogen mineralization, shows promise for estimating regional or global N2O fluxes from organic soils, although some further enhancements may be warranted. PMID:24798347
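The proxy at the heart of this abstract is a one-line computation: mineral nitrogen input is estimated as heterotrophic respiration divided by the soil C/N ratio, plus any fertilizer nitrogen. A minimal sketch with made-up numbers showing the proxy and the simple regression against N2O flux:

import numpy as np

# Illustrative annual fluxes (values are assumptions, not the compiled data)
co2_flux = np.array([420., 610., 300., 550., 480.])   # heterotrophic CO2-C
cn_ratio = np.array([18.,  14.,  25.,  15.,  20.])    # soil C/N
fert_n   = np.array([0.,   50.,  0.,   30.,  10.])    # applied fertilizer N
n2o_flux = np.array([1.1,  4.0,  0.4,  3.1,  1.6])    # measured N2O

# Proxy for mineral N input: mineralization (Rh / C/N) plus fertilizer N
n_input = co2_flux / cn_ratio + fert_n

# Simple linear fit of N2O flux against the proxy
slope, intercept = np.polyfit(n_input, n2o_flux, 1)
pred = slope * n_input + intercept
r2 = 1 - np.sum((n2o_flux - pred) ** 2) / np.sum((n2o_flux - n2o_flux.mean()) ** 2)
print(f"R^2 of the mineral-N proxy: {r2:.2f}")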
Antanasijević, Davor Z; Pocajt, Viktor V; Povrenović, Dragan S; Ristić, Mirjana Đ; Perić-Grujić, Aleksandra A
2013-01-15
This paper describes the development of an artificial neural network (ANN) model for the forecasting of annual PM10 emissions at the national level, using widely available sustainability and economical/industrial parameters as inputs. The inputs for the model were selected and optimized using a genetic algorithm and the ANN was trained using the following variables: gross domestic product, gross inland energy consumption, incineration of wood, motorization rate, production of paper and paperboard, sawn wood production, production of refined copper, production of aluminum, production of pig iron and production of crude steel. The wide availability of the input parameters used in this model can overcome a lack of data and basic environmental indicators in many countries, which can prevent or seriously impede PM emission forecasting. The model was trained and validated with the data for 26 EU countries for the period from 1999 to 2006. PM10 emission data, collected through the Convention on Long-range Transboundary Air Pollution - CLRTAP and the EMEP Programme or as emission estimations by the Regional Air Pollution Information and Simulation (RAINS) model, were obtained from Eurostat. The ANN model has shown very good performance and demonstrated that the forecast of PM10 emissions up to two years ahead can be made successfully and accurately. The mean absolute error for the two-year PM10 emission prediction was only 10%, which is more than three times better than the predictions obtained from the conventional multi-linear regression and principal component regression models that were trained and tested using the same datasets and input variables.
Floré, Vincent; Willems, Rik
2012-12-01
In this review, we focus on temporal variability of cardiac repolarization. This phenomenon has been related to a higher risk for ventricular arrhythmia and is therefore interesting as a marker of sudden cardiac death risk. We review two non-invasive clinical techniques quantifying repolarization variability: T-wave alternans (TWA) and beat-to-beat variability of repolarization (BVR). We discuss their pathophysiological link with ventricular arrhythmia and the current clinical relevance of these techniques.
Opportunities to Participate in Pesticide Reevaluation
Submitting relevant information through public dockets during comment periods, and other means of input and involvement, is vital to effective reevaluation of pesticides. Find out more about opportunities for public involvement in registration review.
Machine Learning Techniques for Global Sensitivity Analysis in Climate Models
NASA Astrophysics Data System (ADS)
Safta, C.; Sargsyan, K.; Ricciuto, D. M.
2017-12-01
Climate model studies are challenged not only by the compute-intensive nature of these models but also by the high dimensionality of the input parameter space. In our previous work with the land model components (Sargsyan et al., 2014) we identified subsets of 10 to 20 parameters relevant for each QoI via Bayesian compressive sensing and variance-based decomposition. Nevertheless, the algorithms were challenged by the nonlinear input-output dependencies for some of the relevant QoIs. In this work we will explore a combination of techniques to extract relevant parameters for each QoI and subsequently construct surrogate models with quantified uncertainty, necessary for future developments, e.g. model calibration and prediction studies. In the first step, we will compare the skill of machine-learning models (e.g. neural networks, support vector machines) to identify the optimal number of classes in selected QoIs and construct robust multi-class classifiers that will partition the parameter space into regions with smooth input-output dependencies. These classifiers will be coupled with techniques aimed at building sparse and/or low-rank surrogate models tailored to each class. Specifically, we will explore and compare sparse learning techniques with low-rank tensor decompositions. These models will be used to identify parameters that are important for each QoI. Surrogate accuracy requirements are higher for subsequent model calibration studies, and we will ascertain the performance of this workflow for multi-site ALM simulation ensembles.
NASA Astrophysics Data System (ADS)
Herrera, J. L.; Rosón, G.; Varela, R. A.; Piedracoba, S.
2008-07-01
The key features of the western Galician shelf hydrography and dynamics are analyzed on a solid statistical and experimental basis. The results allowed us to gather together information dispersed across previous oceanographic works on the region. Empirical orthogonal function (EOF) analysis and a canonical correlation analysis were applied to a high-resolution dataset collected from 47 surveys done at a weekly frequency from May 2001 to May 2002. The main results of these analyses are summarized below. Salinity, temperature and the meridional component of the residual current are correlated with the relevant local forcings (the meridional coastal wind component and the continental run-off) and with a remote forcing (the meridional temperature gradient at latitude 37°N). About 80% of the salinity and temperature total variability over the shelf, and 37% of the residual meridional current total variability, are explained by two EOFs for each variable. Up to 22% of the temperature total variability and 14% of the residual meridional current total variability are associated with the set-up of cross-shore gradients of the thermohaline properties caused by the wind-induced Ekman transport. Up to 11% and 10%, respectively, are related to the variability of the meridional temperature gradient at the Western Iberian Winter Front. About 30% of the temperature total variability can be explained by the development and erosion of the seasonal thermocline and by the seasonal variability of the thermohaline properties of the central waters. This thermocline presented unexpectedly low salinity values due to the trapping during spring and summer of the high continental inputs from the River Miño recorded in 2001. The low-salinity plumes can be traced on the Galician shelf during almost the entire annual cycle; they tend to extend throughout the entire water column under downwelling conditions and concentrate in the surface layer when upwelling-favourable winds blow. Our evidence points to the meridional temperature gradient acting as an important controlling factor of the central waters' thermohaline properties and of the development and decay of the Iberian Poleward Current.
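EOF analysis of the kind used here amounts to an SVD of the time-by-space anomaly matrix. A generic sketch (illustrative; the gridding and preprocessing of the actual survey dataset are not reproduced):

import numpy as np

def eof_analysis(field):
    """EOF decomposition of a (time x space) data matrix via SVD.

    Returns spatial patterns (EOFs), principal-component time series,
    and the fraction of total variance explained by each mode.
    """
    anom = field - field.mean(axis=0)            # remove the time mean
    u, s, vt = np.linalg.svd(anom, full_matrices=False)
    variance_fraction = s ** 2 / np.sum(s ** 2)
    pcs = u * s                                   # amplitude of each mode in time
    return vt, pcs, variance_fraction

# Example shape: 47 weekly surveys x 200 grid points of, say, salinity
rng = np.random.default_rng(1)
field = rng.standard_normal((47, 200))
eofs, pcs, frac = eof_analysis(field)
print(f"variance explained by the first two EOFs: {100 * frac[:2].sum():.1f}%")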
Hoppie, Lyle O.
1982-01-12
Disclosed are several embodiments of a regenerative braking device for an automotive vehicle. The device includes a plurality of rubber rollers (24, 26) mounted for rotation between an input shaft (14) connectable to the vehicle drivetrain and an output shaft (16) which is drivingly connected to the input shaft by a variable ratio transmission (20). When the transmission ratio is such that the input shaft rotates faster than the output shaft, the rubber rollers are torsionally stressed to accumulate energy, thereby slowing the vehicle. When the transmission ratio is such that the output shaft rotates faster than the input shaft, the rubber rollers are torsionally relaxed to deliver accumulated energy, thereby accelerating or driving the vehicle.
Post, R.F.
1958-11-11
An electronic computer circuit is described for producing an output voltage proportional to the product or quotient of the voltages of a pair of input signals. In essence, the disclosed invention provides a computer having two channels adapted to receive separate input signals, each having amplifiers with like fixed amplification factors and like negative feedback amplifiers. One of the channels receives a constant signal for comparison purposes, whereby a difference signal is produced to control the amplification factors of the variable feedback amplifiers. The output of the other channel is thereby proportional to the product or quotient of the input signals, depending upon the relation of the input to the fixed signal in the first-mentioned channel.
EMPIRE: A code for nuclear astrophysics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Palumbo, A.
The nuclear reaction code EMPIRE is presented as a useful tool for nuclear astrophysics. EMPIRE combines a variety of reaction models with a comprehensive library of input parameters, providing a diversity of options for the user. With the exclusion of direct-semidirect capture, all reaction mechanisms relevant to the energy range of interest in nuclear astrophysics are implemented in the code. Comparisons to experimental data show consistent agreement for all relevant channels.
NASA Astrophysics Data System (ADS)
Elarab, Manal; Ticlavilca, Andres M.; Torres-Rua, Alfonso F.; Maslova, Inga; McKee, Mac
2015-12-01
Precision agriculture requires high-resolution information to enable greater precision in the management of inputs to production. Actionable information about crop and field status must be acquired at high spatial resolution and at a temporal frequency appropriate for timely responses. In this study, high spatial resolution imagery was obtained through the use of a small unmanned aerial system called AggieAir™. Simultaneously with the AggieAir flights, intensive ground sampling for plant chlorophyll was conducted at precisely determined locations. This study reports the application of a relevance vector machine coupled with cross validation and backward elimination to a dataset composed of reflectance from high-resolution multi-spectral imagery (VIS-NIR), thermal infrared imagery, and vegetative indices, in conjunction with in situ SPAD measurements from which chlorophyll concentrations were derived, to estimate chlorophyll concentration from remotely sensed data at 15-cm resolution. The results indicate that a relevance vector machine with a thin plate spline kernel type and kernel width of 5.4, having LAI, NDVI, thermal and red bands as the selected set of inputs, can be used to spatially estimate chlorophyll concentration with a root mean squared error of 5.31 μg cm^-2, an efficiency of 0.76, and 9 relevance vectors.
Duijn, Chantal C M A; Welink, Lisanne S; Bok, Harold G J; Ten Cate, Olle T J
2018-06-01
Clinical training programs increasingly use entrustable professional activities (EPAs) as the focus of assessment. However, questions remain about which information should ground decisions to trust learners. This qualitative study aimed to identify decision variables in the workplace that clinical teachers find relevant in elaborating entrustment decision processes. The findings can substantiate entrustment decision-making in the clinical workplace. Focus groups were conducted with medical and veterinary clinical teachers, using the structured consensus method of the Nominal Group Technique to generate decision variables. A ranking was made based on a relevance score assigned by the clinical teachers to the different decision variables. Field notes, audio recordings and flip chart lists were analyzed and subsequently translated and, as a form of axial coding, merged into one list, combining decision variables that were similar in meaning. Lists of 11 and 17 decision variables were acknowledged as relevant by the medical and veterinary teacher groups, respectively. The focus groups yielded 21 unique decision variables that were considered relevant for informing readiness to perform a clinical task at a designated level of supervision. The decision variables consisted of skills, generic qualities, characteristics, previous performance or other information. We were able to group the decision variables into five categories: ability, humility, integrity, reliability and adequate exposure. To entrust a learner to perform a task at a specific level of supervision, a supervisor needs information to support such a judgement. This trust cannot be granted based on a single case at a single moment of assessment, but requires different variables and multiple sources of information. This study provides an overview of decision variables, giving evidence to justify the multifactorial process of making an entrustment decision.
NASA Astrophysics Data System (ADS)
Sheridan, Gary; nyman, petter; Duff, Tom; Baillie, Craig; Bovill, William; Lane, Patrick; Tolhurst, Kevin
2015-04-01
The prediction of fuel moisture content is important for estimating the rate of spread of wildfires, the ignition probability of firebrands, and for the efficient scheduling of prescribed fire. The moisture content of fine surface fuels varies spatially at large scales (10s to 100s of km) due to variation in meteorological variables (e.g. temperature, relative humidity, precipitation). At smaller scales (100s of metres) in steep topography, spatial variability is attributed to topographic influences that include differences in radiation due to aspect and slope, differences in precipitation, temperature and relative humidity due to elevation, and differences in soil moisture due to hillslope drainage position. Variable forest structure and canopy shading add further to the spatial variability in surface fuel moisture. In this study we aim to combine daily 5 km resolution gridded weather data with 20 m resolution DEM and vegetation structure data to predict the spatial variability of fine surface fuels in steep topography. Microclimate stations were established in south-east Australia to monitor surface fine fuel moisture continuously (every 15 minutes) using newly developed instrumented litter packs, in addition to temperature and relative humidity measurements inside the litter pack, and measurement of precipitation and energy inputs above and below the forest canopy. Microclimate stations were established across a gradient of aspect (5 stations), drainage position (7 stations), elevation (15 stations), and canopy cover conditions (6 stations). The data from this extensive network of microclimate stations across a broad spectrum of topographic conditions are being analysed to enable the downscaling of gridded weather data to spatial scales that are relevant to the connectivity of wildfire fuels and to the scheduling and outcome of prescribed fires. The initial results from the first year of this study are presented here.
ERIC Educational Resources Information Center
Ramírez-Esparza, Nairán; García-Sierra, Adrián; Kuhl, Patricia K.
2017-01-01
This study tested the impact of child-directed language input on language development in Spanish-English bilingual infants (N = 25, 11- and 14-month-olds from the Seattle metropolitan area), across languages and independently for each language, controlling for socioeconomic status. Language input was characterized by social interaction variables,…
TRANDESNF: A computer program for transonic airfoil design and analysis in nonuniform flow
NASA Technical Reports Server (NTRS)
Chang, J. F.; Lan, C. Edward
1987-01-01
The use of a transonic airfoil code for analysis, inverse design, and direct optimization of an airfoil immersed in a propfan slipstream is described. The theoretical method, program capabilities, input format, output variables, and program execution are summarized. Input data for sample test cases and the corresponding output are given.
MURI: Impact of Oceanographic Variability on Acoustic Communications
2011-09-01
multiplexing (OFDM), multiple-input/multiple-output (MIMO) transmissions, and multi-user single-input/multiple-output (SIMO) communications. Lastly... MIMO-OFDM communications: Receiver design for Doppler-distorted underwater acoustic channels," Proc. Asilomar Conf. on Signals, Systems, and... MIMO) will be of particular interest. Validating experimental data will be obtained during the ONR acoustic communications experiment in summer 2008
Simple Sensitivity Analysis for Orion Guidance Navigation and Control
NASA Technical Reports Server (NTRS)
Pressburger, Tom; Hoelscher, Brian; Martin, Rodney; Sricharan, Kumar
2013-01-01
The performance of Orion flight software, especially its GNC software, is being analyzed by running Monte Carlo simulations of Orion spacecraft flights. The simulated performance is analyzed for conformance with flight requirements, expressed as performance constraints. Flight requirements include guidance (e.g., touchdown distance from target) and control (e.g., control saturation) as well as performance (e.g., heat load constraints). The Monte Carlo simulations disperse hundreds of simulation input variables, for everything from mass properties to date of launch. We describe in this paper a sensitivity analysis tool ("Critical Factors Tool" or CFT) developed to find the input variables or pairs of variables which by themselves significantly influence satisfaction of requirements or significantly affect key performance metrics (e.g., touchdown distance from target). Knowing these factors can inform robustness analysis, indicate where engineering resources are most needed, and could even affect operations. The contributions of this paper include the introduction of novel sensitivity measures, such as estimating success probability, and a technique for determining whether pairs of factors are interacting dependently or independently. The tool identified input variables such as moments, mass, thrust dispersions, and date of launch as significant factors for the success of various requirements. Examples are shown in this paper, as well as a summary and physics discussion of the EFT-1 driving factors that the tool found.
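The paper's own sensitivity measures are not reproduced here, but a simple screening statistic in the same spirit, comparing requirement pass rates when a dispersed input falls above versus below its median, can be sketched as follows (variable names and the pass threshold are illustrative assumptions):

import numpy as np

def median_split_sensitivity(inputs, passed):
    """Screen dispersed Monte Carlo inputs for influence on requirement success.

    For each input, compare the empirical success probability when the input
    is above vs below its median; a large gap flags a candidate driving factor.
    """
    sens = {}
    for name, x in inputs.items():
        hi = passed[x > np.median(x)].mean()
        lo = passed[x <= np.median(x)].mean()
        sens[name] = abs(hi - lo)
    return dict(sorted(sens.items(), key=lambda kv: -kv[1]))

# Toy Monte Carlo: touchdown miss distance driven mainly by wind, not mass
rng = np.random.default_rng(2)
n = 5000
inputs = {"wind": rng.normal(0, 1, n), "mass": rng.normal(0, 1, n)}
miss = 2.0 * inputs["wind"] + 0.2 * inputs["mass"] + rng.normal(0, 0.5, n)
passed = np.abs(miss) < 2.0            # requirement: land within tolerance
print(median_split_sensitivity(inputs, passed))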
NASA Astrophysics Data System (ADS)
Carter, Jeffrey R.; Simon, Wayne E.
1990-08-01
Neural networks are trained using Recursive Error Minimization (REM) equations to perform statistical classification. Using REM equations with continuous input variables reduces the required number of training experiences by factors of one to two orders of magnitude over standard back propagation. Replacing the continuous input variables with discrete binary representations reduces the number of connections by a factor proportional to the number of variables, reducing the required number of experiences by another order of magnitude. Undesirable effects of using recurrent experience to train neural networks for statistical classification problems are demonstrated, and nonrecurrent experience is used to avoid these undesirable effects. 1. THE 1-4I PROBLEM. The statistical classification problem which we address is that of assigning points in d-dimensional space to one of two classes. The first class has a covariance matrix of I (the identity matrix); the covariance matrix of the second class is 4I. For this reason the problem is known as the 1-4I problem. Both classes have equal probability of occurrence, and samples from both classes may appear anywhere throughout the d-dimensional space. Most samples near the origin of the coordinate system will be from the first class, while most samples away from the origin will be from the second class. Since the two classes completely overlap, it is impossible to have a classifier with zero error. The minimum possible error is known as the Bayes error.
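Since both class densities are known, the Bayes error of the 1-4I problem can be estimated directly: the likelihood-ratio rule reduces to a threshold on the squared radius, with the threshold (8/3)·d·ln 2 obtained by equating the two Gaussian log densities. A small Monte Carlo sketch of this baseline (against which a trained network's error can be judged):

import numpy as np

rng = np.random.default_rng(3)
d, n = 4, 200_000                       # dimension, samples per class

# Equal-prior classes N(0, I) and N(0, 4I).  The likelihood-ratio test:
#   choose class 2  iff  ||x||^2 > (8/3) * d * ln 2
thresh = (8.0 / 3.0) * d * np.log(2.0)

r2_class1 = np.sum(rng.normal(0, 1, (n, d)) ** 2, axis=1)
r2_class2 = np.sum(rng.normal(0, 2, (n, d)) ** 2, axis=1)   # std 2 -> cov 4I

bayes_err = 0.5 * np.mean(r2_class1 > thresh) + 0.5 * np.mean(r2_class2 <= thresh)
print(f"estimated Bayes error in d={d}: {bayes_err:.3f}")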
Fuzzy parametric uncertainty analysis of linear dynamical systems: A surrogate modeling approach
NASA Astrophysics Data System (ADS)
Chowdhury, R.; Adhikari, S.
2012-10-01
Uncertainty propagation in engineering systems poses significant computational challenges. This paper explores the possibility of using a correlated function expansion based metamodelling approach when uncertain system parameters are modeled using fuzzy variables. In particular, the application of High-Dimensional Model Representation (HDMR) is proposed for fuzzy finite element analysis of dynamical systems. The HDMR expansion is a set of quantitative model assessment and analysis tools for capturing high-dimensional input-output system behavior based on a hierarchy of functions of increasing dimensions. The input variables may be either finite-dimensional (i.e., a vector of parameters chosen from the Euclidean space R^M) or infinite-dimensional, as in the function space C^M[0,1]. The computational effort to determine the expansion functions using the alpha-cut method scales polynomially with the number of variables rather than exponentially. This logic is based on the fundamental assumption underlying the HDMR representation that only low-order correlations among the input variables are likely to have significant impacts upon the outputs for most high-dimensional complex systems. The proposed method is integrated with a commercial finite element software package. Modal analysis of a simplified aircraft wing with fuzzy parameters has been used to illustrate the generality of the proposed approach. In the numerical examples, triangular membership functions have been used and the results have been validated against direct Monte Carlo simulations.
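The scaling argument can be made concrete with a first-order cut-HDMR sketch: the number of model evaluations grows as 1 plus the sum of the one-dimensional grid sizes, i.e. linearly in the number of variables. The snippet below is a generic illustration and omits the paper's fuzzy alpha-cut and finite element layers:

import numpy as np

def hdmr_first_order(f, x_ref, grids):
    """First-order cut-HDMR surrogate of f anchored at reference point x_ref.

    f(x) ~ f0 + sum_i [ f(x_ref with component i swept) - f0 ],
    with each one-dimensional component tabulated on a grid and interpolated.
    """
    f0 = f(x_ref)
    comps = []
    for i, g in enumerate(grids):
        vals = []
        for xi in g:
            x = x_ref.copy()
            x[i] = xi
            vals.append(f(x) - f0)
        comps.append((g, np.array(vals)))
    def surrogate(x):
        return f0 + sum(np.interp(x[i], g, v) for i, (g, v) in enumerate(comps))
    return surrogate

# Example: 3-variable test function, 9-point grid per variable
f = lambda x: x[0] ** 2 + np.sin(x[1]) + 0.1 * x[0] * x[2]
x_ref = np.zeros(3)
s = hdmr_first_order(f, x_ref, [np.linspace(-1, 1, 9)] * 3)
pt = np.array([0.5, 0.3, -0.2])
print(s(pt), f(pt))   # close; the residual is the neglected interaction term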
Use of Gene Expression Programming in regionalization of flow duration curve
NASA Astrophysics Data System (ADS)
Hashmi, Muhammad Z.; Shamseldin, Asaad Y.
2014-06-01
In this paper, a recently introduced artificial intelligence technique known as Gene Expression Programming (GEP) has been employed to perform symbolic regression for developing a parametric scheme of flow duration curve (FDC) regionalization, relating selected FDC characteristics to catchment characteristics. Stream flow records of selected catchments located in the Auckland Region of New Zealand were used. The FDCs of the selected catchments were normalised by dividing the ordinates by their median value. Input for the symbolic regression analysis using GEP was (a) selected characteristics of the normalised FDCs; and (b) 26 catchment characteristics related to climate, morphology, soil properties and land cover, obtained from the observed data and GIS analysis. Our study showed that application of this artificial intelligence technique expedites the selection of the most relevant independent variables out of a large set, because they are selected automatically through the GEP process. Values of the FDC characteristics obtained from the developed relationships have high correlations with the observed values.
NASA Technical Reports Server (NTRS)
Drury, H. A.; Van Essen, D. C.; Anderson, C. H.; Lee, C. W.; Coogan, T. A.; Lewis, J. W.
1996-01-01
We present a new method for generating two-dimensional maps of the cerebral cortex. Our computerized, two-stage flattening method takes as its input any well-defined representation of a surface within the three-dimensional cortex. The first stage rapidly converts this surface to a topologically correct two-dimensional map, without regard for the amount of distortion introduced. The second stage reduces distortions using a multiresolution strategy that makes gross shape changes on a coarsely sampled map and further shape refinements on progressively finer resolution maps. We demonstrate the utility of this approach by creating flat maps of the entire cerebral cortex in the macaque monkey and by displaying various types of experimental data on such maps. We also introduce a surface-based coordinate system that has advantages over conventional stereotaxic coordinates and is relevant to studies of cortical organization in humans as well as non-human primates. Together, these methods provide an improved basis for quantitative studies of individual variability in cortical organization.
Haberland, M; Kim, S
2015-02-02
When millions of years of evolution suggest a particular design solution, we may be tempted to abandon traditional design methods and copy the biological example. However, biological solutions do not often translate directly into the engineering domain, and even when they do, copying eliminates the opportunity to improve. A better approach is to extract design principles relevant to the task of interest, incorporate them in engineering designs, and vet these candidates against others. This paper presents the first general framework for determining whether biologically inspired relationships between design input variables and output objectives and constraints are applicable to a variety of engineering systems. Using optimization and statistics to generalize the results beyond a particular system, the framework overcomes shortcomings observed of ad hoc methods, particularly those used in the challenging study of legged locomotion. The utility of the framework is demonstrated in a case study of the relative running efficiency of rotary-kneed and telescoping-legged robots.
Miconi, Thomas
2017-01-01
Neural activity during cognitive tasks exhibits complex dynamics that flexibly encode task-relevant variables. Chaotic recurrent networks, which spontaneously generate rich dynamics, have been proposed as a model of cortical computation during cognitive tasks. However, existing methods for training these networks are either biologically implausible, and/or require a continuous, real-time error signal to guide learning. Here we show that a biologically plausible learning rule can train such recurrent networks, guided solely by delayed, phasic rewards at the end of each trial. Networks endowed with this learning rule can successfully learn nontrivial tasks requiring flexible (context-dependent) associations, memory maintenance, nonlinear mixed selectivities, and coordination among multiple outputs. The resulting networks replicate complex dynamics previously observed in animal cortex, such as dynamic encoding of task features and selective integration of sensory inputs. We conclude that recurrent neural networks offer a plausible model of cortical dynamics during both learning and performance of flexible behavior. DOI: http://dx.doi.org/10.7554/eLife.20899.001 PMID:28230528
Eskofier, Bjoern M; Kraus, Martin; Worobets, Jay T; Stefanyshyn, Darren J; Nigg, Benno M
2012-01-01
The identification of differences between groups is often important in biomechanics. This paper presents group classification tasks using kinetic and kinematic data from a prospective running injury study. Groups composed of gender, of shod/barefoot running, and of runners who developed patellofemoral pain syndrome (PFPS) during the study versus asymptomatic runners were classified. The features computed from the biomechanical data were deliberately chosen to be generic. Therefore, they were suited for different biomechanical measurements and classification tasks without adaptation to the input signals. Feature ranking was applied to reveal the relevance of each feature to the classification task. Data from 80 runners were analysed for gender and shod/barefoot classification, while 12 runners were investigated in the injury classification task. Gender groups could be differentiated with 84.7%, shod/barefoot running with 98.3%, and PFPS with 100% classification rate. For the latter group, a single variable could be identified that alone allowed discrimination.
Feeney, Daniel F; Meyer, François G; Noone, Nicholas; Enoka, Roger M
2017-10-01
Motor neurons appear to be activated with a common input signal that modulates the discharge activity of all neurons in the motor nucleus. It has proven difficult for neurophysiologists to quantify the variability in a common input signal, but characterization of such a signal may improve our understanding of how the activation signal varies across motor tasks. Contemporary methods of quantifying the common input to motor neurons rely on compiling discrete action potentials into continuous time series, assuming the motor pool acts as a linear filter, and requiring signals of sufficient duration for frequency analysis. We introduce a state-space model in which the discharge activity of motor neurons is modeled as inhomogeneous Poisson processes and propose a method to quantify an abstract latent trajectory that represents the common input received by motor neurons. The approach also approximates the variation in synaptic noise in the common input signal. The model is validated with four data sets: a simulation of 120 motor units, a pair of integrate-and-fire neurons with a Renshaw cell providing inhibitory feedback, the discharge activity of 10 integrate-and-fire neurons, and the discharge times of concurrently active motor units during an isometric voluntary contraction. The simulations revealed that a latent state-space model is able to quantify the trajectory and variability of the common input signal across all four conditions. When compared with the cumulative spike train method of characterizing common input, the state-space approach was more sensitive to the details of the common input current and was less influenced by the duration of the signal. The state-space approach appears to be capable of detecting rather modest changes in common input signals across conditions. NEW & NOTEWORTHY We propose a state-space model that explicitly delineates a common input signal sent to motor neurons and the physiological noise inherent in synaptic signal transmission. This is the first application of a deterministic state-space model to represent the discharge characteristics of motor units during voluntary contractions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kurnik, Charles W; Violette, Daniel M.
Addressing evaluation issues that have been raised in the context of energy efficiency programs, this chapter focuses on methods used to address the persistence of energy savings, which is an important input to the benefit/cost analysis of energy efficiency programs and portfolios. In addition to discussing 'persistence' (which refers to the stream of benefits over time from an energy efficiency measure or program), this chapter provides a summary treatment of these issues: synergies across programs, rebound, dual baselines, and errors in variables (the measurement and/or accuracy of input variables to the evaluation).
NASA Technical Reports Server (NTRS)
Newsom, J. R.; Mukhopadhyay, V.
1983-01-01
A method for designing robust feedback controllers for multiloop systems is presented. Robustness is characterized in terms of the minimum singular value of the system return difference matrix at the plant input. Analytical gradients of the singular values with respect to design variables in the controller are derived. A cumulative measure of the singular values and their gradients with respect to the design variables is used with a numerical optimization technique to increase the system's robustness. Both unconstrained and constrained optimization techniques are evaluated. Numerical results are presented for a two-input/two-output drone flight control system.
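The robustness measure described (minimum singular value of the return difference matrix at the plant input) can be evaluated numerically as sketched below; the state-space matrices and static gain are hypothetical placeholders, and the paper's gradient-based optimization is not reproduced.

```python
import numpy as np

def min_return_difference_sv(A, B, C, K, omegas):
    """Minimum singular value of I + K*G(jw), the return difference at the
    plant input, for G(s) = C (sI - A)^-1 B with static feedback gain K."""
    n = A.shape[0]
    sv_min = []
    for w in omegas:
        G = C @ np.linalg.solve(1j * w * np.eye(n) - A, B)
        RD = np.eye(K.shape[0]) + K @ G
        sv_min.append(np.linalg.svd(RD, compute_uv=False).min())
    return np.array(sv_min)

# two-input/two-output example with arbitrary (hypothetical) matrices
A = np.array([[0.0, 1.0], [-2.0, -0.5]])
B = np.eye(2); C = np.eye(2); K = 0.5 * np.eye(2)
omegas = np.logspace(-2, 2, 200)
print("robustness measure:", min_return_difference_sv(A, B, C, K, omegas).min())
```

An optimizer would then adjust the controller design variables to push this minimum singular value upward across the frequency range.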
Guo, Changning; Doub, William H; Kauffman, John F
2010-08-01
Monte Carlo simulations were applied to investigate the propagation of uncertainty in both input variables and response measurements on model prediction for nasal spray product performance design of experiment (DOE) models in the first part of this study, with an initial assumption that the models perfectly represent the relationship between input variables and the measured responses. In this article, we discard that assumption and extend the Monte Carlo simulation study to examine the influence of both input variable variation and product performance measurement variation on the uncertainty in DOE model coefficients. The Monte Carlo simulations presented in this article illustrate the importance of careful error propagation during product performance modeling. Our results show that the error estimates based on Monte Carlo simulation result in smaller model coefficient standard deviations than those from regression methods. This suggests that the estimated standard deviations from regression may overestimate the uncertainties in the model coefficients. Monte Carlo simulations provide a simple software solution to understand the propagation of uncertainty in complex DOE models so that design space can be specified with statistically meaningful confidence levels. (c) 2010 Wiley-Liss, Inc. and the American Pharmacists Association
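A minimal sketch of this kind of Monte Carlo error propagation: perturb both the input settings and the measured responses, refit the model on each replicate, and read coefficient standard deviations off the replicate distribution. The model form and noise levels are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(3)
beta_true = np.array([2.0, -1.0, 0.5])            # hypothetical DOE model: y = b0 + b1*x1 + b2*x2

X = np.column_stack([np.ones(20), rng.uniform(-1, 1, (20, 2))])
y = X @ beta_true

sigma_x, sigma_y = 0.02, 0.05                     # input and response measurement noise
coefs = []
for _ in range(5000):                             # Monte Carlo replicates
    Xn = X.copy()
    Xn[:, 1:] += rng.normal(0, sigma_x, (20, 2))  # perturb input settings
    yn = y + rng.normal(0, sigma_y, 20)           # perturb measured responses
    coefs.append(np.linalg.lstsq(Xn, yn, rcond=None)[0])

print("MC coefficient std devs:", np.array(coefs).std(axis=0))
```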
NASA Astrophysics Data System (ADS)
Behera, Kishore Kumar; Pal, Snehanshu
2018-03-01
This paper describes a new approach towards optimum utilisation of ferrochrome added during stainless steel making in an AOD converter. The objective of optimisation is to enhance the end-blow chromium content of the steel and reduce the ferrochrome addition during refining. By developing a thermodynamics-based mathematical model, a study has been conducted to compute the optimum trade-off between ferrochrome addition and end-blow chromium content of stainless steel using a predator-prey genetic algorithm, trained on a dataset of 100 records covering different input and output variables such as oxygen, argon and nitrogen blowing rates, duration of blowing, initial bath temperature, chromium and carbon content, and weight of ferrochrome added during refining. Optimisation is performed within constraints imposed on the input parameters, whose values fall within certain ranges. The analysis of the Pareto fronts yields a set of feasible optimal solutions between the two conflicting objectives that provides an effective guideline for better ferrochrome utilisation. It is found that, beyond a certain critical range, further addition of ferrochrome does not affect the chromium percentage of the steel. Single-variable response analysis is performed to study the variation and interaction of all individual input parameters on the output variables.
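As a sketch of the Pareto-front analysis described (not the predator-prey genetic algorithm itself), the following filters non-dominated trade-offs between ferrochrome addition and end-blow chromium on synthetic data with a saturating recovery curve, mirroring the reported diminishing returns.

```python
import numpy as np

def pareto_front(points):
    """Return the non-dominated points when minimising both objectives,
    here (ferrochrome added, -end-blow chromium content)."""
    keep = []
    for i, p in enumerate(points):
        dominated = any(np.all(q <= p) and np.any(q < p)
                        for j, q in enumerate(points) if j != i)
        if not dominated:
            keep.append(p)
    return np.array(keep)

rng = np.random.default_rng(4)
fecr = rng.uniform(0, 10, 100)                                 # ferrochrome addition (arbitrary units)
cr = 18 * (1 - np.exp(-0.5 * fecr)) + rng.normal(0, 0.3, 100)  # saturating chromium recovery
objs = np.column_stack([fecr, -cr])                            # minimise addition, maximise chromium
front = pareto_front(objs)
print(f"{len(front)} non-dominated trade-off points")
```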
Variability of perceptual multistability: from brain state to individual trait
Kleinschmidt, Andreas; Sterzer, Philipp; Rees, Geraint
2012-01-01
Few phenomena are as suitable as perceptual multistability to demonstrate that the brain constructively interprets sensory input. Several studies have outlined the neural circuitry involved in generating perceptual inference but only more recently has the individual variability of this inferential process been appreciated. Studies of the interaction of evoked and ongoing neural activity show that inference itself is not merely a stimulus-triggered process but is related to the context of the current brain state into which the processing of external stimulation is embedded. As brain states fluctuate, so does perception of a given sensory input. In multistability, perceptual fluctuation rates are consistent for a given individual but vary considerably between individuals. There has been some evidence for a genetic basis for these individual differences and recent morphometric studies of parietal lobe regions have identified neuroanatomical substrates for individual variability in spontaneous switching behaviour. Moreover, disrupting the function of these latter regions by transcranial magnetic stimulation yields systematic interference effects on switching behaviour, further arguing for a causal role of these regions in perceptual inference. Together, these studies have advanced our understanding of the biological mechanisms by which the brain constructs the contents of consciousness from sensory input. PMID:22371620
Sequential Modular Position and Momentum Measurements of a Trapped Ion Mechanical Oscillator
NASA Astrophysics Data System (ADS)
Flühmann, C.; Negnevitsky, V.; Marinelli, M.; Home, J. P.
2018-04-01
The noncommutativity of position and momentum observables is a hallmark feature of quantum physics. However, this incompatibility does not extend to observables that are periodic in these base variables. Such modular-variable observables have been suggested as tools for fault-tolerant quantum computing and enhanced quantum sensing. Here, we implement sequential measurements of modular variables in the oscillatory motion of a single trapped ion, using state-dependent displacements and a heralded nondestructive readout. We investigate the commutative nature of modular variable observables by demonstrating no-signaling in time between successive measurements, using a variety of input states. Employing a different periodicity, we observe signaling in time. This also requires wave-packet overlap, resulting in quantum interference that we enhance using squeezed input states. The sequential measurements allow us to extract two-time correlators for modular variables, which we use to violate a Leggett-Garg inequality. Signaling in time and Leggett-Garg inequalities serve as efficient quantum witnesses, which we probe here with a mechanical oscillator, a system that has a natural crossover from the quantum to the classical regime.
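The commutation structure underlying these experiments is the standard displacement-operator algebra (taking ħ = 1); the paper's own notation may differ in details:

```latex
% With \hbar = 1, displacement operators in the base variables obey
%   e^{ia\hat p}\, e^{ib\hat x} = e^{iab}\, e^{ib\hat x}\, e^{ia\hat p},
% so the commutator is
\left[\, e^{ia\hat p},\, e^{ib\hat x} \,\right]
  = \left( e^{iab} - 1 \right) e^{ib\hat x}\, e^{ia\hat p},
% which vanishes precisely when ab \in 2\pi\mathbb{Z}: modular observables with
% commensurate periodicities are compatible (no-signaling in time), while other
% choices of periodicity produce signaling in time.
```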
Moreira, Fabiana Tavares; Prantoni, Alessandro Lívio; Martini, Bruno; de Abreu, Michelle Alves; Stoiev, Sérgio Biato; Turra, Alexander
2016-01-15
Microplastics such as pellets have been reported for many years on sandy beaches around the globe. Nevertheless, high variability is observed in their estimates and distribution patterns across the beach environment are still to be unravelled. Here, we investigate the small-scale temporal and spatial variability in the abundance of pellets in the intertidal zone of a sandy beach and evaluate factors that can increase the variability in data sets. The abundance of pellets was estimated during twelve consecutive tidal cycles, identifying the position of the high tide between cycles and sampling drift-lines across the intertidal zone. We demonstrate that beach dynamic processes such as the overlap of strandlines and artefacts of the methods can increase the small-scale variability. The results obtained are discussed in terms of the methodological considerations needed to understand the distribution of pellets in the beach environment, with special implications for studies focused on patterns of input. Copyright © 2015 Elsevier Ltd. All rights reserved.
Data analytics using canonical correlation analysis and Monte Carlo simulation
NASA Astrophysics Data System (ADS)
Rickman, Jeffrey M.; Wang, Yan; Rollett, Anthony D.; Harmer, Martin P.; Compson, Charles
2017-07-01
A canonical correlation analysis is a generic parametric model used in the statistical analysis of data involving interrelated or interdependent input and output variables. It is especially useful in data analytics as a dimensional reduction strategy that simplifies a complex, multidimensional parameter space by identifying a relatively few combinations of variables that are maximally correlated. One shortcoming of the canonical correlation analysis, however, is that it provides only a linear combination of variables that maximizes these correlations. With this in mind, we describe here a versatile, Monte-Carlo based methodology that is useful in identifying non-linear functions of the variables that lead to strong input/output correlations. We demonstrate that our approach leads to a substantial enhancement of correlations, as illustrated by two experimental applications of substantial interest to the materials science community, namely: (1) determining the interdependence of processing and microstructural variables associated with doped polycrystalline aluminas, and (2) relating microstructural descriptors to the electrical and optoelectronic properties of thin-film solar cells based on CuInSe2 absorbers. Finally, we describe how this approach facilitates experimental planning and process control.
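A minimal sketch of the approach as described: compute the leading canonical correlation with scikit-learn, then run a Monte Carlo search over simple nonlinear (power) transforms of the inputs for stronger correlations. The data and the restriction to power transforms are illustrative assumptions.

```python
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(5)
n = 300
X = rng.uniform(0.5, 2.0, (n, 3))                  # processing variables (synthetic)
Y = np.column_stack([X[:, 0] ** 2 + 0.1 * rng.normal(size=n),   # nonlinear dependence
                     np.log(X[:, 1]) + 0.1 * rng.normal(size=n)])

def leading_correlation(X, Y):
    u, v = CCA(n_components=1).fit_transform(X, Y)
    return abs(np.corrcoef(u[:, 0], v[:, 0])[0, 1])

print("linear CCA correlation:", leading_correlation(X, Y))

# Monte Carlo search over random power transforms of the inputs
best, best_p = 0.0, None
for _ in range(200):
    p = rng.uniform(0.2, 3.0, X.shape[1])
    r = leading_correlation(X ** p, Y)
    if r > best:
        best, best_p = r, p
print("best nonlinear CCA correlation:", best, "powers:", best_p)
```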
ERIC Educational Resources Information Center
Aguilar, Jessica M.; Plante, Elena; Sandoval, Michelle
2018-01-01
Purpose: Variability in the input plays an important role in language learning. The current study examined the role of object variability for new word learning by preschoolers with specific language impairment (SLI). Method: Eighteen 4- and 5-year-old children with SLI were taught 8 new words in 3 short activities over the course of 3 sessions.…
Muñoz-Tamayo, R; Puillet, L; Daniel, J B; Sauvant, D; Martin, O; Taghipoor, M; Blavy, P
2018-04-01
What is a good (useful) mathematical model in animal science? For models constructed for prediction purposes, the question of model adequacy (usefulness) has been traditionally tackled by statistical analysis applied to observed experimental data relative to model-predicted variables. However, little attention has been paid to analytic tools that exploit the mathematical properties of the model equations. For example, in the context of model calibration, before attempting a numerical estimation of the model parameters, we might want to know if we have any chance of success in estimating a unique best value of the model parameters from available measurements. This question of uniqueness is referred to as structural identifiability; a mathematical property that is defined on the sole basis of the model structure within a hypothetical ideal experiment determined by a setting of model inputs (stimuli) and observable variables (measurements). Structural identifiability analysis applied to dynamic models described by ordinary differential equations (ODEs) is a common practice in control engineering and system identification. This analysis demands mathematical technicalities that are beyond the academic background of animal science, which might explain the lack of pervasiveness of identifiability analysis in animal science modelling. To fill this gap, in this paper we address the analysis of structural identifiability from a practitioner perspective by capitalizing on the use of dedicated software tools. Our objectives are (i) to provide a comprehensive explanation of the structural identifiability notion for the community of animal science modelling, (ii) to assess the relevance of identifiability analysis in animal science modelling and (iii) to motivate the community to use identifiability analysis in the modelling practice (when the identifiability question is relevant). We focus our study on ODE models. By using illustrative examples that include published mathematical models describing lactation in cattle, we show how structural identifiability analysis can contribute to advancing mathematical modelling in animal science towards the production of useful models and, moreover, highly informative experiments via optimal experiment design. Rather than attempting to impose a systematic identifiability analysis to the modelling community during model developments, we wish to open a window towards the discovery of a powerful tool for model construction and experiment design.
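A toy demonstration of the structural identifiability question, under an assumed first-order model: two rate parameters that enter the equations only as a sum cannot be distinguished from the output, no matter how good the data.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy model: dx/dt = -(k1 + k2) * x, observed y = x.
# Only the sum k1 + k2 is structurally identifiable: distinct parameter
# pairs with the same sum produce exactly the same output trajectory.
def rhs(t, x, k1, k2):
    return -(k1 + k2) * x

t_eval = np.linspace(0, 5, 50)
sol_a = solve_ivp(rhs, (0, 5), [1.0], args=(0.3, 0.7), t_eval=t_eval)
sol_b = solve_ivp(rhs, (0, 5), [1.0], args=(0.5, 0.5), t_eval=t_eval)

print("max output difference:", np.abs(sol_a.y - sol_b.y).max())  # ~0: unidentifiable
```

Dedicated software tools of the kind the paper advocates detect such degeneracies symbolically, before any data are collected.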
Mathematical Model of Solidification During Electroslag Casting of Pilger Roll
NASA Astrophysics Data System (ADS)
Liu, Fubin; Li, Huabing; Jiang, Zhouhua; Dong, Yanwu; Chen, Xu; Geng, Xin; Zang, Ximin
A mathematical model was developed to describe the interaction of multiple physical fields in the slag bath and the solidification process in the ingot during casting of a pilger roll with variable cross-section produced by the electroslag casting (ESC) process. The commercial software ANSYS was applied to calculate the electromagnetic field, magnetically driven fluid flow, buoyancy-driven flow and heat transfer. The transport phenomena in the slag bath and the solidification characteristics of the ingot are analyzed for the variable cross-section with variable input power, under the conditions of 9Cr3NiMo steel and a 70%CaF2 - 30%Al2O3 slag system. The calculated results show that the current density distribution, velocity patterns and temperature profiles in the slag bath, and the metal pool profiles in the ingot, differ distinctly across the variable cross-sections owing to differences in input power and cooling conditions. The pool shape and the local solidification time (LST) during the pilger roll ESC process are also analyzed.
Spike-Threshold Adaptation Predicted by Membrane Potential Dynamics In Vivo
Fontaine, Bertrand; Peña, José Luis; Brette, Romain
2014-01-01
Neurons encode information in sequences of spikes, which are triggered when their membrane potential crosses a threshold. In vivo, the spiking threshold displays large variability suggesting that threshold dynamics have a profound influence on how the combined input of a neuron is encoded in the spiking. Threshold variability could be explained by adaptation to the membrane potential. However, it could also be the case that most threshold variability reflects noise and processes other than threshold adaptation. Here, we investigated threshold variation in auditory neurons responses recorded in vivo in barn owls. We found that spike threshold is quantitatively predicted by a model in which the threshold adapts, tracking the membrane potential at a short timescale. As a result, in these neurons, slow voltage fluctuations do not contribute to spiking because they are filtered by threshold adaptation. More importantly, these neurons can only respond to input spikes arriving together on a millisecond timescale. These results demonstrate that fast adaptation to the membrane potential captures spike threshold variability in vivo. PMID:24722397
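A minimal sketch of threshold adaptation of the kind described, with a hypothetical surrogate membrane potential: the threshold relaxes toward a value that tracks the recent potential, so slow depolarizations raise the threshold and are filtered out, while only fast fluctuations trigger spikes.

```python
import numpy as np

rng = np.random.default_rng(6)
dt, T = 0.1, 5000                          # time step (ms), number of steps
V = np.cumsum(rng.normal(0, 0.3, T)) * 0.5 - 60   # surrogate membrane potential (mV)

theta0, alpha, tau = -50.0, 0.5, 5.0       # resting threshold, coupling, time constant (ms)
theta = np.full(T, theta0)
spikes = []
for t in range(1, T):
    # threshold relaxes toward a value that tracks the membrane potential
    theta_inf = theta0 + alpha * max(V[t - 1] - theta0, 0.0)
    theta[t] = theta[t - 1] + dt * (theta_inf - theta[t - 1]) / tau
    if V[t] >= theta[t]:
        spikes.append(t * dt)
        V[t:] -= 15                        # crude reset of the surrogate potential

print(f"{len(spikes)} spikes; threshold range "
      f"{theta.min():.1f} to {theta.max():.1f} mV")
```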
Otero, José; Palacios, Ana; Suárez, Rosario; Junco, Luis
2014-01-01
When selecting relevant inputs in modeling problems with low quality data, the ranking of the most informative inputs is also uncertain. In this paper, this issue is addressed through a new procedure that allows different crisp feature selection algorithms to be extended to vague data. The partial knowledge about the ordinal position of each feature is modelled by means of a possibility distribution, and a ranking is then applied to sort these distributions. It will be shown that this technique makes the most use of the available information in some vague datasets. The approach is demonstrated in a real-world application. In the context of massive online computer science courses, methods are sought for automatically providing the student with a qualification through code metrics. Feature selection methods are used to find the metrics involved in the most meaningful predictions. In this study, 800 source code files, collected and revised by the authors in classroom Computer Science lectures taught between 2013 and 2014, are analyzed with the proposed technique, and the most relevant metrics for the automatic grading task are discussed. PMID:25114967
The strength of attentional biases reduces as visual short-term memory load increases
Shimi, A.
2013-01-01
Despite our visual system receiving irrelevant input that competes with task-relevant signals, we are able to pursue our perceptual goals. Attention enhances our visual processing by biasing the processing of the input that is relevant to the task at hand. The top-down signals enabling these biases are therefore important for regulating lower level sensory mechanisms. In three experiments, we examined whether we apply similar biases to successfully maintain information in visual short-term memory (VSTM). We presented participants with targets alongside distracters and we graded their perceptual similarity to vary the extent to which they competed. Experiments 1 and 2 showed that the more items held in VSTM before the onset of the distracters, the more perceptually distinct the distracters needed to be for participants to retain the target accurately. Experiment 3 extended these behavioral findings by demonstrating that the perceptual similarity between target and distracters exerted a significantly greater effect on occipital alpha amplitudes, depending on the number of items already held in VSTM. The trade-off between VSTM load and target-distracter competition suggests that VSTM and perceptual competition share a partially overlapping mechanism, namely top-down inputs into sensory areas. PMID:23576694
Trends in soil moisture and real evapotranspiration in Douro River for the period 1980-2010
NASA Astrophysics Data System (ADS)
García-Valdecasas-Ojeda, Matilde; de Franciscis, Sebastiano; Raquel Gámiz-Fortis, Sonia; Castro-Díez, Yolanda; Jesús Esteban-Parra, María
2017-04-01
This study analyzes the evolution of different hydrological variables, such as soil moisture and real evapotranspiration, for the last 30 years in the Douro Basin, the most extensive basin in the Iberian Peninsula. The different components of the real evapotranspiration, connected to the soil moisture content, can be important when analyzing the intensity of droughts and heat waves, and are particularly relevant for the study of climate change impacts. The real evapotranspiration and soil moisture data are provided by simulations obtained using the Variable Infiltration Capacity (VIC) hydrological model. This large-scale hydrologic model allows different variables in the hydrological system of a basin to be estimated. The land surface is modeled as a grid of large and uniform cells with sub-grid heterogeneity (e.g. land cover), while water influx is local, depending only on the interaction between grid cells and the local atmospheric environment. Observational data of temperature and precipitation from the Spain02 dataset are used as input variables for the VIC model. The simulations have a spatial resolution of about 9 km, and the analysis is carried out on a seasonal time-scale. Additionally, we compare these results with those obtained from a dynamical downscaling driven by ERA-Interim data using the Weather Research and Forecasting (WRF) model, with the same spatial resolution. The results obtained from Spain02 data show a decrease in soil moisture in different parts of the basin during spring and summer, whereas soil moisture appears to increase in autumn. No significant changes are found for real evapotranspiration. Keywords: real evapotranspiration, soil moisture, Douro Basin, trends, VIC, WRF. Acknowledgements: This work has been financed by the projects P11-RNM-7941 (Junta de Andalucía-Spain) and CGL2013-48539-R (MINECO-Spain, FEDER).
Relevance, Derogation and Permission
NASA Astrophysics Data System (ADS)
Stolpe, Audun
We show that a recently developed theory of positive permission based on the notion of derogation is hampered by a triviality result that indicates a problem with the underlying full-meet contraction operation. We suggest a solution that presupposes a particular normal form for codes of norms, adapted from the theory of relevance through propositional letter sharing. We then establish a correspondence between contractions on sets of norms in input/output logic (derogations), and AGM-style contractions on sets of formulae, and use it as a bridge to migrate results on propositional relevance from the latter to the former idiom. Changing the concept accordingly we show that positive permission now incorporates a relevance requirement that wards off triviality.
Prediction of Scour Depth around Bridge Piers using Adaptive Neuro-Fuzzy Inference Systems (ANFIS)
NASA Astrophysics Data System (ADS)
Valyrakis, Manousos; Zhang, Hanqing
2014-05-01
Earth's surface is continuously shaped by the action of geophysical flows. Erosion due to the flow of water has been identified as a key problem for preserving the ecological health of river systems, and also as a threat to our built environment and critical infrastructure worldwide. As an example, it has been estimated that a major reason for bridge failure is scour. Even though the flow past bridge piers has been investigated both experimentally and numerically, and the mechanisms of scouring are relatively well understood, a tool that can offer fast and reliable predictions is still lacking. Most of the existing formulas for prediction of bridge pier scour depth are empirical in nature, based on a limited range of data or on piers of specific shape. In this work, the application of a machine learning model that has been successfully employed in water engineering, namely an Adaptive Neuro-Fuzzy Inference System (ANFIS), is proposed to estimate the scour depth around bridge piers. In particular, architectures of various complexity are sequentially built in order to identify the optimal one for scour depth prediction, using appropriate training and validation subsets obtained from the USGS database (pre-processed to remove incomplete records). The model has five input variables, namely the effective pier width (b), the approach velocity (v), the approach depth (y), the mean grain diameter (D50) and the skew to flow. Simulations are conducted with data groups (bed material type, pier type and shape) and different numbers of input variables, to produce reduced-complexity and easily interpretable models. Analysis and comparison of the results indicate that the developed ANFIS model has high accuracy and outstanding generalization ability for the prediction of scour parameters. The effective pier width (as opposed to skew to flow) is amongst the most relevant input parameters for the estimation.
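For illustration, here is the zero-order Sugeno fuzzy inference step of the kind ANFIS tunes (Gaussian memberships, product firing strengths, normalised weighted output); the parameters below are untrained placeholders, whereas ANFIS would fit them to the USGS data.

```python
import numpy as np

def gauss(x, c, s):
    return np.exp(-0.5 * ((x - c) / s) ** 2)

def sugeno_predict(x, centers, sigmas, consequents):
    """Zero-order Sugeno inference: Gaussian membership per input,
    product firing strength per rule, normalised weighted sum."""
    # centers, sigmas: (n_rules, n_inputs); consequents: (n_rules,)
    mf = gauss(x[None, :], centers, sigmas)   # membership of each input in each rule
    w = mf.prod(axis=1)                       # rule firing strengths
    return (w @ consequents) / (w.sum() + 1e-12)

rng = np.random.default_rng(7)
n_rules, n_inputs = 8, 5                      # 5 inputs: b, v, y, D50, skew
centers = rng.uniform(0, 1, (n_rules, n_inputs))
sigmas = np.full((n_rules, n_inputs), 0.3)
consequents = rng.uniform(0, 2, n_rules)      # scour depth per rule (untrained)

x = np.array([0.4, 0.6, 0.5, 0.2, 0.1])       # normalised (b, v, y, D50, skew)
print("predicted scour depth:", sugeno_predict(x, centers, sigmas, consequents))
```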
A cortical motor nucleus drives the basal ganglia-recipient thalamus in singing birds
Goldberg, Jesse H.
2012-01-01
The pallido-recipient thalamus transmits information from the basal ganglia (BG) to the cortex and plays a critical role in motor initiation and learning. Thalamic activity is strongly inhibited by pallidal inputs from the BG, but the role of non-pallidal inputs, such as excitatory inputs from cortex, is unclear. We have recorded simultaneously from presynaptic pallidal axon terminals and postsynaptic thalamocortical neurons in a BG-recipient thalamic nucleus necessary for vocal variability and learning in zebra finches. We found that song-locked rate modulations in the thalamus could not be explained by pallidal inputs alone, and persisted following pallidal lesion. Instead, thalamic activity was likely driven by inputs from a motor ‘cortical’ nucleus also necessary for singing. These findings suggest a role for cortical inputs to the pallido-recipient thalamus in driving premotor signals important for exploratory behavior and learning. PMID:22327474
Prestressed elastomer for energy storage
Hoppie, Lyle O.; Speranza, Donald
1982-01-01
Disclosed is a regenerative braking device for an automotive vehicle. The device includes a power isolating assembly (14), an infinitely variable transmission (20) interconnecting an input shaft (16) with an output shaft (18), and an energy storage assembly (22). The storage assembly includes a plurality of elastomeric rods (44, 46) mounted for rotation and connected in series between the input and output shafts. The elastomeric rods are prestressed along their rotational or longitudinal axes to inhibit buckling of the rods due to torsional stressing of the rods in response to relative rotation of the input and output shafts.
Feeney, Daniel F; Mani, Diba; Enoka, Roger M
2018-06-07
We investigated the associations between grooved pegboard times, force steadiness (coefficient of variation for force), and variability in an estimate of the common synaptic input to motor neurons innervating the wrist extensor muscles during steady contractions performed by young and older adults. The discharge times of motor units were derived from recordings obtained with high-density surface electrodes while participants performed steady isometric contractions at 10% and 20% of maximal voluntary contraction (MVC) force. The steady contractions were performed with a pinch grip and wrist extension, both independently (single action) and concurrently (double action). The variance in common synaptic input to motor neurons was estimated with a state-space model of the latent common input dynamics. There was a statistically significant association between the coefficient of variation for force during the steady contractions and the estimated variance in common synaptic input in young (r2 = 0.31) and older (r2 = 0.39) adults, but not between either the mean or the coefficient of variation for interspike interval of single motor units and the coefficient of variation for force. Moreover, the estimated variance in common synaptic input during the double-action task with the wrist extensors at the 20% target was significantly associated with grooved pegboard time (r2 = 0.47) for older adults, but not young adults. These findings indicate that longer pegboard times of older adults were associated with worse force steadiness and greater fluctuations in the estimated common synaptic input to motor neurons during steady contractions. This article is protected by copyright. All rights reserved.
Job Search and Social Cognitive Theory: The Role of Career-Relevant Activities
ERIC Educational Resources Information Center
Zikic, Jelena; Saks, Alan M.
2009-01-01
Social cognitive theory was used to explain the relationships between career-relevant activities (environmental and self career exploration, career resources, and training), self-regulatory variables (job search self-efficacy and job search clarity), variables from the Theory of Planned Behavior (job search attitude, subjective norm, job search…
A Model of Career Success: A Longitudinal Study of Emergency Physicians
ERIC Educational Resources Information Center
Pachulicz, Sarah; Schmitt, Neal; Kuljanin, Goran
2008-01-01
Objective and subjective career success were hypothesized to mediate the relationships between sociodemographic variables, human capital indices, individual difference variables, and organizational sponsorship as inputs and a retirement decision and intentions to leave either the specialty of emergency medicine (EM) or medicine as output…
Relationship between cotton yield and soil electrical conductivity, topography, and landsat imagery
USDA-ARS?s Scientific Manuscript database
Understanding spatial and temporal variability in crop yield is a prerequisite to implementing site-specific management of crop inputs. Apparent soil electrical conductivity (ECa), soil brightness, and topography are easily obtained data that can explain yield variability. The objectives of this stu...
Parker, Scott L; Sivaganesan, Ahilan; Chotai, Silky; McGirt, Matthew J; Asher, Anthony L; Devin, Clinton J
2018-06-15
OBJECTIVE Hospital readmissions lead to a significant increase in the total cost of care in patients undergoing elective spine surgery. Understanding factors associated with an increased risk of postoperative readmission could facilitate a reduction in such occurrences. The aims of this study were to develop and validate a predictive model for 90-day hospital readmission following elective spine surgery. METHODS All patients undergoing elective spine surgery for degenerative disease were enrolled in a prospective longitudinal registry. All 90-day readmissions were prospectively recorded. For predictive modeling, all covariates were selected by choosing those variables that were significantly associated with readmission and by incorporating other relevant variables based on clinical intuition and the Akaike information criterion. Eighty percent of the sample was randomly selected for model development and 20% for model validation. Multiple logistic regression analysis was performed with Bayesian model averaging (BMA) to model the odds of 90-day readmission. Goodness of fit was assessed via the C-statistic, that is, the area under the receiver operating characteristic curve (AUC), using the training data set. Discrimination (predictive performance) was assessed using the C-statistic, as applied to the 20% validation data set. RESULTS A total of 2803 consecutive patients were enrolled in the registry, and their data were analyzed for this study. Of this cohort, 227 (8.1%) patients were readmitted to the hospital (for any cause) within 90 days postoperatively. Variables significantly associated with an increased risk of readmission were as follows (OR [95% CI]): lumbar surgery 1.8 [1.1-2.8], government-issued insurance 2.0 [1.4-3.0], hypertension 2.1 [1.4-3.3], prior myocardial infarction 2.2 [1.2-3.8], diabetes 2.5 [1.7-3.7], and coagulation disorder 3.1 [1.6-5.8]. These variables, in addition to others determined a priori to be clinically relevant, comprised 32 inputs in the predictive model constructed using BMA. The AUC value for the training data set was 0.77 for model development and 0.76 for model validation. CONCLUSIONS Identification of high-risk patients is feasible with the novel predictive model presented herein. Appropriate allocation of resources to reduce the postoperative incidence of readmission may reduce the readmission rate and the associated health care costs.
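As a hedged stand-in for the Bayesian model averaging step (plain logistic regression here, with synthetic data simulated from the reported odds ratios), the following reproduces the 80/20 development/validation split and AUC evaluation; the intercept and covariate distributions are assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(8)
n = 2803
# synthetic binary covariates named after the reported risk factors
X = rng.integers(0, 2, (n, 6)).astype(float)      # lumbar, gov. insurance, HTN, MI, DM, coag.
logit = -3.0 + X @ np.log([1.8, 2.0, 2.1, 2.2, 2.5, 3.1])  # reported odds ratios
y = rng.random(n) < 1 / (1 + np.exp(-logit))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = LogisticRegression().fit(X_tr, y_tr)      # stand-in for the BMA model
print("validation AUC:", roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]))
```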
Input filter compensation for switching regulators
NASA Technical Reports Server (NTRS)
Kelkar, S. S.; Lee, F. C.
1983-01-01
A novel input filter compensation scheme for a buck regulator that eliminates the interaction between the input filter output impedance and the regulator control loop is presented. The scheme is implemented using a feedforward loop that senses the input filter state variables and uses this information to modulate the duty cycle signal. The feedforward design process presented is seen to be straightforward and the feedforward easy to implement. Extensive experimental data supported by analytical results show that significant performance improvement is achieved with the use of feedforward in the following performance categories: loop stability, audiosusceptibility, output impedance and transient response. The use of feedforward results in isolating the switching regulator from its power source thus eliminating all interaction between the regulator and equipment upstream. In addition the use of feedforward removes some of the input filter design constraints and makes the input filter design process simpler thus making it possible to optimize the input filter. The concept of feedforward compensation can also be extended to other types of switching regulators.
MAPSIT and a Roadmap for Lunar and Planetary Spatial Data Infrastructure
NASA Astrophysics Data System (ADS)
Radebaugh, J.; Archinal, B.; Beyer, R.; DellaGiustina, D.; Fassett, C.; Gaddis, L.; Hagerty, J.; Hare, T.; Laura, J.; Lawrence, S. J.; Mazarico, E.; Naß, A.; Patthoff, A.; Skinner, J.; Sutton, S.; Thomson, B. J.; Williams, D.
2017-10-01
We describe MAPSIT, and the development of a roadmap for lunar and planetary SDI, based on previous relevant documents and community input, and consider how to best advance lunar science, exploration, and commercial development.
The proactive brain: memory for predictions
Bar, Moshe
2009-01-01
It is proposed that the human brain is proactive in that it continuously generates predictions that anticipate the relevant future. In this proposal, analogies are derived from elementary information that is extracted rapidly from the input, to link that input with the representations that exist in memory. Finding an analogical link results in the generation of focused predictions via associative activation of representations that are relevant to this analogy, in the given context. Predictions in complex circumstances, such as social interactions, combine multiple analogies. Such predictions need not be created afresh in new situations, but rather rely on existing scripts in memory, which are the result of real as well as of previously imagined experiences. This cognitive neuroscience framework provides a new hypothesis with which to consider the purpose of memory, and can help explain a variety of phenomena, ranging from recognition to first impressions, and from the brain's ‘default mode’ to a host of mental disorders. PMID:19528004
Learning feature representations with a cost-relevant sparse autoencoder.
Längkvist, Martin; Loutfi, Amy
2015-02-01
There is an increasing interest in the machine learning community in automatically learning feature representations directly from the (unlabeled) data instead of using hand-designed features. The autoencoder is one method that can be used for this purpose. However, for data sets with a high degree of noise, a large amount of the representational capacity in the autoencoder is used to minimize the reconstruction error for these noisy inputs. This paper proposes a method that improves the feature learning process by focusing on the task-relevant information in the data. This selective attention is achieved by weighting the reconstruction error and reducing the influence of noisy inputs during the learning process. The proposed model is trained on a number of publicly available image data sets and the test error rate is compared to that of a standard sparse autoencoder and other methods, such as the denoising autoencoder and the contractive autoencoder.
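A minimal numpy sketch of the central idea, weighting the reconstruction error so that noisier inputs contribute less to learning; the architecture, weighting scheme and data are illustrative assumptions, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(9)
X = rng.normal(0, 1, (500, 30))
noise_level = rng.uniform(0.0, 2.0, 500)          # some inputs are much noisier
X_noisy = X + noise_level[:, None] * rng.normal(0, 1, X.shape)

d, h, lr = 30, 10, 0.01
W1 = rng.normal(0, 0.1, (d, h)); W2 = rng.normal(0, 0.1, (h, d))
weights = 1.0 / (1.0 + noise_level)               # down-weight noisy samples

for epoch in range(200):
    H = np.tanh(X_noisy @ W1)                     # encoder
    R = H @ W2                                    # decoder (linear)
    err = (R - X_noisy) * weights[:, None]        # gradient of weighted squared error
    gW2 = H.T @ err                               # backprop through decoder
    gW1 = X_noisy.T @ ((err @ W2.T) * (1 - H ** 2))   # backprop through tanh encoder
    W1 -= lr * gW1 / len(X_noisy); W2 -= lr * gW2 / len(X_noisy)

print("mean squared weighted error:", np.mean(err ** 2))
```

The only change relative to a plain autoencoder is the `weights` factor in the error, which reallocates representational capacity toward the cleaner, task-relevant inputs.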
Characterisation of vibration input to flywheel used on urban bus
NASA Astrophysics Data System (ADS)
Wang, L.; Kanarachos, S.; Christensen, J.
2016-09-01
Vibration induced by the road surface has an impact on the durability and reliability of electrical and mechanical components attached to the vehicle. Little research has been published on the durability assessment of a flywheel energy recovery system installed on city and district buses. Relevant international standards and legislation were reviewed and large discrepancies were found among them; in addition, there are no standards developed exclusively for kinetic energy recovery systems on vehicles. This paper describes the experimental assessment of road surface vibration input to the flywheel on a bus as obtained at the MIRA Proving Ground. Power spectral densities have been developed based on the raw data obtained during the experimentation. Validation of this model will be carried out through accelerated lifetime tests on a shaker rig using an accumulated profile based on the theory of fatigue damage equivalence in the time and frequency domains, aligned with the model predictions.
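Power spectral densities of the kind described can be estimated from raw accelerometer records with Welch's method; the sampling rate, signal content and units below are hypothetical.

```python
import numpy as np
from scipy.signal import welch

rng = np.random.default_rng(10)
fs = 1000.0                                   # sampling rate (Hz), assumption
t = np.arange(0, 60, 1 / fs)
# surrogate accelerometer record: broadband road noise + a suspension resonance
accel = rng.normal(0, 0.5, t.size) + 0.8 * np.sin(2 * np.pi * 12 * t)

f, Pxx = welch(accel, fs=fs, nperseg=4096)    # power spectral density (g^2/Hz)
print("dominant frequency: %.1f Hz" % f[np.argmax(Pxx)])
```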
Arena, Paolo; Calí, Marco; Patané, Luca; Portera, Agnese; Strauss, Roland
2016-09-01
Classification and sequence learning are relevant capabilities used by living beings to extract complex information from the environment for behavioral control. The insect world is full of examples where the presentation time of specific stimuli shapes the behavioral response. On the basis of previously developed neural models, inspired by Drosophila melanogaster, a new architecture for classification and sequence learning is here presented under the perspective of the Neural Reuse theory. Classification of relevant input stimuli is performed through resonant neurons, activated by the complex dynamics generated in a lattice of recurrent spiking neurons modeling the insect Mushroom Bodies neuropile. The network devoted to context formation is able to reconstruct the learned sequence and also to trace the subsequences present in the provided input. A sensitivity analysis to parameter variation and noise is reported. Experiments on a roving robot are reported to show the capabilities of the architecture used as a neural controller.
Decoding thalamic afferent input using microcircuit spiking activity.
Sederberg, Audrey J; Palmer, Stephanie E; MacLean, Jason N
2015-04-01
A behavioral response appropriate to a sensory stimulus depends on the collective activity of thousands of interconnected neurons. The majority of cortical connections arise from neighboring neurons, and thus understanding the cortical code requires characterizing information representation at the scale of the cortical microcircuit. Using two-photon calcium imaging, we densely sampled the thalamically evoked response of hundreds of neurons spanning multiple layers and columns in thalamocortical slices of mouse somatosensory cortex. We then used a biologically plausible decoder to characterize the representation of two distinct thalamic inputs, at the level of the microcircuit, to reveal those aspects of the activity pattern that are likely relevant to downstream neurons. Our data suggest a sparse code, distributed across lamina, in which a small population of cells carries stimulus-relevant information. Furthermore, we find that, within this subset of neurons, decoder performance improves when noise correlations are taken into account. Copyright © 2015 the American Physiological Society.
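A sketch of the noise-correlation point on synthetic data: a Gaussian (plug-in) decoder that models the full trial-to-trial covariance is compared against one that ignores correlations by keeping only the diagonal. The decoder is a generic stand-in, not the paper's biologically plausible one, and all population statistics are assumptions.

```python
import numpy as np

rng = np.random.default_rng(11)
n_cells, n_trials = 20, 400
mu = [np.zeros(n_cells), np.ones(n_cells) * 0.5]     # mean response to stimuli A, B
L = rng.normal(0, 0.3, (n_cells, n_cells))
cov = 0.5 * np.eye(n_cells) + L @ L.T                # correlated trial-to-trial noise

X = np.vstack([rng.multivariate_normal(m, cov, n_trials) for m in mu])
y = np.repeat([0, 1], n_trials)

def gaussian_decode(X, mu, cov):
    """Classify each trial by Gaussian log-likelihood under each stimulus."""
    icov = np.linalg.inv(cov)
    ll = np.stack([-0.5 * np.einsum('ij,jk,ik->i', X - m, icov, X - m) for m in mu])
    return ll.argmax(axis=0)

full = (gaussian_decode(X, mu, cov) == y).mean()
diag = (gaussian_decode(X, mu, np.diag(np.diag(cov))) == y).mean()
print(f"with noise correlations: {full:.2f}, ignoring them: {diag:.2f}")
```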
Production Economics of Private Forestry: A Comparison of Industrial and Nonindustrial Forest Owners
David H. Newman; David N. Wear
1993-01-01
This paper compares the production behavior of industrial and nonindustrial private forestland owners in the southeastern U.S. using a restricted profit function. Profits are modeled as a function of two outputs (sawtimber and pulpwood), one variable input (regeneration effort), and two quasi-fixed inputs (land and growing stock). Although an identical profit function is...
Nitrogen input from residential lawn care practices in suburban watersheds in Baltimore county, MD
Neely L. Law; Lawrence E. Band; J. Morgan Grove
2004-01-01
A residential lawn care survey was conducted as part of the Baltimore Ecosystem Study, a Long-term Ecological Research project funded by the National Science Foundation and collaborating agencies, to estimate the nitrogen input to urban watersheds from lawn care practices. The variability in the fertilizer N application rates and the factors affecting the application...
Darold P. Batzer; Brian J. Palik
2007-01-01
Aquatic invertebrates are crucial components of foodwebs in seasonal woodland ponds, and leaf litter is probably the most important food resource for those organisms. We quantified the influence of leaf litter inputs on aquatic invertebrates in two seasonal woodland ponds using an interception experiment. Ponds were hydrologically split using a sandbag-plastic barrier...
J.W. Hornbeck; S.W. Bailey; D.C. Buso; J.B. Shanley
1997-01-01
Chemistry of precipitation and streamwater and resulting input-output budgets for nutrient ions were determined concurrently for three years on three upland, forested watersheds located within an 80 km radius in central New England. Chemistry of precipitation and inputs of nutrients via wet deposition were similar among the three watersheds and were generally typical...
Fusion of Hard and Soft Information in Nonparametric Density Estimation
2015-06-10
Density estimation is needed for the generation of input densities to simulation and stochastic optimization models, in the analysis of simulation output, and when instantiating probability models; the generation of probability densities for input random variables is an essential step in simulation analysis and stochastic optimization. We adopt a constrained maximum likelihood approach that fuses hard data with soft information.
Variable input observer for structural health monitoring of high-rate systems
NASA Astrophysics Data System (ADS)
Hong, Jonathan; Laflamme, Simon; Cao, Liang; Dodson, Jacob
2017-02-01
The development of high-rate structural health monitoring methods is intended to provide damage detection on timescales of 10 µs to 10 ms, where speed of detection is critical to maintaining structural integrity. Here, a novel Variable Input Observer (VIO) coupled with an adaptive observer is proposed as a potential solution for complex high-rate problems. The VIO is designed to adapt its input space based on real-time identification of the system's essential dynamics. By selecting appropriate time-delayed coordinates, defined by both a time delay and an embedding dimension, the proper input space is chosen, which allows more accurate estimation of the current state and faster convergence. The optimal time delay is estimated based on mutual information, and the embedding dimension is based on false nearest neighbors. A simulation of the VIO is conducted on a two degree-of-freedom system with simulated damage. Results are compared with an adaptive Luenberger observer, a fixed time-delay observer, and a Kalman filter. Under its preliminary design, the VIO converges significantly faster than the Luenberger and fixed observers. It performed similarly to the Kalman filter in terms of convergence, but with greater accuracy.
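A minimal sketch of the delay-selection step as described: estimate the mutual information between the signal and its lagged copy and take the first local minimum. Bin counts and the test signal are illustrative; false-nearest-neighbors selection of the embedding dimension would follow analogously.

```python
import numpy as np

def mutual_information(x, y, bins=32):
    """Histogram estimate of the mutual information between two series."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy /= pxy.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0
    return (pxy[nz] * np.log(pxy[nz] / np.outer(px, py)[nz])).sum()

def first_minimum_delay(signal, max_lag=100):
    """Optimal embedding delay: first local minimum of the MI-vs-lag curve."""
    mi = [mutual_information(signal[:-k], signal[k:]) for k in range(1, max_lag)]
    for k in range(1, len(mi) - 1):
        if mi[k] < mi[k - 1] and mi[k] < mi[k + 1]:
            return k + 1
    return int(np.argmin(mi)) + 1

t = np.linspace(0, 100, 5000)
x = np.sin(t) + 0.1 * np.random.default_rng(12).normal(size=t.size)
print("selected delay:", first_minimum_delay(x))
```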
Family medicine outpatient encounters are more complex than those of cardiology and psychiatry.
Katerndahl, David; Wood, Robert; Jaén, Carlos Roberto
2011-01-01
Comparison studies suggest that the guideline-concordant care provided for specific medical conditions is less optimal in primary care compared with cardiology and psychiatry settings. The purpose of this study is to estimate the relative complexity of patient encounters in general/family practice, cardiology, and psychiatry settings. A secondary analysis of the 2000 National Ambulatory Medical Care Survey data for ambulatory patients seen in general/family practice, cardiology, and psychiatry settings was performed. The complexity for each variable was estimated as the quantity weighted by variability and diversity. There is minimal difference in the unadjusted input and total encounter complexity of general/family practice and cardiology; psychiatry's input is less complex. Cardiology encounters involved more input quantitatively, but the diversity of general/family practice input eliminated the difference. Cardiology also involved more complex output. However, when the duration of visit is factored in, the complexity of care provided per hour in general/family practice is 33% more relative to cardiology and 5 times more relative to psychiatry. Care during family physician visits is more complex per hour than the care during visits to cardiologists or psychiatrists. This may account for a lower rate of completion of process items measured for quality of care.
Uncertainty Analysis and Parameter Estimation For Nearshore Hydrodynamic Models
NASA Astrophysics Data System (ADS)
Ardani, S.; Kaihatu, J. M.
2012-12-01
Numerical models represent deterministic approaches used for the relevant physical processes in the nearshore. The complexity of the physics of the model and the uncertainty involved in the model inputs compel us to apply a stochastic approach to analyze the robustness of the model. The Bayesian inverse problem is one powerful way to estimate the important input model parameters (determined by a priori sensitivity analysis) and can be used for uncertainty analysis of the outputs. Bayesian techniques can be used to find the range of most probable parameters based on the probability of the observed data and the residual errors. In this study, the effect of input data involving lateral (Neumann) boundary conditions, bathymetry and offshore wave conditions on nearshore numerical models is considered. Monte Carlo simulation is applied to a deterministic numerical model (the Delft3D modeling suite for coupled waves and flow) for the resulting uncertainty analysis of the outputs (wave height, flow velocity, mean sea level, etc.). Uncertainty analysis of the outputs is performed by random sampling from the input probability distribution functions and running the model as many times as required until consistent results are achieved. The case study used in this analysis is the Duck94 experiment, which was conducted at the U.S. Army Field Research Facility at Duck, North Carolina, USA in the fall of 1994. The joint probability of model parameters relevant for the Duck94 experiments will be found using the Bayesian approach. We will further show that, by using Bayesian techniques to estimate the optimized model parameters as inputs and applying them for uncertainty analysis, we can obtain more consistent results than by using the prior information for the input data; that is, the variation of the uncertain parameters will decrease and the probability of the observed data will improve as well. Keywords: Monte Carlo Simulation, Delft3D, uncertainty analysis, Bayesian techniques, MCMC
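A toy Metropolis-Hastings sketch of the Bayesian parameter estimation described, with a trivial linear model standing in for a Delft3D run (which would be far too expensive to call inside the loop this way); the flat prior, noise level and proposal scale are assumptions.

```python
import numpy as np

rng = np.random.default_rng(13)

def model(theta, x):
    return theta[0] * x + theta[1]            # cheap stand-in for a Delft3D run

x_obs = np.linspace(0, 1, 25)
y_obs = 2.0 * x_obs + 0.5 + rng.normal(0, 0.1, x_obs.size)   # synthetic observations

def log_post(theta, sigma=0.1):
    resid = y_obs - model(theta, x_obs)
    return -0.5 * np.sum((resid / sigma) ** 2)     # Gaussian likelihood, flat prior

theta = np.array([1.0, 0.0])
chain = []
for _ in range(20000):
    prop = theta + rng.normal(0, 0.05, 2)           # random-walk proposal
    if np.log(rng.random()) < log_post(prop) - log_post(theta):
        theta = prop                                # accept
    chain.append(theta)
chain = np.array(chain[5000:])                      # discard burn-in
print("posterior means:", chain.mean(axis=0), "stds:", chain.std(axis=0))
```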
Chien, Jung Hung; Mukherjee, Mukul; Siu, Ka-Chun; Stergiou, Nicholas
2016-05-01
When maintaining postural stability under increased sensory conflict, a more rigid response is used in which the available degrees of freedom are essentially frozen. The current study investigated whether such a strategy is also utilized during more dynamic situations of postural control, as is the case with walking. This study attempted to answer this question by using the Locomotor Sensory Organization Test (LSOT). This apparatus incorporates SOT-inspired perturbations of the visual and the somatosensory system. Ten healthy young adults performed the six conditions of the traditional SOT and the corresponding six conditions on the LSOT. The temporal structure of sway variability was evaluated from all conditions. The results showed that in the anterior-posterior direction somatosensory input is crucial for postural control in both walking and standing; visual input also had an effect but was not as prominent as the somatosensory input. In the medial-lateral direction and with respect to walking, visual input has a much larger effect than somatosensory input. This is possibly due to the added contributions of peripheral vision during walking; in standing such contributions may not be as significant for postural control. In sum, as sensory conflict increases, more rigid and regular sway patterns are found during standing, confirming previous results in the literature; the opposite was the case with walking, where more exploratory and adaptive movement patterns were present.
A dual-input nonlinear system analysis of autonomic modulation of heart rate
NASA Technical Reports Server (NTRS)
Chon, K. H.; Mullen, T. J.; Cohen, R. J.
1996-01-01
Linear analyses of fluctuations in heart rate and other hemodynamic variables have been used to elucidate cardiovascular regulatory mechanisms. The role of nonlinear contributions to fluctuations in hemodynamic variables has not been fully explored. This paper presents a nonlinear system analysis of the effect of fluctuations in instantaneous lung volume (ILV) and arterial blood pressure (ABP) on heart rate (HR) fluctuations. To successfully employ a nonlinear analysis based on the Laguerre expansion technique (LET), we introduce an efficient procedure for broadening the spectral content of the ILV and ABP inputs to the model by adding white noise. Results from computer simulations demonstrate the effectiveness of broadening the spectral band of input signals to obtain consistent and stable kernel estimates with the use of the LET. Without broadening the band of the ILV and ABP inputs, the LET did not provide stable kernel estimates. Moreover, we extend the LET to the case of multiple inputs in order to accommodate the analysis of the combined effect of ILV and ABP on heart rate. Analyses of data based on the second-order Volterra-Wiener model reveal an important contribution of the second-order kernels to the description of the effect of lung volume and arterial blood pressure on heart rate. Furthermore, physiological effects of the autonomic blocking agents propranolol and atropine on changes in the first- and second-order kernels are also discussed.
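A first-order sketch of the Laguerre expansion technique: discrete Laguerre filter banks driven by white-noise surrogate inputs (echoing the spectral-broadening step), with kernel coefficients fitted by least squares. Second-order kernels would add pairwise products of the filter outputs as regressors; the pole value and model orders are assumptions.

```python
import numpy as np
from scipy.signal import lfilter

def laguerre_filter_bank(u, n_basis=5, alpha=0.6):
    """Outputs of a discrete Laguerre filter cascade driven by input u."""
    outs = []
    v = lfilter([np.sqrt(1 - alpha ** 2)], [1, -alpha], u)   # first Laguerre filter
    outs.append(v)
    for _ in range(n_basis - 1):
        v = lfilter([-alpha, 1], [1, -alpha], v)             # all-pass cascade stage
        outs.append(v)
    return np.column_stack(outs)

rng = np.random.default_rng(14)
T = 2000
ilv = rng.normal(size=T)                      # white-noise surrogate ILV input
abp = rng.normal(size=T)                      # white-noise surrogate ABP input
# synthetic HR: linear dynamics on each input (stand-in for real data)
hr = 0.5 * lfilter([1], [1, -0.9], ilv) - 0.3 * lfilter([1], [1, -0.8], abp)

# expand both inputs on the Laguerre basis; fit first-order kernels by least squares
Phi = np.hstack([laguerre_filter_bank(ilv), laguerre_filter_bank(abp)])
coef, *_ = np.linalg.lstsq(Phi, hr, rcond=None)
print("fit R^2:", 1 - np.var(hr - Phi @ coef) / np.var(hr))
```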
NASA Astrophysics Data System (ADS)
Batzias, Dimitris F.; Ifanti, Konstantina
2012-12-01
Process simulation models are usually empirical, so they serve poorly as carriers for knowledge acquisition and technology transfer, since their parameters have no physical meaning that would facilitate verification of their dependence on the production conditions; in such a case, a 'black box' regression model or a neural network might be used simply to connect input-output characteristics. In several cases, scientific/mechanistic models may prove valid, in which case parameter identification is required to find the independent/explanatory variables and parameters on which each parameter depends. This is a difficult task, since the phenomenological level at which each parameter is defined differs. In this paper, we have developed a methodological framework in the form of an algorithmic procedure to solve this problem. The main parts of this procedure are: (i) stratification of relevant knowledge in discrete layers immediately adjacent to the layer to which the initial model under investigation belongs, (ii) design of the ontology corresponding to these layers, (iii) elimination of the less relevant parts of the ontology by thinning, (iv) retrieval of the stronger interrelations between the remaining nodes within the revised ontological network, and (v) parameter identification taking into account the most influential interrelations revealed in (iv). The functionality of this methodology is demonstrated through two representative case examples on wastewater treatment.
Looking for a relevant potential evapotranspiration model at the watershed scale
NASA Astrophysics Data System (ADS)
Oudin, L.; Hervieu, F.; Michel, C.; Perrin, C.; Anctil, F.; Andréassian, V.
2003-04-01
In this paper, we try to identify the most relevant approach to calculating Potential Evapotranspiration (PET) for use in a daily watershed model, to answer the following question: "how can we use commonly available atmospheric parameters to represent the evaporative demand at the catchment scale?". Hydrologists generally regard the Penman model as the ideal model, owing to its good agreement with lysimeter measurements and its physically based formulation. However, in real-world engineering situations, where meteorological stations are scarce, hydrologists are often constrained to use other PET formulae with lower data requirements and/or long-term averages of PET values (the rationale being that PET is an inherently conservative variable). We chose to test 28 commonly used PET models coupled with 4 different daily watershed models. For each test, we compare both PET input options: actual data and long-term average data. The comparison is made in terms of streamflow simulation efficiency over a large sample of 308 watersheds. The watersheds are located in France, Australia and the United States of America and represent varied climates. Strikingly, we find no systematic improvement in watershed model efficiency when using actual PET series instead of long-term averages. This suggests either that watershed models may not conveniently use the climatic information contained in PET values or that formulae are only awkward indicators of the real PET which watershed models need.
Dynamics of water droplets detached from porous surfaces of relevance to PEM fuel cells.
Theodorakakos, A; Ous, T; Gavaises, M; Nouri, J M; Nikolopoulos, N; Yanagihara, H
2006-08-15
The detachment of liquid droplets from porous material surfaces used with proton exchange membrane (PEM) fuel cells under the influence of a cross-flowing air is investigated computationally and experimentally. CCD images taken on a purpose-built transparent fuel cell have revealed that the water produced within the PEM is forming droplets on the surface of the gas-diffusion layer. These droplets are swept away if the velocity of the flowing air is above a critical value for a given droplet size. Static and dynamic contact angle measurements for three different carbon gas-diffusion layer materials obtained inside a transparent air-channel test model have been used as input to the numerical model; the latter is based on a Navier-Stokes equations flow solver incorporating the volume of fluid (VOF) two-phase flow methodology. Variable contact angle values around the gas-liquid-solid contact-line as well as their dynamic change during the droplet shape deformation process, have allowed estimation of the adhesion force between the liquid droplet and the solid surface and successful prediction of the separation line at which droplets loose their contact from the solid surface under the influence of the air stream flowing around them. Parametric studies highlight the relevant importance of various factors affecting the detachment of the liquid droplets from the solid surface.
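The adhesion force estimated from the measured advancing/receding contact angles is commonly approximated by a Furmidge-type relation; whether the paper uses exactly this form is an assumption:

```latex
% Furmidge-type estimate of the retention (adhesion) force on a droplet of
% contact-line width w, with surface tension \gamma and dynamic contact angles:
F_{\mathrm{adh}} \;\approx\; \gamma\, w \left( \cos\theta_{\mathrm{receding}} - \cos\theta_{\mathrm{advancing}} \right)
% Detachment is expected once the aerodynamic force of the cross-flowing air
% on the droplet exceeds F_adh.
```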
Tidal and spatial variability of nitrous oxide (N2O) in Sado estuary (Portugal)
NASA Astrophysics Data System (ADS)
Gonçalves, Célia; Brogueira, Maria José; Nogueira, Marta
2015-12-01
Estimating nitrous oxide (N2O) fluxes is fundamental to assessing their impact on global warming. The tidal and spatial variability of N2O and the air-sea fluxes in the Sado estuary in July/August 2007 are examined. Measurements of N2O and other relevant environmental parameters (temperature, salinity, dissolved oxygen and dissolved inorganic nitrogen - nitrate plus nitrite and ammonium) were recorded during two diurnal tidal cycles performed in the Bay and Marateca region and along the estuary during ebb, at spring tide. N2O presented tidal and spatial variability, varying spatially from 5.0 nmol L-1 in the Marateca region to 12.5 nmol L-1 at the Sado river input. Although the Sado river may constitute a considerable N2O source to the estuary, the respective chemical signal was rapidly lost in the main body of the estuary owing to the low river flow during the sampling period. N2O varied similarly with tide, between 5.2 nmol L-1 (Marateca) and 10.0 nmol L-1 (Sado Bay), with the maximum value reached two hours after the flooding period. The influence of N2O-enriched upwelled seawater (˜10.0 nmol L-1) was clearly visible at the estuary mouth and apparently represented an important contribution of N2O to the main body of the Sado estuary. Despite the high water-column oxygen saturation in most of the Sado estuary, nitrification did not seem to be a relevant process for N2O production, probably because the concentration of the substrate, NH4+, was not adequate for this process to occur. Most of the estuary functioned as an N2O source, and only the Marateca zone acted as an N2O sink. The N2O emission from the Sado estuary was estimated to be 3.7 Mg N-N2O yr-1 (FC96) (4.4 Mg N-N2O yr-1, FRC01). These results have implications for future sampling and scaling strategies for estimating greenhouse gas (GHG) fluxes in tidal ecosystems.
Advantages and applicability of commonly used homogenisation methods for climate data
NASA Astrophysics Data System (ADS)
Ribeiro, Sara; Caineta, Júlio; Henriques, Roberto; Soares, Amílcar; Costa, Ana Cristina
2014-05-01
Homogenisation of climate data is a very relevant subject, since these data are required as input in a wide range of studies, such as atmospheric modelling, weather forecasting, climate change monitoring, or hydrological and environmental projects. Often, climate data series include non-natural irregularities which have to be detected and removed prior to their use, otherwise they may generate biased and erroneous results. Relocation of weather stations or changes in the measuring instruments are amongst the most relevant causes of these inhomogeneities. Depending on the climate variable, its temporal resolution and spatial continuity, homogenisation methods can be more or less effective. For example, due to its natural variability, precipitation is identified as a very challenging variable to homogenise. During the last two decades, numerous methods have been proposed to homogenise climate data. In order to compare, evaluate and develop those methods, the European project COST Action ES0601, Advances in homogenisation methods of climate series: an integrated approach (HOME), was launched in 2008. Existing homogenisation methods were improved based on the benchmark exercise issued by this project. A recent approach based on Direct Sequential Simulation (DSS), not yet evaluated by the benchmark exercise, is also presented as an innovative methodology for homogenising climate data series. DSS has already proved to be a successful geostatistical method in environmental and hydrological studies, and it provides promising results for the homogenisation of climate data. Since DSS is a geostatistical stochastic approach, it accounts for the joint spatial and temporal dependence between observations, as well as for the relative importance of stations in terms of both distance and correlation. This work presents a chronological review of the most commonly used homogenisation methods for climate data and the available software packages. A short description and classification is provided for each method. Their advantages and applicability are discussed based on a literature review and on the results of the HOME project. Acknowledgements: The authors gratefully acknowledge the financial support of "Fundação para a Ciência e Tecnologia" (FCT), Portugal, through the research project PTDC/GEO-MET/4026/2012 ("GSIMCLI - Geostatistical simulation with local distributions for the homogenization and interpolation of climate data").
Neuronal pattern separation of motion-relevant input in LIP activity
Berberian, Nareg; MacPherson, Amanda; Giraud, Eloïse; Richardson, Lydia
2016-01-01
In various regions of the brain, neurons discriminate sensory stimuli by decreasing the similarity between ambiguous input patterns. Here, we examine whether this process of pattern separation may drive the rapid discrimination of visual motion stimuli in the lateral intraparietal area (LIP). Starting with a simple mean-rate population model that captures neuronal activity in LIP, we show that overlapping input patterns can be reformatted dynamically to give rise to separated patterns of neuronal activity. The population model predicts that a key ingredient of pattern separation is the presence of heterogeneity in the response of individual units. Furthermore, the model proposes that pattern separation relies on heterogeneity in the temporal dynamics of neural activity and not merely in the mean firing rates of individual neurons over time. We confirm these predictions in recordings of macaque LIP neurons and show that the accuracy of pattern separation is a strong predictor of behavioral performance. Overall, the results suggest that LIP relies on neuronal pattern separation to facilitate decision-relevant discrimination of sensory stimuli. NEW & NOTEWORTHY A new hypothesis is proposed on the role of the lateral intraparietal (LIP) region of cortex during rapid decision making. This hypothesis suggests that LIP alters the representation of ambiguous inputs to reduce their overlap, thus improving sensory discrimination. A combination of computational modeling, theoretical analysis, and electrophysiological data shows that the pattern separation hypothesis links neural activity to behavior and offers novel predictions on the role of LIP during sensory discrimination. PMID:27881719
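To illustrate the separation mechanism the abstract describes, the toy sketch below passes two overlapping input patterns through a mean-rate population with heterogeneous gains and thresholds and compares input and output similarity. All numbers are hypothetical and the model is far simpler than the paper's.

```python
# Minimal illustration of pattern separation in a mean-rate population:
# two overlapping input patterns become less similar after passing through
# units with heterogeneous gains and thresholds (all values hypothetical).
import numpy as np

rng = np.random.default_rng(0)
n = 200                                   # population size
a = rng.random(n)
b = 0.9 * a + 0.1 * rng.random(n)         # input pattern overlapping with a

gains = rng.uniform(0.5, 4.0, n)          # heterogeneous unit gains
thresholds = rng.uniform(0.0, 0.8, n)     # heterogeneous thresholds

def rates(x):
    return np.maximum(gains * (x - thresholds), 0.0)   # rectified mean rates

def cosine(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

print("input similarity: ", cosine(a, b))
print("output similarity:", cosine(rates(a), rates(b)))
```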
Methods for freshwater riverine input into regional ocean models
NASA Astrophysics Data System (ADS)
Herzfeld, M.
2015-06-01
The input of freshwater at the coast in regional models is a non-trivial exercise that has been studied extensively in the past. Several issues are of relevance. First, estuaries process water properties along their length, so that while freshwater may enter at the estuary head, it is no longer fresh at the mouth. Second, models create a numerical response that results in excessive upstream or offshore transport compared to what is typically observed. The cause of this has been traced to the lack of landward flow at the coast where freshwater is input. In this study we assess the performance of various methods of freshwater input in coarse-resolution regional models where the estuary cannot be explicitly resolved, and present a formulation that attempts to account for upstream flow in the salt wedge and for in-estuary mixing that elevates salinity at the mouth.
Safety and Certification Considerations for Expanding the Use of UAS in Precision Agriculture
NASA Technical Reports Server (NTRS)
Hayhurst, Kelly J.; Maddalon, Jeffrey M.; Neogi, Natasha A.; Vertstynen, Harry A.
2016-01-01
The agricultural community is actively engaged in adopting new technologies such as unmanned aircraft systems (UAS) to help assess the condition of crops and develop appropriate treatment plans. In the United States, agricultural use of UAS has largely been limited to small UAS, generally weighing less than 55 lb and operating within the line of sight of a remote pilot. A variety of small UAS are being used to monitor and map crops, while only a few are being used to apply agricultural inputs based on the results of remote sensing. Larger UAS with substantial payload capacity could provide an option for site-specific application of agricultural inputs in a timely fashion, without substantive damage to the crops or soil. A recent study by the National Aeronautics and Space Administration (NASA) investigated certification requirements needed to enable the use of larger UAS to support the precision agriculture industry. This paper provides a brief introduction to aircraft certification relevant to agricultural UAS, an overview of and results from the NASA study, and a discussion of how those results might affect the precision agriculture community. Specific topics of interest include business model considerations for unmanned aerial applicators and a comparison with current means of variable rate application. The intent of the paper is to inform the precision agriculture community of evolving technologies that will enable broader use of unmanned vehicles to reduce costs, reduce environmental impacts, and enhance yield, especially for specialty crops that are grown on small to medium size farms.
Revisiting the Robustness of PET-Based Textural Features in the Context of Multi-Centric Trials.
Bailly, Clément; Bodet-Milin, Caroline; Couespel, Solène; Necib, Hatem; Kraeber-Bodéré, Françoise; Ansquer, Catherine; Carlier, Thomas
2016-01-01
This study aimed to investigate the variability of textural features (TF) as a function of acquisition and reconstruction parameters within the context of multi-centric trials. The robustness of 15 selected TFs was studied as a function of the number of iterations, the post-filtering level, input data noise, the reconstruction algorithm and the matrix size. A combination of several reconstruction and acquisition settings was devised to mimic multi-centric conditions. We retrospectively studied data from 26 patients enrolled in a diagnostic study that aimed to evaluate the performance of PET/CT 68Ga-DOTANOC in gastro-entero-pancreatic neuroendocrine tumors. Forty-one tumors were extracted and served as the database. The coefficient of variation (COV) or the absolute deviation (for the noise study) was derived and compared statistically with SUVmax and SUVmean results. The majority of investigated TFs can be used in a multi-centric context when each parameter is considered individually. The impact of voxel size and of noise in the input data was predominant, as only 4 TFs presented high/intermediate robustness against SUV-based metrics (Entropy, Homogeneity, RP and ZP). When combining several reconstruction settings to mimic multi-centric conditions, most of the investigated TFs were robust enough against SUVmax, except Correlation, Contrast, LGRE, LGZE and LZLGE. Considering previously published results on either reproducibility or sensitivity to the delineation approach, together with our findings, it is feasible to consider Homogeneity, Entropy, Dissimilarity, HGRE, HGZE and ZP as relevant for use in multi-centric trials.
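The robustness grading rests on the coefficient of variation of each feature across settings. A minimal sketch of that computation follows, with hypothetical feature values rather than the study's data.

```python
# Coefficient of variation (COV) of a textural feature across reconstruction
# settings, as used to grade robustness; feature values are hypothetical.
import numpy as np

def cov_percent(values):
    values = np.asarray(values, dtype=float)
    return 100.0 * values.std(ddof=1) / values.mean()

# e.g. Entropy for one lesion under five reconstruction settings
entropy = [6.12, 6.05, 6.20, 6.08, 6.15]
suv_max = [7.9, 8.6, 7.4, 8.1, 8.8]

print(f"COV(entropy) = {cov_percent(entropy):.1f}%")   # robust if small
print(f"COV(SUVmax)  = {cov_percent(suv_max):.1f}%")   # reference metric
```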
Sedimentation in the chaparral: how do you handle unusual events?
Raymond M. Rice
1982-01-01
Processes of erosion and sedimentation in steep chaparral drainage basins of southern California are described. The word "hyperschedastic" is coined to describe the sedimentation regime, which is highly variable because of the interaction of marginally stable drainage basins, great variability in storm inputs, and the random occurrence...
A Longitudinal Study of Occupational Aspirations and Attainments of Iowa Young Adults.
ERIC Educational Resources Information Center
Yoesting, Dean R.; And Others
The causal linkage between socioeconomic status, occupational and educational aspiration, and attainment was examined in this attempt to test an existing theoretical model which used socioeconomic status as a major input variable, with significant other influence as a crucial intervening variable between socioeconomic status and aspiration. The…
NASA Astrophysics Data System (ADS)
Fioretti, Guido
2007-02-01
The production function maps the inputs of a firm or a productive system onto its outputs. This article expounds generalizations of the production function that include state variables, organizational structures and increasing returns to scale. These extensions are needed in order to explain the regularities in the empirical distributions of certain economic variables.
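To make the notion concrete, here is a minimal sketch of one such generalization: a Cobb-Douglas form with exponents summing above one (increasing returns) and a state variable shifting total factor productivity. The functional form and parameters are illustrative assumptions, not Fioretti's specification.

```python
# Generalized production function sketch: output from capital K and labor L,
# with increasing returns when alpha + beta > 1, plus a state variable
# (accumulated knowledge) shifting total factor productivity. Illustrative only.

def output(K: float, L: float, knowledge: float,
           A: float = 1.0, alpha: float = 0.7, beta: float = 0.5) -> float:
    tfp = A * (1.0 + 0.1 * knowledge)        # state variable raises TFP
    return tfp * K**alpha * L**beta          # alpha+beta = 1.2: increasing returns

print(output(K=10, L=10, knowledge=0))       # baseline
print(output(K=20, L=20, knowledge=0))       # doubling inputs more than doubles output
```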
Guo, Weixing; Langevin, C.D.
2002-01-01
This report documents a computer program (SEAWAT) that simulates variable-density, transient, ground-water flow in three dimensions. The source code for SEAWAT was developed by combining MODFLOW and MT3DMS into a single program that solves the coupled flow and solute-transport equations. The SEAWAT code follows a modular structure, and thus, new capabilities can be added with only minor modifications to the main program. SEAWAT reads and writes standard MODFLOW and MT3DMS data sets, although some extra input may be required for some SEAWAT simulations. This means that many of the existing pre- and post-processors can be used to create input data sets and analyze simulation results. Users familiar with MODFLOW and MT3DMS should have little difficulty applying SEAWAT to problems of variable-density ground-water flow.
Trends in Solidification Grain Size and Morphology for Additive Manufacturing of Ti-6Al-4V
NASA Astrophysics Data System (ADS)
Gockel, Joy; Sheridan, Luke; Narra, Sneha P.; Klingbeil, Nathan W.; Beuth, Jack
2017-12-01
Metal additive manufacturing (AM) is used for both prototyping and production of final parts. Therefore, there is a need to predict and control the microstructural size and morphology. Process mapping is an approach that represents AM process outcomes in terms of input variables. In this work, analytical, numerical, and experimental approaches are combined to provide a holistic view of trends in the solidification grain structure of Ti-6Al-4V across a wide range of AM process input variables. The thermal gradient is shown to vary significantly through the depth of the melt pool, which precludes development of fully equiaxed microstructure throughout the depth of the deposit within any practical range of AM process variables. A strategy for grain size control is demonstrated based on the relationship between melt pool size and grain size across multiple deposit geometries, and additional factors affecting grain size are discussed.
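A common analytical ingredient of such process maps is the Rosenthal moving point-source solution, from which the through-depth variation of the thermal gradient G can be estimated. The sketch below uses approximate Ti-6Al-4V properties and illustrative process parameters; the point-source solution diverges near the origin, so only depths away from the source are sampled.

```python
# Rosenthal moving point-source solution, often used in AM process mapping to
# estimate how the thermal gradient G varies through the melt pool depth.
# Material values approximate Ti-6Al-4V; process parameters are illustrative.
import numpy as np

T0, P, eta = 353.0, 300.0, 0.4        # preheat [K], beam power [W], absorptivity
k, alpha = 7.0, 3.0e-6                # conductivity [W/m/K], diffusivity [m^2/s]
v = 1.0                               # travel speed [m/s]

def T(x, z):
    """Temperature at (x, 0, z); x along travel, z depth below the surface."""
    R = np.sqrt(x**2 + z**2)
    return T0 + eta * P / (2 * np.pi * k * R) * np.exp(-v * (R + x) / (2 * alpha))

depths = np.array([20e-6, 50e-6, 100e-6])          # m, directly below the source
dz = 1e-7
G = -(T(0.0, depths + dz) - T(0.0, depths)) / dz   # numerical -dT/dz
for z, g in zip(depths, G):
    print(f"z = {z*1e6:4.0f} um   G ~ {g:.2e} K/m")  # gradient varies with depth
```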
NASA Astrophysics Data System (ADS)
Bauer, Johannes; Dávila-Chacón, Jorge; Wermter, Stefan
2015-10-01
Humans and other animals have been shown to perform near-optimally in multi-sensory integration tasks. Probabilistic population codes (PPCs) have been proposed as a mechanism by which optimal integration can be accomplished. Previous approaches have focussed on how neural networks might produce PPCs from sensory input or perform calculations using them, like combining multiple PPCs. Less attention has been given to the question of how the necessary organisation of neurons can arise and how the required knowledge about the input statistics can be learned. In this paper, we propose a model of learning multi-sensory integration based on an unsupervised learning algorithm in which an artificial neural network learns the noise characteristics of each of its sources of input. Our algorithm borrows from the self-organising map the ability to learn latent-variable models of the input and extends it to learning to produce a PPC approximating a probability density function over the latent variable behind its (noisy) input. The neurons in our network are only required to perform simple calculations and we make few assumptions about input noise properties and tuning functions. We report on a neurorobotic experiment in which we apply our algorithm to multi-sensory integration in a humanoid robot to demonstrate its effectiveness and compare it to human multi-sensory integration on the behavioural level. We also show in simulations that our algorithm performs near-optimally under certain plausible conditions, and that it reproduces important aspects of natural multi-sensory integration on the neural level.
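A stripped-down version of the self-organising-map ingredient might look like the following: units with index-based neighborhoods learn to tile a one-dimensional latent variable from noisy observations. This is a toy sketch of the learning rule only, not the paper's full PPC-producing network; all values are illustrative.

```python
# Toy self-organising-map update: units learn a latent-variable model of a
# noisy one-dimensional input by moving preferred values toward observations,
# weighted by an index-based neighborhood around the best-matching unit.
import numpy as np

rng = np.random.default_rng(1)
n_units = 20
prefs = rng.uniform(0.0, 1.0, n_units)      # preferred values, initially random

def neighborhood(winner, width=2.0):
    idx = np.arange(n_units)
    return np.exp(-0.5 * ((idx - winner) / width) ** 2)

lr = 0.05
for _ in range(5000):
    latent = rng.uniform(0.0, 1.0)           # hidden stimulus value
    obs = latent + rng.normal(0.0, 0.05)     # noisy sensory observation
    winner = np.argmin(np.abs(prefs - obs))  # best-matching unit
    prefs += lr * neighborhood(winner) * (obs - prefs)

print(np.round(np.sort(prefs), 2))           # preferred values tile the input range
```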
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ide, Toshiki; Hofmann, Holger F.; JST-CREST, Graduate School of Advanced Sciences of Matter, Hiroshima University, Kagamiyama 1-3-1, Higashi Hiroshima 739-8530
The information encoded in the polarization of a single photon can be transferred to a remote location by two-channel continuous-variable quantum teleportation. However, the finite entanglement used in the teleportation causes random changes in photon number. If more than one photon appears in the output, the continuous-variable teleportation accidentally produces clones of the original input photon. In this paper, we derive the polarization statistics of the N-photon output components and show that they can be decomposed into an optimal cloning term and completely unpolarized noise. We find that the accidental cloning of the input photon is nearly optimal at experimentally feasible squeezing levels, indicating that the loss of polarization information is partially compensated by the availability of clones.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Haixia; Zhang, Jing
We propose a scheme for continuous-variable quantum cloning of coherent states with phase-conjugate input modes using linear optics. The quantum cloning machine yields M identical optimal clones from N replicas of a coherent state and N replicas of its phase conjugate. This scheme can be straightforwardly implemented with the setups accessible at present since its optical implementation only employs simple linear optical elements and homodyne detection. Compared with the original scheme for continuous-variable quantum cloning with phase-conjugate input modes proposed by Cerf and Iblisdir [Phys. Rev. Lett. 87, 247903 (2001)], which utilized a nondegenerate optical parametric amplifier, our scheme loses the output of phase-conjugate clones and is regarded as irreversible quantum cloning.
Computer program for design analysis of radial-inflow turbines
NASA Technical Reports Server (NTRS)
Glassman, A. J.
1976-01-01
This report documents a FORTRAN computer program for the design analysis of radial-inflow turbines. The following information is included: the loss model (estimation of losses), the analysis equations, a description of the input and output data, the FORTRAN program listing with a list of variables, and sample cases. The input design requirements include the power, mass flow rate, inlet temperature and pressure, and rotational speed. The program output includes various diameters, efficiencies, temperatures, pressures, velocities, and flow angles for the appropriate calculation stations. The design variables include the stator-exit angle, rotor radius ratios, and the rotor-exit tangential velocity distribution. The losses are determined by an internal loss model.
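For orientation, a few of the standard preliminary quantities such a design analysis computes (ideal specific work, dimensionless specific speed, spouting velocity, a first-cut rotor radius) are sketched below with illustrative inputs. The relations are textbook turbomachinery formulas, not the documented program's internals.

```python
# Back-of-the-envelope radial-inflow turbine sizing from the same design
# requirements the program takes (power, mass flow, speed); inputs illustrative.
import math

power = 50e3            # W, required shaft power
mdot = 0.5              # kg/s, mass flow rate
rho = 1.2               # kg/m^3, approximate inlet density
rpm = 60000.0

dh_ideal = power / mdot                 # J/kg, ideal specific work
omega = rpm * 2 * math.pi / 60          # rad/s
Q = mdot / rho                          # m^3/s, volumetric flow

# Dimensionless specific speed: Ns = omega * sqrt(Q) / dh^(3/4)
Ns = omega * math.sqrt(Q) / dh_ideal**0.75
# Spouting velocity and a typical tip-speed ratio of ~0.7 give a rotor radius
c0 = math.sqrt(2 * dh_ideal)
U_tip = 0.7 * c0
r_tip = U_tip / omega

print(f"Ns = {Ns:.2f}, c0 = {c0:.0f} m/s, rotor tip radius = {r_tip*1000:.1f} mm")
```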
Gustafson, William Jr; Vogelmann, Andrew; Endo, Satoshi; Toto, Tami; Xiao, Heng; Li, Zhijin; Cheng, Xiaoping; Kim, Jinwon; Krishna, Bhargavi
2015-08-31
The Alpha 2 release is the second release from the LASSO Pilot Phase and builds upon the Alpha 1 release. Alpha 2 contains additional diagnostics in the data bundles and focuses on cases from spring-summer 2016. A data bundle is a unified package consisting of LASSO LES input and output, observations, evaluation diagnostics, and model skill scores. LES inputs include model configuration information and forcing data. LES outputs include profile statistics and full-domain fields of cloud and environmental variables. Model evaluation data consist of LES output and ARM observations co-registered on the same grid and sampling frequency. Model performance is quantified by skill scores and diagnostics in terms of cloud and environmental variables.
Optimization of a GO2/GH2 Swirl Coaxial Injector Element
NASA Technical Reports Server (NTRS)
Tucker, P. Kevin; Shyy, Wei; Vaidyanathan, Rajkumar
1999-01-01
An injector optimization methodology, method i, is used to investigate optimal design points for a gaseous oxygen/gaseous hydrogen (GO2/GH2) swirl coaxial injector element. The element is optimized in terms of design variables such as fuel pressure drop ΔP_f, oxidizer pressure drop ΔP_o, combustor length L_comb, and full-cone swirl angle θ, for a given mixture ratio and chamber pressure. Dependent variables such as energy release efficiency ERE, wall heat flux Q_w, injector heat flux Q_inj, relative combustor weight W_rel, and relative injector cost C_rel are calculated and then correlated with the design variables. An empirical design methodology is used to generate these responses for 180 combinations of input variables. Method i is then used to generate response surfaces for each dependent variable. Desirability functions based on dependent-variable constraints are created and used to facilitate development of composite response surfaces representing some, or all, of the five dependent variables in terms of the input variables. Two examples illustrating the utility and flexibility of method i are discussed in detail. First, joint response surfaces are constructed by sequentially adding dependent variables. Optimum designs are identified after the addition of each variable, and the effect each variable has on the design is shown. This stepwise demonstration also highlights the importance of including variables such as weight and cost early in the design process. Second, using the composite response surface that includes all five dependent variables, unequal weights are assigned to emphasize certain variables relative to others. Here, method i is used to enable objective trade studies on design issues such as component life and thrust-to-weight ratio.
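The desirability-function step can be sketched as follows: each dependent variable is mapped to a [0, 1] desirability, and the composite is a weighted geometric mean. The targets, bounds, and weights below are hypothetical stand-ins for the paper's constraints.

```python
# Composite desirability sketch: individual desirabilities for each dependent
# variable are combined by a weighted geometric mean, as in response-surface
# trade studies. Targets, bounds and weights are hypothetical.
import numpy as np

def d_smaller_is_better(y, target, upper, s=1.0):
    """1 at/below target, 0 at/above upper bound, power-law in between."""
    d = (upper - y) / (upper - target)
    return np.clip(d, 0.0, 1.0) ** s

# e.g. wall heat flux and relative injector cost at one candidate design
d_qw   = d_smaller_is_better(y=42.0, target=30.0, upper=60.0)
d_cost = d_smaller_is_better(y=1.15, target=1.00, upper=1.50)
d_ere  = 0.92   # suppose energy release efficiency maps to this desirability

weights = np.array([1.0, 1.0, 2.0])          # emphasize ERE over the others
d = np.array([d_qw, d_cost, d_ere])
D = np.prod(d ** weights) ** (1.0 / weights.sum())
print(f"composite desirability D = {D:.3f}")
```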
LIFE CYCLE ASSESSMENT: AN INTERNATIONAL EXPERIENCE
Life Cycle Assessment (LCA) is used to evaluate environmental burdens associated with a product, process or activity by identifying and quantifying relevant inputs and outputs of the defined system and evaluating their potential impacts. This article outlines the four components ...
Approach for Uncertainty Propagation and Robust Design in CFD Using Sensitivity Derivatives
NASA Technical Reports Server (NTRS)
Putko, Michele M.; Newman, Perry A.; Taylor, Arthur C., III; Green, Lawrence L.
2001-01-01
This paper presents an implementation of the approximate statistical moment method for uncertainty propagation and robust optimization for a quasi 1-D Euler CFD (computational fluid dynamics) code. Given uncertainties in statistically independent, random, normally distributed input variables, a first- and second-order statistical moment matching procedure is performed to approximate the uncertainty in the CFD output. Efficient calculation of both first- and second-order sensitivity derivatives is required. In order to assess the validity of the approximations, the moments are compared with statistical moments generated through Monte Carlo simulations. The uncertainties in the CFD input variables are also incorporated into a robust optimization procedure. For this optimization, statistical moments involving first-order sensitivity derivatives appear in the objective function and system constraints. Second-order sensitivity derivatives are used in a gradient-based search to successfully execute a robust optimization. The approximate methods used throughout the analyses are found to be valid when considering robustness about input parameter mean values.
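The first-order moment-matching idea is easy to state: for independent normal inputs, the output variance is approximated by the squared sensitivity derivatives weighted by the input variances. The sketch below checks that approximation against Monte Carlo for a toy function standing in for the CFD code.

```python
# First-order moment matching vs. Monte Carlo for a toy output f(x):
# mean ~ f(mu), variance ~ sum_i (df/dx_i)^2 * sigma_i^2 for independent
# normal inputs. The toy function stands in for the CFD code.
import numpy as np

def f(x):                       # toy "CFD output"
    return x[0] ** 2 + np.sin(x[1]) + 0.5 * x[0] * x[1]

mu = np.array([1.0, 0.5])
sigma = np.array([0.05, 0.1])

# analytic sensitivity derivatives evaluated at the mean
grad = np.array([2 * mu[0] + 0.5 * mu[1], np.cos(mu[1]) + 0.5 * mu[0]])
var_approx = np.sum(grad ** 2 * sigma ** 2)

rng = np.random.default_rng(0)
X = rng.normal(mu, sigma, size=(200_000, 2))
Y = f(X.T)                      # vectorized Monte Carlo evaluation

print(f"approx:      mean={f(mu):.4f}  std={np.sqrt(var_approx):.4f}")
print(f"Monte Carlo: mean={Y.mean():.4f}  std={Y.std():.4f}")
```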
NASA Technical Reports Server (NTRS)
He, Yuning
2015-01-01
Safety of unmanned aerial systems (UAS) is paramount, but the large number of dynamically changing controller parameters makes it hard to determine whether the system is currently stable and, if not, the time before loss of control. We propose a hierarchical statistical model using Treed Gaussian Processes to predict (i) whether a flight will be stable (success) or become unstable (failure), (ii) the time-to-failure if unstable, and (iii) time series outputs for flight variables. We first classify the current flight input into success or failure types, and then use separate models for each class to predict the time-to-failure and time series outputs. As different inputs may cause failures at different times, we have to model variable-length output curves. We use a basis representation for curves and learn the mappings from input to basis coefficients. We demonstrate the effectiveness of our prediction methods on a NASA neuro-adaptive flight control system.
Artificial neural network modeling of dissolved oxygen in the Heihe River, Northwestern China.
Wen, Xiaohu; Fang, Jing; Diao, Meina; Zhang, Chuanqi
2013-05-01
Identification and quantification of dissolved oxygen (DO) profiles of a river is one of the primary concerns for water resources managers. In this research, an artificial neural network (ANN) was developed to simulate DO concentrations in the Heihe River, Northwestern China. A three-layer back-propagation ANN was used with the Bayesian regularization training algorithm. The input variables of the neural network were pH, electrical conductivity, chloride (Cl(-)), calcium (Ca(2+)), total alkalinity, total hardness, nitrate nitrogen (NO3-N), and ammonical nitrogen (NH4-N). The ANN structure with 14 hidden neurons performed best. Comparison between the ANN results and the measured data, on the basis of the correlation coefficient (r) and the root mean square error (RMSE), showed good agreement for DO values and indicated the effectiveness of the neural network model. The r values for the training, validation, and test sets were 0.9654, 0.9841, and 0.9680, and the corresponding RMSE values were 0.4272, 0.3667, and 0.4570, respectively. Sensitivity analysis was used to determine the influence of the input variables on the dependent variable. The most effective inputs were pH, NO3-N, NH4-N, and Ca(2+); Cl(-) was found to be the least effective variable in the proposed model. The identified ANN model can be used to simulate water quality parameters.
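A rough sketch of the setup with scikit-learn follows. Note that sklearn's MLPRegressor offers L2 weight decay rather than true Bayesian regularization, so this is a stand-in for the paper's training algorithm, and synthetic data stands in for the Heihe River measurements.

```python
# Three-layer network with 14 hidden neurons, as in the paper; L2 weight decay
# (alpha) is used here as a stand-in for Bayesian regularization.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 500
# columns stand in for: pH, EC, Cl, Ca, alkalinity, hardness, NO3-N, NH4-N
X = rng.normal(size=(n, 8))
DO = 8 + 0.8 * X[:, 0] + 0.5 * X[:, 6] - 0.6 * X[:, 7] + rng.normal(0, 0.3, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, DO, test_size=0.2, random_state=0)
model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(14,), alpha=1e-2,   # 14 hidden neurons
                 max_iter=5000, random_state=0))
model.fit(X_tr, y_tr)
print("test R^2:", round(model.score(X_te, y_te), 3))
```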
Unsteady Aerodynamic Testing Using the Dynamic Plunge Pitch and Roll Model Mount
NASA Technical Reports Server (NTRS)
Lutze, Frederick H.; Fan, Yigang
1999-01-01
This is the final report on the DyPPiR tests that were run. It consists of two parts: a description of the data reduction techniques, and the results. Three data reduction methods were considered: (1) signal processing of wind-on minus wind-off data; (2) using wind-on data in conjunction with accelerometer measurements; and (3) using a dynamic model of the sting to predict the sting oscillations and determining the aerodynamic inputs through an optimization process. After trying all three, we settled on method 1, mainly because of its simplicity and our confidence in its accuracy. The results section consists of time-history plots of the input variables (angle of attack, roll angle, and/or plunge position) and the corresponding time histories of the output variables, C_L, C_D, C_m, C_l, C_m, C_n. Also included are some phase plots of one or more of the output variables vs. an input variable. Typically of interest is the pitching-moment coefficient vs. angle of attack for an oscillatory motion, where the hysteresis loops can be observed. These plots are useful for determining the "more interesting" cases. Samples of the data as they appear on the disk are presented at the end of the report. The last maneuver, a rolling pull-up, is indicative of the unique capabilities of the DyPPiR, allowing combinations of motions to be exercised at the same time.
QKD Via a Quantum Wavelength Router Using Spatial Soliton
NASA Astrophysics Data System (ADS)
Kouhnavard, M.; Amiri, I. S.; Afroozeh, A.; Jalil, M. A.; Ali, J.; Yupapin, P. P.
2011-05-01
A system for continuous-variable quantum key distribution via a wavelength router is proposed. The Kerr nonlinearity of light in the nonlinear microring resonator (NMRR) induces chaotic behavior. In the proposed system, chaotic signals are generated by an optical soliton or Gaussian pulse within an NMRR system. Parameters such as input power, MRR radii and coupling coefficients can be varied and play an important role in determining the results, in which continuous signals are generated spreading over the spectrum. Large-bandwidth optical-soliton signals are generated by the input pulse propagating within the MRRs, which allows the formation of continuous wavelengths or frequencies with large tunable channel capacity. The continuous-variable QKD is formed by using the localized spatial soliton pulses via a quantum router and networks. The selected optical spatial pulse can be used to perform secure communication over the network. Here the entangled photons generated by chaotic signals are analyzed. Continuous entangled photons are generated by incorporating a polarization control unit into the MRRs, as required to provide continuous-variable QKD. The results show that such a system for simultaneous continuous-variable quantum cryptography could be used in mobile telephone handsets and networks. In this study, frequency bands of 500 MHz and 2.0 GHz and wavelengths of 775 nm, 2,325 nm and 1.55 μm are obtained for QKD use with an input optical soliton and a Gaussian beam, respectively.
The National Cancer Institute (NCI) is seeking input from the community on identifying priorities regarding support for innovative technology development for cancer-relevant research. While the NCI provides support for technology development through a variety of mechanisms, it is important to understand whether these are sufficient for catalyzing and supporting the development of tools with significant potential for advancing important fields of cancer research or clinical care.
Determination of Altitude Sickness Risk (DASR) User’s Guide for Apple Mobile Devices
2015-06-01
[Recovered fragments of the report body: Section 2, "DASR Inputs" - to launch DASR, tap the DASR icon on the device start screen; Fig. 4 shows the Individual Factors view (including risk due to sleep deprivation), with TB MED 505 to be consulted for additional details; cited references include Army medical publications (http://armypubs.army.mil/med/index.html) and Sauter, D., "Android smartphone relevance to military weather applications," White Sands Missile Range (NM): Army Research Laboratory.]
Basic primitives for molecular diagram sketching
2010-01-01
A collection of primitive operations for molecular diagram sketching has been developed. These primitives constitute a concise set of operations which can be used to construct publication-quality 2D coordinates for molecular structures using a bare minimum of input bandwidth. The input requirements for each primitive consist of a small number of discrete choices, which means that these primitives can form the basis of a user interface that does not require an accurate pointing device. This is particularly relevant to software designed for contemporary mobile platforms. The reduction of input bandwidth is accomplished by using algorithmic methods for anticipating probable geometries during the sketching process, and by intelligent use of template grafting. The algorithms and their uses are described in detail. PMID:20923555
The impact of 14nm photomask variability and uncertainty on computational lithography solutions
NASA Astrophysics Data System (ADS)
Sturtevant, John; Tejnil, Edita; Buck, Peter D.; Schulze, Steffen; Kalk, Franklin; Nakagawa, Kent; Ning, Guoxiang; Ackmann, Paul; Gans, Fritz; Buergel, Christian
2013-09-01
Computational lithography solutions rely upon accurate process models to faithfully represent the imaging system output for a defined set of process and design inputs. These models rely upon the accurate representation of multiple parameters associated with the scanner and the photomask. Many input variables for simulation are based upon designed or recipe-requested values or independent measurements. It is known, however, that certain measurement methodologies, while precise, can have significant inaccuracies. Additionally, there are known errors associated with the representation of certain system parameters. With shrinking total CD control budgets, appropriate accounting for all sources of error becomes more important, and the cumulative consequence of input errors to the computational lithography model can become significant. In this work, we examine via simulation the impact of errors in the representation of photomask properties, including CD bias, corner rounding, refractive index, thickness, and sidewall angle. The factors that are most critical to represent accurately in the model are cataloged. CD bias values are based on state-of-the-art mask manufacturing data, while changes in the other variables are postulated, highlighting the need for improved metrology and communication between mask and OPC model experts. The simulations ignore the wafer photoresist model and show the sensitivity of predictions to various model inputs associated with the mask. It is shown that the wafer simulations are very dependent upon the 1D/2D representation of the mask and, for 3D, that the mask sidewall angle is a very sensitive factor influencing simulated wafer CD results.
NASA Astrophysics Data System (ADS)
Kamynin, V. L.; Bukharova, T. I.
2017-01-01
We prove estimates of stability with respect to perturbations of the input data for solutions of inverse problems for degenerate parabolic equations with unbounded coefficients. An important feature of these estimates is that the constants in them are written out explicitly in terms of the input data of the problem.
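Schematically, such stability estimates typically take a form like the following (a generic shape for illustration only, not the paper's exact statement or norms):

```latex
% Generic form: if u_1, u_2 solve the inverse problem with input data
% (f_1, \varphi_1) and (f_2, \varphi_2), then
\[
\| u_1 - u_2 \| \le C\bigl(T, \|f_i\|, \|\varphi_i\|\bigr)
  \Bigl( \| f_1 - f_2 \| + \| \varphi_1 - \varphi_2 \| \Bigr),
\]
% with the constant C written out explicitly in terms of the input data.
```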
Sensitivity analysis of radionuclides atmospheric dispersion following the Fukushima accident
NASA Astrophysics Data System (ADS)
Girard, Sylvain; Korsakissok, Irène; Mallet, Vivien
2014-05-01
Atmospheric dispersion models are used in response to accidental releases with two purposes: minimising the population exposure during the accident, and complementing field measurements for the assessment of short- and long-term environmental and sanitary impacts. The predictions of these models are subject to considerable uncertainties of various origins. Notably, input data, such as meteorological fields or estimations of emitted quantities as a function of time, are highly uncertain. The case studied here is the atmospheric release of radionuclides following the Fukushima Daiichi disaster. The model used in this study is Polyphemus/Polair3D, from which derives IRSN's operational long-distance atmospheric dispersion model ldX. A sensitivity analysis was conducted in order to estimate the relative importance of a set of identified uncertainty sources. The complexity of this task was increased by four characteristics shared by most environmental models: high-dimensional inputs; correlated inputs or inputs with complex structures; high-dimensional output; and a multiplicity of purposes that require sophisticated and non-systematic post-processing of the output. The sensitivities of a set of outputs were estimated with the Morris screening method. The input ranking was highly dependent on the considered output. Yet, a few variables, such as the horizontal diffusion coefficient or cloud thickness, were found to have a weak influence on most of them and could be discarded from further studies. The sensitivity analysis procedure was also applied to indicators of the model performance computed on a set of gamma dose rate observations. This original approach is of particular interest since observations could later be used to calibrate the probability distributions of the input variables. Indeed, only the variables that are influential on performance scores are likely to allow for calibration. An indicator based on the time matching of emission peaks was elaborated in order to complement classical statistical scores, which were dominated by deposit dose rates and almost insensitive to lower-atmosphere dose rates. The substantial sensitivity of these performance indicators is auspicious for future calibration attempts and indicates that the simple perturbations used here may be sufficient to represent an essential part of the overall uncertainty.
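The Morris screening step could be reproduced along the following lines with the SALib package (assuming its morris sample/analyze interface); the toy model and the three named inputs are illustrative stand-ins for the dispersion code and its uncertainty sources.

```python
# Morris screening sketch with SALib; mu* ranks input influence and sigma
# flags nonlinearity/interactions. Names, bounds and the model are illustrative.
from SALib.sample.morris import sample
from SALib.analyze.morris import analyze

problem = {
    "num_vars": 3,
    "names": ["horiz_diffusion", "cloud_thickness", "emission_factor"],
    "bounds": [[0.5, 2.0], [0.5, 2.0], [0.1, 10.0]],
}

X = sample(problem, N=100, num_levels=4)

def model(x):                      # toy scalar output (e.g. a dose-rate score)
    return x[:, 2] ** 0.8 * (1 + 0.05 * x[:, 0]) / (1 + 0.2 * x[:, 1])

Y = model(X)
Si = analyze(problem, X, Y, num_levels=4)
for name, mu_star, sigma in zip(problem["names"], Si["mu_star"], Si["sigma"]):
    print(f"{name:18s} mu* = {mu_star:6.3f}  sigma = {sigma:6.3f}")
```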
Forecasting of cyanobacterial density in Torrão reservoir using artificial neural networks.
Torres, Rita; Pereira, Elisa; Vasconcelos, Vítor; Teles, Luís Oliva
2011-06-01
The ability of general regression neural networks (GRNN) to forecast the density of cyanobacteria in the Torrão reservoir (Tâmega river, Portugal) over a period of 15 days, based on three years of collected physical and chemical data, was assessed. Several models were developed, and 176 were selected based on their correlation values for the verification series. A time lag of 11 was used, equivalent to one sample (periods of 15 days in the summer and 30 days in the winter). Several combinations of the series were used. Input and output data collected from three depths of the reservoir were applied (surface, euphotic zone limit and bottom). The model with the highest average correlation value yielded correlations of 0.991, 0.843 and 0.978 for the training, verification and test series, respectively. This model had the three series independent in time: first the test series, then the verification series and, finally, the training series. Only six input variables were considered significant to the performance of this model: ammonia, phosphates, dissolved oxygen, water temperature, pH and water evaporation, i.e., physical and chemical parameters referring to the three depths of the reservoir. These variables are common to the next four best models produced and, although these included other input variables, their performance was not better than that of the selected best model.
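A GRNN is essentially Nadaraya-Watson kernel regression: the forecast is a Gaussian-kernel-weighted average of the training targets. A minimal sketch follows, with synthetic data in place of the reservoir series and an illustrative smoothing parameter.

```python
# Minimal general regression neural network (Nadaraya-Watson form): the
# prediction is a kernel-weighted average of training targets.
import numpy as np

def grnn_predict(X_train, y_train, X_new, sigma=0.5):
    preds = []
    for x in np.atleast_2d(X_new):
        d2 = np.sum((X_train - x) ** 2, axis=1)     # squared distances
        w = np.exp(-d2 / (2 * sigma ** 2))          # Gaussian kernel weights
        preds.append(np.sum(w * y_train) / np.sum(w))
    return np.array(preds)

rng = np.random.default_rng(0)
X = rng.random((100, 6))          # six inputs: NH3, PO4, DO, temp, pH, evaporation
y = 3 * X[:, 0] - 2 * X[:, 2] + rng.normal(0, 0.1, 100)  # toy cyanobacteria proxy
print(grnn_predict(X, y, X[:3], sigma=0.3))   # compare with y[:3]
print(y[:3])
```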
Poisson process stimulation of an excitable membrane cable model.
Goldfinger, M D
1986-01-01
The convergence of multiple inputs within a single neuronal substrate is a common design feature of both peripheral and central nervous systems. Typically, the result of such convergence impinges upon an intracellularly contiguous axon, where it is encoded into a train of action potentials. The simplest representation of the result of convergence of multiple inputs is a Poisson process; a general representation of axonal excitability is the Hodgkin-Huxley/cable theory formalism. The present work addressed the convergence of multiple inputs upon an axon by applying Poisson-process stimulation to the Hodgkin-Huxley axonal cable. The results showed that both absolute and relative refractory periods yielded an axonal output that was random but non-Poisson. While smaller-amplitude stimuli elicited a type of short-interval conditioning, larger-amplitude stimuli elicited impulse trains approaching Poisson criteria except for the effects of refractoriness. These results were obtained for stimulus trains consisting of pulses of constant amplitude and constant or variable durations. By contrast, with or without stimulus pulse-shape variability, the post-impulse conditional probability for impulse initiation in the steady state was a Poisson-like process. For stimulus variability consisting of randomly smaller amplitudes or randomly longer durations, mean impulse frequency was attenuated or potentiated, respectively. Limitations and implications of these computations are discussed. PMID:3730505
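Generating the Poisson stimulus train itself is straightforward: exponential inter-arrival times at the chosen rate, with constant amplitude and fixed or randomly drawn durations. A sketch with illustrative parameters follows (the Hodgkin-Huxley cable solver itself is omitted).

```python
# Poisson-process stimulus train: exponential inter-pulse intervals at a given
# rate, constant pulse amplitude, fixed or randomly drawn pulse durations.
import numpy as np

rng = np.random.default_rng(42)

def poisson_pulse_train(rate_hz, t_max_s, amp, dur_ms=0.5, jitter_dur=False):
    times, t = [], 0.0
    while True:
        t += rng.exponential(1.0 / rate_hz)      # Poisson inter-arrival times
        if t >= t_max_s:
            break
        times.append(t)
    durs = (rng.uniform(0.5, 2.0, len(times)) * dur_ms if jitter_dur
            else np.full(len(times), dur_ms))
    return np.array(times), durs, amp

times, durs, amp = poisson_pulse_train(rate_hz=200.0, t_max_s=1.0, amp=5.0)
print(len(times), "pulses; first intervals:",
      np.round(np.diff(times[:5]) * 1e3, 2), "ms")
```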
Constructing general partial differential equations using polynomial and neural networks.
Zjavka, Ladislav; Pedrycz, Witold
2016-01-01
Sum fraction terms can approximate multi-variable functions on the basis of discrete observations, replacing a partial differential equation definition with polynomial elementary data-relation descriptions. Artificial neural networks commonly transform the weighted sum of inputs to describe overall similarity relationships of trained and new testing input patterns. Differential polynomial neural networks form a new class of neural networks, which construct and solve an unknown general partial differential equation of a function of interest with selected substitution relative terms using non-linear multi-variable composite polynomials. The layers of the network generate simple and composite relative substitution terms whose convergent series combinations can describe partial dependent derivative changes of the input variables. This regression is based on trained generalized partial derivative data relations, decomposed into a multi-layer polynomial network structure. The sigmoidal function, commonly used as a nonlinear activation of artificial neurons, may transform some polynomial items together with the parameters, with the aim of improving the ability of the polynomial derivative term series to approximate complicated periodic functions, since simple low-order polynomials cannot fully capture complete cycles. The similarity analysis facilitates substitutions for differential equations or can form dimensional units from data samples to describe real-world problems. Copyright © 2015 Elsevier Ltd. All rights reserved.
Fuzzy set approach to quality function deployment: An investigation
NASA Technical Reports Server (NTRS)
Masud, Abu S. M.
1992-01-01
The final report of the 1992 NASA/ASEE Summer Faculty Fellowship at the Space Exploration Initiative Office (SEIO) at Langley Research Center is presented. Quality Function Deployment (QFD) is a process focused on facilitating the integration of the customer's voice in the design and development of a product or service. Various inputs, in the form of judgements and evaluations, are required during the QFD analyses. All the input variables in these analyses are treated as numeric variables. The purpose of the research was to investigate how QFD analyses can be performed when some or all of the input variables are treated as linguistic variables with values expressed as fuzzy numbers. The reason for this consideration is that human judgement, perception, and cognition are often ambiguous and are better represented as fuzzy numbers. Two approaches for using fuzzy sets in QFD have been proposed. In both cases, all the input variables are considered linguistic variables with values indicated as linguistic expressions. These expressions are then converted to fuzzy numbers. The difference between the two approaches lies in how the QFD computations are performed with these fuzzy numbers. In Approach 1, the fuzzy numbers are first converted to their equivalent crisp scores and then the QFD computations are performed using these crisp scores. As a result, the outputs of this approach are crisp numbers, similar to those in traditional QFD. In Approach 2, all the QFD computations are performed with the fuzzy numbers, and the outputs are fuzzy numbers as well. Both approaches have been explained with the help of illustrative examples of QFD application. Approach 2 has also been applied in a QFD application exercise at SEIO, involving a 'mini moon rover' design. The mini moon rover is a proposed tele-operated vehicle that will traverse and perform various tasks, including autonomous operations, on the moon's surface. The output of the moon rover application exercise is a ranking of the rover functions so that a subset of these functions can be targeted for design improvement. The illustrative examples and the mini rover application exercise confirm that the proposed approaches for using fuzzy sets in QFD are viable. However, further research is needed to study the various issues involved and to verify/validate the methods proposed.
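Approach 1 can be sketched in a few lines: linguistic ratings map to triangular fuzzy numbers, which are defuzzified to crisp scores before the usual QFD importance computation. The linguistic scale and the centroid defuzzification below are illustrative choices, not necessarily those of the report.

```python
# Approach 1 sketch: linguistic ratings -> triangular fuzzy numbers (a, b, c)
# -> crisp scores (centroid defuzzification) -> standard QFD computation.
TFN = {          # illustrative triangular fuzzy numbers for linguistic values
    "low":    (0.0, 0.1, 0.3),
    "medium": (0.3, 0.5, 0.7),
    "high":   (0.7, 0.9, 1.0),
}

def crisp(tfn):
    a, b, c = tfn
    return (a + b + c) / 3.0     # centroid of a triangular fuzzy number

# customer importance x relationship strength, summed per technical attribute
customer_importance = ["high", "medium"]
relationships = [            # rows: customer needs; cols: technical attributes
    ["high", "low"],
    ["medium", "high"],
]
scores = [
    sum(crisp(TFN[customer_importance[i]]) * crisp(TFN[relationships[i][j]])
        for i in range(2))
    for j in range(2)
]
print([round(s, 3) for s in scores])   # crisp technical-attribute priorities
```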
NASA Astrophysics Data System (ADS)
Felkner, John Sames
The scale and extent of global land use change is massive, and it has potentially powerful effects on the global climate and global atmospheric composition (Turner & Meyer, 1994). Because of this tremendous change and impact, there is an urgent need for quantitative, empirical models of land use change, especially predictive models with an ability to capture the trajectories of change (Agarwal, Green, Grove, Evans, & Schweik, 2000; Lambin et al., 1999). For this research, a spatial statistical predictive model of land use change was created and run in two provinces of Thailand. The model utilized an extensive spatial database and used a classification tree approach (Breiman, Friedman, Olshen, & Stone, 1984) for explanatory model creation and future land use prediction. Eight input variables were used, and the trees were run on a dependent variable of land use change measured from 1979 to 1989 using classified satellite imagery. The derived tree models were used to create probability-of-change surfaces, and these were then used to create predicted land cover maps for 1999. These predicted 1999 maps were compared with actual 1999 land cover derived from 1999 Landsat 7 imagery. The primary research hypothesis was that an explanatory model using both economic and environmental input variables would better predict future land use change than would either a model using only economic variables or a model using only environmental variables. Thus, the eight input variables included four economic and four environmental variables. The results indicated a very slight superiority of the full models in predicting future agricultural change and future deforestation, but a slight superiority of the economic models in predicting future built change. However, the margins of superiority were too small to be statistically significant. The resulting tree structures were used, however, to derive a series of principles or "rules" governing land use change in both provinces. The model was able to predict future land use, given a series of assumptions, with 90 percent overall accuracy. The model can be used in other developing or developed country locations for future land use prediction, determination of future threatened areas, or to derive "rules" or principles driving land use change.
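In the spirit of the dissertation's method, the sketch below fits a classification tree to synthetic economic and environmental predictors of observed change and reads off predicted probabilities as a probability-of-change surface. Variable names and data are invented for illustration.

```python
# Classification-tree sketch: fit a tree on predictors of observed change,
# then use predicted probabilities as a probability-of-change surface.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
n = 5000
X = np.column_stack([
    rng.random(n),            # distance to road (economic)
    rng.random(n),            # distance to market (economic)
    rng.random(n),            # slope (environmental)
    rng.random(n),            # soil quality (environmental)
])
# synthetic rule: change is likelier near roads, on gentle slopes, good soils
p = 1 / (1 + np.exp(-(2 - 4 * X[:, 0] - 2 * X[:, 2] + 2 * X[:, 3])))
y = rng.random(n) < p         # "observed" change 1979-1989 (synthetic)

tree = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X, y)
prob_change = tree.predict_proba(X)[:, 1]    # probability-of-change surface
print("mean predicted change probability:", prob_change.mean().round(3))
```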
Hülsheger, Ute R; Anderson, Neil; Salgado, Jesus F
2009-09-01
This article presents a meta-analysis of team-level antecedents of creativity and innovation in the workplace. Using a general input-process-output model, the authors examined 15 team-level variables researched in primary studies published over the last 30 years and their relation to creativity and innovation. An exhaustive search of the international innovation literature resulted in a final sample (k) of 104 independent studies. Results revealed that the team process variables of support for innovation, vision, task orientation, and external communication displayed the strongest relationships with creativity and innovation (ρ between 0.4 and 0.5). Input variables (i.e., team composition and structure) showed weaker effect sizes. Moderator analyses confirmed that relationships differ substantially depending on measurement method (self-ratings vs. independent ratings of innovation) and measurement level (individual vs. team innovation). Team variables displayed considerably stronger relationships with self-report measures of innovation compared with independent ratings and objective criteria. Team process variables were more strongly related to creativity and innovation measured at the team level than at the individual level. Implications for future research and pragmatic ramifications for organizational practice are discussed in conclusion.
Fleury, Marie-Josée; Grenier, Guy; Bamvita, Jean-Marie; Chiocchio, François
2018-06-01
Using a structural analysis, this study examines the relationship between job satisfaction among 315 mental health professionals from the province of Quebec (Canada) and a wide range of variables related to provider characteristics, team characteristics, processes, and emergent states, and organizational culture. We used the Job Satisfaction Survey to assess job satisfaction. Our conceptual framework integrated numerous independent variables adapted from the input-mediator-output-input (IMOI) model and the Integrated Team Effectiveness Model (ITEM). The structural equation model predicted 47% of the variance of job satisfaction. Job satisfaction was associated with eight variables: strong team support, participation in the decision-making process, closer collaboration, fewer conflicts among team members, modest knowledge production (team processes), firm affective commitment, multifocal identification (emergent states) and belonging to the nursing profession (provider characteristics). Team climate had an impact on six job satisfaction variables (team support, knowledge production, conflicts, affective commitment, collaboration, and multifocal identification). Results show that team processes and emergent states were mediators between job satisfaction and team climate. To increase job satisfaction among professionals, health managers need to pursue strategies that foster a positive climate within mental health teams.
Matrix completion by deep matrix factorization.
Fan, Jicong; Cheng, Jieyu
2018-02-01
Conventional methods of matrix completion are linear methods that are not effective in handling data of nonlinear structures. Recently a few researchers attempted to incorporate nonlinear techniques into matrix completion, but considerable limitations still exist. In this paper, a novel method called deep matrix factorization (DMF) is proposed for nonlinear matrix completion. Different from conventional matrix completion methods that are based on linear latent variable models, DMF is based on a nonlinear latent variable model. DMF is formulated as a deep-structure neural network, in which the inputs are the low-dimensional unknown latent variables and the outputs are the partially observed variables. In DMF, the inputs and the parameters of the multilayer neural network are simultaneously optimized to minimize the reconstruction errors for the observed entries. Then the missing entries can be readily recovered by propagating the latent variables to the output layer. DMF is compared with state-of-the-art methods of linear and nonlinear matrix completion in the tasks of toy matrix completion, image inpainting and collaborative filtering. The experimental results verify that DMF is able to provide higher matrix completion accuracy than existing methods do and that DMF is applicable to large matrices. Copyright © 2017 Elsevier Ltd. All rights reserved.
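A minimal sketch of the DMF idea in PyTorch follows: the latent inputs are free parameters optimized jointly with a small decoder network against the observed entries only, after which missing entries are read off the decoder output. Sizes, architecture, and hyperparameters are illustrative, not the paper's.

```python
# Deep matrix factorization sketch: latent variables Z are free parameters,
# a decoder maps them to the observed columns, and both are trained on the
# observed entries only; missing entries are then read off the reconstruction.
import torch

m, n, d = 100, 30, 4                       # rows, columns, latent dimension
X = torch.randn(m, d) @ torch.randn(d, n)  # synthetic full matrix
mask = torch.rand(m, n) < 0.5              # observed-entry mask

Z = torch.randn(m, d, requires_grad=True)  # latent inputs, one per row
decoder = torch.nn.Sequential(
    torch.nn.Linear(d, 32), torch.nn.Tanh(), torch.nn.Linear(32, n))

opt = torch.optim.Adam([Z, *decoder.parameters()], lr=1e-2)
for step in range(2000):
    opt.zero_grad()
    loss = ((decoder(Z) - X)[mask] ** 2).mean()   # error on observed entries
    loss.backward()
    opt.step()

X_hat = decoder(Z).detach()                 # recovered matrix
print("RMSE on missing entries:",
      ((X_hat - X)[~mask] ** 2).mean().sqrt().item())
```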
Influence of estuarine processes on spatiotemporal variation in bioavailable selenium
Stewart, Robin; Luoma, Samuel N.; Elrick, Kent A.; Carter, James L.; van der Wegen, Mick
2013-01-01
Dynamic processes (physical, chemical and biological) challenge our ability to quantify and manage the ecological risk of chemical contaminants in estuarine environments. Selenium (Se) bioavailability (defined by bioaccumulation), stable isotopes and molar carbon-to-nitrogen ratios in the benthic clam Potamocorbula amurensis, an important food source for predators, were determined monthly for 17 yr in northern San Francisco Bay. Se concentrations in the clams ranged from a low of 2 to a high of 22 μg g-1 over space and time. Little of that variability was stochastic, however. Statistical analyses and preliminary hydrodynamic modeling showed that a constant mid-estuarine input of Se, which was dispersed up- and down-estuary by tidal currents, explained the general spatial patterns in accumulated Se among stations. Regression of Se bioavailability against river inflows suggested that processes driven by inflows were the primary driver of seasonal variability. River inflow also appeared to explain interannual variability, but within the range of Se enrichment established at each station by source inputs. Evaluation of risks from Se contamination in estuaries requires the consideration of spatial and temporal variability on multiple scales and of the processes that drive that variability.
van der Zijden, A M; Groen, B E; Tanck, E; Nienhuis, B; Verdonschot, N; Weerdesteyn, V
2017-03-21
Many research groups have studied fall impact mechanics to understand how fall severity can be reduced to prevent hip fractures. Yet, direct impact force measurements with force plates are restricted to a very limited repertoire of experimental falls. The purpose of this study was to develop a generic model for estimating hip impact forces (i.e. fall severity) in in vivo sideways falls without the use of force plates. Twelve experienced judokas performed sideways Martial Arts (MA) and Block ('natural') falls on a force plate, both with and without a mat on top. Data were analyzed to determine the hip impact force and to derive 11 selected (subject-specific and kinematic) variables. Falls from kneeling height were used to perform a stepwise regression procedure to assess the effects of these input variables and build the model. The final model includes four input variables, involving one subject-specific measure and three kinematic variables: maximum upper body deceleration, body mass, shoulder angle at the instant of 'maximum impact' and maximum hip deceleration. The results showed that estimated and measured hip impact forces were linearly related (explained variances ranging from 46 to 63%). Hip impact forces of MA falls onto the mat from a standing position (3650±916N) estimated by the final model were comparable with measured values (3698±689N), even though these data were not used for training the model. In conclusion, a generic linear regression model was developed that enables the assessment of fall severity through kinematic measures of sideways falls, without using force plates. Copyright © 2017 Elsevier Ltd. All rights reserved.
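The final model's form, a linear regression of hip impact force on four inputs, can be sketched as follows on synthetic data; the coefficients and values are invented and not those of the study.

```python
# Linear-regression sketch of the four-input hip impact force model;
# synthetic fall data, invented coefficients.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 60
X = np.column_stack([
    rng.uniform(20, 60, n),     # max upper-body deceleration [m/s^2]
    rng.uniform(55, 95, n),     # body mass [kg]
    rng.uniform(0, 90, n),      # shoulder angle at 'maximum impact' [deg]
    rng.uniform(20, 80, n),     # max hip deceleration [m/s^2]
])
F = (40 * X[:, 0] + 25 * X[:, 1] + 5 * X[:, 2] + 10 * X[:, 3]
     + rng.normal(0, 300, n))   # synthetic hip impact force [N]

model = LinearRegression().fit(X, F)
print("R^2:", round(model.score(X, F), 2))
print("predicted hip impact force [N]:", model.predict(X[:1]).round())
```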
[Multivariate Adaptive Regression Splines (MARS), an alternative for the analysis of time series].
Vanegas, Jairo; Vásquez, Fabián
Multivariate Adaptive Regression Splines (MARS) is a non-parametric modelling method that extends the linear model, incorporating nonlinearities and interactions between variables. It is a flexible tool that automates the construction of predictive models: selecting relevant variables, transforming the predictor variables, processing missing values and preventing overfitting by means of a self-test. It is also able to predict, taking into account structural factors that might influence the outcome variable, thereby generating hypothetical models. The end result can identify relevant cut-off points in data series. The method is rarely used in health research, so it is proposed here as a tool for the evaluation of relevant public health indicators. For demonstration purposes, data series on the mortality of children under 5 years of age in Costa Rica, covering the period 1978-2008, were used. Copyright © 2016 SESPAS. Published by Elsevier España, S.L.U. All rights reserved.
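MARS builds its models from hinge basis functions max(0, x - t) and max(0, t - x). The following toy fit shows how a knot at a candidate cut-off point captures a change of slope in a series; the data and knot are illustrative, not the Costa Rican series.

```python
# Minimal MARS-style fit: a pair of hinge basis functions at a candidate knot,
# solved by least squares, recovers the slopes on each side of a cut-off point.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(1978, 2008, 31)                  # year
y = np.where(x < 1990, 60 - 2 * (x - 1978), 36 - 0.5 * (x - 1990))
y = y + rng.normal(0, 1.0, x.size)               # synthetic mortality series

t = 1990.0                                        # candidate knot (cut-off point)
B = np.column_stack([np.ones_like(x),
                     np.maximum(0, x - t),        # hinge active after the knot
                     np.maximum(0, t - x)])       # hinge active before the knot
coef, *_ = np.linalg.lstsq(B, y, rcond=None)
print("intercept and hinge slopes:", coef.round(2))   # ~[36, -0.5, 2]
```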
Harvey, Catherine; Stanton, Neville A; Pickering, Carl A; McDonald, Mike; Zheng, Pengjun
2011-07-01
In-vehicle information systems (IVIS) can be controlled by the user via direct or indirect input devices. In order to develop the next generation of usable IVIS, designers need to be able to evaluate and understand the usability issues associated with these two input types. The aim of this study was to investigate the effectiveness of a set of empirical usability evaluation methods for identifying important usability issues and distinguishing between the IVIS input devices. A number of usability issues were identified and their causal factors have been explored. These were related to the input type, the structure of the menu/tasks and hardware issues. In particular, the translation between inputs and on-screen actions and a lack of visual feedback for menu navigation resulted in lower levels of usability for the indirect device. This information will be useful in informing the design of new IVIS, with improved usability. STATEMENT OF RELEVANCE: This paper examines the use of empirical methods for distinguishing between direct and indirect IVIS input devices and identifying usability issues. Results have shown that the characteristics of indirect input devices produce more serious usability issues, compared with direct devices and can have a negative effect on the driver-vehicle interaction.
García-de-León-Chocano, Ricardo; Muñoz-Soler, Verónica; Sáez, Carlos; García-de-León-González, Ricardo; García-Gómez, Juan M
2016-04-01
This is the second in a series of two papers regarding the construction of data quality (DQ) assured repositories, based on population data from Electronic Health Records (EHR), for the reuse of information on infant feeding from birth until the age of two. This second paper describes the application of the computational process of constructing the first quality-assured repository for the reuse of information on infant feeding in the perinatal period, with the aim of studying relevant questions from the Baby Friendly Hospital Initiative (BFHI) and monitoring its deployment in our hospital. The construction of the repository was carried out using 13 semi-automated procedures to assess, recover or discard clinical data. The initial information consisted of perinatal forms from EHR related to 2048 births (Facts of Study, FoS) between 2009 and 2011, with a total of 433,308 observations of 223 variables. DQ was measured before and after the procedures using metrics related to eight quality dimensions: predictive value, correctness, duplication, consistency, completeness, contextualization, temporal-stability, and spatial-stability. Once the predictive variables were selected and DQ was assured, the final repository consisted of 1925 births, 107,529 observations and 73 quality-assured variables. The discarded observations mainly correspond to non-predictive variables (52.90%) and to the impact of the de-duplication process (20.58%), relative to the total input data. Seven out of thirteen procedures achieved 100% of valid births, observations and variables. Moreover, 89% of births and ~98% of observations were consistent according to the experts' criteria. A multidisciplinary approach along with the quantification of DQ has allowed us to construct the first repository about infant feeding in the perinatal period based on EHR population data. Copyright © 2016 Elsevier Ltd. All rights reserved.
Hydrological Modelling the Middle Magdalena Valley (Colombia)
NASA Astrophysics Data System (ADS)
Arenas, M. C.; Duque, N.; Arboleda, P.; Guadagnini, A.; Riva, M.; Donado-Garzon, L. D.
2017-12-01
Distributed hydrological modeling is a key element of any comprehensive assessment of the feedback between the dynamics of the hydrological cycle, climate conditions and land use. Such modeling results are markedly relevant in the fields of water resources management, natural hazards and the oil and gas industry. Here, we employ TopModel (TOPography based hydrological MODEL) for the hydrological modeling of an area in the Middle Magdalena Valley (MMV), a tropical basin located in Colombia. The study area lies within the intertropical convergence zone and is characterized by special meteorological conditions, with fast water fluxes throughout the year. It has been subject to significant land use changes as a result of intense economic activities, e.g., agriculture, energy and oil & gas production. The model employs a record of 12 years of daily precipitation and evapotranspiration data as inputs. Streamflow data monitored across the same time frame are used for model calibration, which is performed by considering data from 2000 to 2008. Model validation then relies on observations from 2009 to 2012. The robustness of our analyses is assessed through the Nash-Sutcliffe coefficient (values of this metric being 0.62 and 0.53, respectively, for model calibration and validation). Our results reveal high water storage capacity in the soil and marked subsurface runoff, consistent with the characteristics of the soil types in the region. A significant influence of the topographical factors represented in the model on the runoff response of the basin is also evidenced. Our calibrated model provides relevant indications about recharge in the region, which is important to quantify the interaction between surface water and groundwater, especially during the dry season; this interaction becomes even more relevant under climate-change and climate-variability scenarios.
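The Nash-Sutcliffe coefficient used here for calibration and validation has a standard definition, NSE = 1 - Σ(Qobs - Qsim)² / Σ(Qobs - mean(Qobs))². A minimal sketch of its computation follows; the streamflow series are synthetic placeholders, not the MMV data:

```python
import numpy as np

def nash_sutcliffe(observed, simulated):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit; 0 means no better than the observed mean."""
    observed = np.asarray(observed, dtype=float)
    simulated = np.asarray(simulated, dtype=float)
    return 1.0 - np.sum((observed - simulated) ** 2) / np.sum((observed - observed.mean()) ** 2)

# Synthetic daily streamflow series (placeholders)
obs = np.array([10.0, 12.0, 9.0, 14.0, 11.0])
sim = np.array([9.5, 12.5, 9.2, 13.0, 11.4])
print(nash_sutcliffe(obs, sim))  # values such as 0.62 (calibration) indicate a reasonable fit
```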
New control concepts for uncertain water resources systems: 1. Theory
NASA Astrophysics Data System (ADS)
Georgakakos, Aris P.; Yao, Huaming
1993-06-01
A major complicating factor in water resources systems management is the handling of unknown inputs. Stochastic optimization provides a sound mathematical framework but requires that enough data exist to develop statistical input representations. In cases where data records are insufficient (e.g., extreme events) or atypical of future input realizations, stochastic methods are inadequate. This article presents a control approach in which input variables are only expected to belong to certain sets. The objective is to determine sets of admissible control actions guaranteeing that the system will remain within desirable bounds. The solution is based on dynamic programming and is derived for the case where all sets are convex polyhedra. A companion paper (Yao and Georgakakos, this issue) addresses specific applications and problems in relation to reservoir system management.
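A standard backward recursion of this kind (the notation here is generic, not necessarily the authors' own) determines, for each stage k, the set of states from which admissible controls can keep the system within bounds for every admissible input:

\[
X_k = \left\{ x \;:\; \exists\, u \in U_k \ \text{such that}\ f_k(x, u, w) \in X_{k+1} \ \ \forall\, w \in W_k \right\},
\]

starting from the set X_N of desirable terminal states. When the dynamics are linear and the sets U_k, W_k and X_N are convex polyhedra, each step reduces to operations on systems of linear inequalities, which is what makes the polyhedral case computationally tractable.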
Dynamic Time Expansion and Compression Using Nonlinear Waveguides
Findikoglu, Alp T.; Hahn, Sangkoo F.; Jia, Quanxi
2004-06-22
Dynamic time expansion or compression of a small-amplitude input signal generated with an initial time scale is performed using a nonlinear waveguide. A nonlinear waveguide having a variable refractive index is connected to a bias voltage source whose bias signal amplitude is large relative to the input signal, to vary the refractive index and the concomitant speed of propagation in the nonlinear waveguide, and to an electrical circuit for applying the small-amplitude signal and the large-amplitude bias signal simultaneously to the nonlinear waveguide. The large-amplitude bias signal, applied together with the input signal, alters the speed of propagation of the small-amplitude signal with time in the nonlinear waveguide, expanding or contracting the initial time scale of the small-amplitude input signal.
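In a simplified, dispersionless picture (our notation, not the patent's), the mechanism can be summarized as follows: the bias-controlled refractive index sets the propagation delay through a waveguide of length L,

\[
\tau(t) = \frac{L\, n\!\left(V_{\text{bias}}(t)\right)}{c}, \qquad t_{\text{out}} = t + \tau(t),
\]

so a bias that raises the index over the duration of the signal delays the trailing edge more than the leading edge and expands the time scale, while a falling index compresses it.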
Spatial uncertainty analysis: Propagation of interpolation errors in spatially distributed models
Phillips, D.L.; Marks, D.G.
1996-01-01
In simulation modelling, it is desirable to quantify model uncertainties and provide not only point estimates for output variables but confidence intervals as well. Spatially distributed physical and ecological process models are becoming widely used, with runs being made over a grid of points that represent the landscape. This requires input values at each grid point, which often have to be interpolated from irregularly scattered measurement sites, e.g., weather stations. Interpolation introduces spatially varying errors which propagate through the model. We extended established uncertainty analysis methods to a spatial domain for quantifying spatial patterns of input variable interpolation errors and how they propagate through a model to affect the uncertainty of the model output. We applied this to a model of potential evapotranspiration (PET) as a demonstration. We modelled PET for three time periods in 1990 as a function of temperature, humidity, and wind on a 10-km grid across the U.S. portion of the Columbia River Basin. Temperature, humidity, and wind speed were interpolated using kriging from 700-1000 supporting data points. Kriging standard deviations (SD) were used to quantify the spatially varying interpolation uncertainties. For each of 5693 grid points, 100 Monte Carlo simulations were done, using the kriged values of temperature, humidity, and wind, plus random error terms determined by the kriging SDs and the correlations of interpolation errors among the three variables. For the spring season example, kriging SDs averaged 2.6 °C for temperature, 8.7% for relative humidity, and 0.38 m/s for wind. The resultant PET estimates had coefficients of variation (CVs) ranging from 14% to 27% for the 10-km grid cells. Maps of PET means and CVs showed the spatial patterns of PET with a measure of its uncertainty due to interpolation of the input variables. This methodology should be applicable to a variety of spatially distributed models using interpolated inputs.
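A minimal sketch of the per-grid-cell Monte Carlo step described above, assuming a placeholder PET relation and illustrative error correlations (the study's actual PET model, kriged means and correlation values are not reproduced here; only the spring-season SDs are taken from the text):

```python
import numpy as np

rng = np.random.default_rng(42)

def pet_model(temp_c, rh_pct, wind_ms):
    # Placeholder PET relation for illustration only; not the study's model.
    return np.maximum(0.0, 0.3 * temp_c - 0.05 * rh_pct + 0.8 * wind_ms)

# Kriged estimates (illustrative) and kriging SDs (spring example) at one grid cell
mean = np.array([12.0, 55.0, 3.0])       # temperature (degC), RH (%), wind (m/s)
sd = np.array([2.6, 8.7, 0.38])          # kriging standard deviations
corr = np.array([[ 1.0, -0.4,  0.1],     # assumed inter-variable error correlations
                 [-0.4,  1.0, -0.2],
                 [ 0.1, -0.2,  1.0]])
cov = corr * np.outer(sd, sd)

draws = rng.multivariate_normal(mean, cov, size=100)  # 100 Monte Carlo realizations
pet = pet_model(draws[:, 0], draws[:, 1], draws[:, 2])
cv = pet.std(ddof=1) / pet.mean()                     # coefficient of variation
print(f"PET mean {pet.mean():.2f}, CV {100 * cv:.1f}%")
```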
Zippo, Antonio G.; Biella, Gabriele E. M.
2015-01-01
Current developments in neuronal physiology are unveiling novel roles for dendrites. Experiments have shown mechanisms of non-linear, NMDA-dependent synaptic activation able to discriminate input patterns through the waveforms of the excitatory postsynaptic potentials. Contextually, the synaptic clustering of inputs is the principal cellular strategy for separating groups of common correlated inputs. Dendritic branches appear to work as independent discriminating units of inputs, potentially reflecting an extraordinary repertoire of pattern memories. However, it is unclear how these observations could impact our comprehension of the structural correlates of memory at the cellular level. This work investigates the discrimination capabilities of neurons through computational biophysical models in order to extract a predictive rule for the dendritic input discrimination capability (M). By this rule we compared neurons from a neuron reconstruction repository (neuromorpho.org). The comparisons showed that primate neurons did not exhibit a corresponding preeminence in M and that M is not uniformly distributed among neuron types. Remarkably, neocortical neurons had substantially less memory capacity than those from non-cortical regions. In conclusion, the proposed rule predicts the inherent spatial memory of neurons, gathering potentially relevant anatomical and evolutionary considerations about the brain cytoarchitecture. PMID:26100354
High frequency inductive lamp and power oscillator
Kirkpatrick, Douglas A.; Gitsevich, Aleksandr
2005-09-27
An oscillator includes an amplifier having an input and an output; a feedback network connected between the input and the output of the amplifier, configured to provide suitable positive feedback from the output to the input to initiate and sustain an oscillating condition; and a tuning circuit connected to the input of the amplifier. The tuning circuit is continuously variable and consists of solid-state electrical components with no mechanically adjustable devices, including a pair of diodes connected to each other at their respective cathodes, with a control voltage applied at the junction of the diodes. Another oscillator includes an amplifier having an input and an output; a feedback network connected between the input and the output of the amplifier, likewise configured to provide suitable positive feedback to initiate and sustain an oscillating condition; and transmission lines connected to the input of the amplifier, with an input pad and a perpendicular transmission line extending from the input pad and forming the leg of a resonant "T". The feedback network is coupled to the leg of the resonant "T".
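One common way to read the diode arrangement (an interpretation on our part, not stated explicitly in the patent text) is as a varactor pair: the cathode-to-cathode diodes present a voltage-dependent capacitance C(V_ctrl) to the resonant network, so the oscillation frequency

\[
f_0 = \frac{1}{2\pi \sqrt{L\, C\!\left(V_{\text{ctrl}}\right)}}
\]

can be swept continuously by the control voltage, with no moving parts.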
ERIC Educational Resources Information Center
Mooij, Ton
2015-01-01
Teachers conceptualise and interpret violent behaviour of secondary students in different ways. They also differ in their estimates of the relevance of student and contextual school variables when explaining the severity of violence experienced by students. Research can assist here by explicating the role of different types of contextual school…
ERIC Educational Resources Information Center
Lynn, Richard; Vanhanen, Tatu
2012-01-01
This paper summarizes the results of 244 correlates of national IQs that have been published from 2002 through 2012 and include educational attainment, cognitive output, educational input, per capita income, economic growth, other economic variables, crime, political institutions, health, fertility, sociological variables, and geographic and…
Crown fuel spatial variability and predictability of fire spread
Russell A. Parsons; Jeremy Sauer; Rodman R. Linn
2010-01-01
Fire behavior predictions, as well as measures of uncertainty in those predictions, are essential in operational and strategic fire management decisions. While it is becoming common practice to assess uncertainty in fire behavior predictions arising from variability in weather inputs, uncertainty arising from the fire models themselves is difficult to assess. This is...
ERIC Educational Resources Information Center
Qian, Manman; Chukharev-Hudilainen, Evgeny; Levis, John
2018-01-01
Many types of L2 phonological perception are often difficult to acquire without instruction. These difficulties with perception may also be related to intelligibility in production. Instruction on perception contrasts is more likely to be successful with the use of phonetically variable input made available through computer-assisted pronunciation…
RAWS II: A Multiple Regression Analysis Program
This memorandum gives instructions for the use and operation of a revised version of RAWS, a multiple regression analysis program. The program...of preprocessed data, the directed retention of variables, listing of the matrix of the normal equations and its inverse, and the bypassing of the regression analysis to provide the input variable statistics only. (Author)
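A minimal sketch of the normal-equations route the memorandum refers to, using synthetic data (the variable values are illustrative); note that modern practice prefers a least-squares solver over explicit inversion for numerical stability:

```python
import numpy as np

# Synthetic data: 20 observations of 2 predictors plus an intercept column
rng = np.random.default_rng(0)
X = np.column_stack([np.ones(20), rng.normal(size=(20, 2))])
y = X @ np.array([1.0, 2.0, -0.5]) + rng.normal(scale=0.1, size=20)

XtX = X.T @ X                 # matrix of the normal equations
XtX_inv = np.linalg.inv(XtX)  # its inverse, as listed by the program
beta = XtX_inv @ (X.T @ y)    # regression coefficients
print(beta)
# Numerically preferable alternative: np.linalg.lstsq(X, y, rcond=None)[0]
```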
Vegetation and environmental controls on soil respiration in a pinon-juniper woodland
Sandra A. White
2008-01-01
Soil respiration (RS) responds to changes in plant and microbial activity and environmental conditions. In arid ecosystems of the southwestern USA, soil moisture exhibits large fluctuations because annual and seasonal precipitation inputs are highly variable, with increased variability expected in the future. Patterns of soil moisture, and periodic severe drought, are...
Space sickness predictors suggest fluid shift involvement and possible countermeasures
NASA Technical Reports Server (NTRS)
Simanonok, K. E.; Moseley, E. C.; Charles, J. B.
1992-01-01
Preflight data from 64 first-time Shuttle crew members were examined retrospectively to predict space sickness severity (NONE, MILD, MODERATE, or SEVERE) by discriminant analysis. From 9 input variables relating to fluid, electrolyte, and cardiovascular status, 8 variables were chosen by discriminant analysis that correctly predicted space sickness severity with 59% success by one method of cross-validation on the original sample and 67% by another method. The 8 variables, in order of their importance for predicting space sickness severity, are sitting systolic blood pressure, serum uric acid, calculated blood volume, serum phosphate, urine osmolality, environmental temperature at the launch site, red cell count, and serum chloride. These results suggest the presence of predisposing physiologic factors to space sickness that implicate a fluid shift etiology. Addition of a 10th input variable, hours spent in the Weightless Environment Training Facility (WETF), improved the prediction of space sickness severity to 66% success by the first method of cross-validation on the original sample and to 71% by the second method. The data suggest that WETF training may reduce space sickness severity.
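As a sketch of the kind of analysis described (with synthetic stand-ins for the preflight variables and severity ratings; this is not NASA's data or code):

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n = 64                               # crew members
X = rng.normal(size=(n, 9))          # stand-ins for the 9 preflight variables
y = rng.integers(0, 4, size=n)       # 0..3 for NONE / MILD / MODERATE / SEVERE

lda = LinearDiscriminantAnalysis()
scores = cross_val_score(lda, X, y, cv=3)  # cross-validated prediction success
print(f"mean accuracy: {scores.mean():.2f}")  # near chance here, since the data are random
```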
Value of Construction Company and its Dependence on Significant Variables
NASA Astrophysics Data System (ADS)
Vítková, E.; Hromádka, V.; Ondrušková, E.
2017-10-01
The paper deals with the assessment of the value of a construction company, considering the usable approaches and the determinable variables. The reasons for assessing the value of a construction company differ; the most important are the sale or purchase of the company, its liquidation, or its merger with another entity. Depending on the purpose of the valuation, theoretically different approaches can be applied, chiefly the yield (income) method of valuation and the proprietary (asset-based) method of valuation. Both approaches depend on detailed input variables, whose quality will influence the final assessment of the company's value. The main objective of the paper is to suggest, on the basis of the analysis, possible ways of determining the input variables, mainly in the form of expected cash flows or profit. The paper focuses chiefly on the use of time series analysis, regression analysis and mathematical simulation. As the output, the results of the analysis are demonstrated on a case study.
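A minimal Monte Carlo sketch of the income (yield) approach the paper discusses, discounting simulated cash flows under uncertain growth (all figures below are illustrative assumptions, not the case-study values):

```python
import numpy as np

rng = np.random.default_rng(7)

def simulated_present_value(cf0=1.0e6, years=5, growth_mu=0.03,
                            growth_sd=0.05, discount=0.10, n_sims=10_000):
    """Present value of simulated future cash flows (illustrative parameters)."""
    growth = rng.normal(growth_mu, growth_sd, size=(n_sims, years))
    cash_flows = cf0 * np.cumprod(1.0 + growth, axis=1)
    factors = (1.0 + discount) ** np.arange(1, years + 1)
    return (cash_flows / factors).sum(axis=1)

values = simulated_present_value()
print(f"median value: {np.median(values):,.0f}")  # a distribution, not a point estimate
```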
Electrical resistivity tomography to delineate greenhouse soil variability
NASA Astrophysics Data System (ADS)
Rossi, R.; Amato, M.; Bitella, G.; Bochicchio, R.
2013-03-01
Appropriate management of soil spatial variability is an important tool for optimizing farming inputs, increasing yield and reducing the environmental impact of field crops. In greenhouses, several factors such as non-uniform irrigation and localized soil compaction can severely affect yield and quality. Additionally, if soil spatial variability is not taken into account, yield deficiencies are often compensated for with extra volumes of crop inputs; as a result, over-irrigation and over-fertilization may occur in some parts of the field. Technology for spatially sound management of greenhouse crops is therefore needed to increase yield and quality and to address sustainability. In this experiment, 2D electrical resistivity tomography was used as an exploratory tool to characterize greenhouse soil variability and its relation to wild rocket yield. Soil resistivity matched biomass variation well (R2 = 0.70) and was linked to differences in soil bulk density (R2 = 0.90) and clay content (R2 = 0.77). Electrical resistivity tomography shows great potential in horticulture, where a growing demand for sustainability is coupled with the necessity of stabilizing yield and product quality.
Robustness-Based Design Optimization Under Data Uncertainty
NASA Technical Reports Server (NTRS)
Zaman, Kais; McDonald, Mark; Mahadevan, Sankaran; Green, Lawrence
2010-01-01
This paper proposes formulations and algorithms for design optimization under both aleatory uncertainty (i.e., natural or physical variability) and epistemic uncertainty (i.e., imprecise probabilistic information), from the perspective of system robustness. The proposed formulations deal with epistemic uncertainty arising from both sparse and interval data without any assumption about the probability distributions of the random variables. A decoupled approach is proposed to un-nest the robustness-based design from the analysis of non-design epistemic variables in order to achieve computational efficiency. The proposed methods are illustrated for the upper-stage design problem of a two-stage-to-orbit (TSTO) vehicle, where the information on the random design inputs is only available as sparse point and/or interval data. As collecting more data reduces uncertainty but increases cost, the effect of sample size on the optimality and robustness of the solution is also studied. A method is developed to determine the optimal sample size for sparse point data that leads to solutions of the design problem that are least sensitive to variations in the input random variables.
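A generic robustness-based formulation of this type (stated in our notation; the paper's exact objective may differ) trades off the mean and the variability of the performance function under the input uncertainty:

\[
\min_{d} \; w_1 \frac{\mu_f(d)}{\mu_f^{*}} + w_2 \frac{\sigma_f(d)}{\sigma_f^{*}}
\quad \text{subject to} \quad g_j(d) \le 0,
\]

where μ_f and σ_f are the mean and standard deviation of the objective over the aleatory and epistemic inputs, normalized by their individually optimized values μ_f* and σ_f*, and the weights w_1 + w_2 = 1 balance the two goals.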
Some Insights of Spectral Optimization in Ocean Color Inversion
NASA Technical Reports Server (NTRS)
Lee, Zhongping; Franz, Bryan; Shang, Shaoling; Dong, Qiang; Arnone, Robert
2011-01-01
In past decades, various algorithms have been developed for the retrieval of water constituents from measurements of ocean color radiometry, and one of these approaches is spectral optimization. This approach defines an error target (or error function) between the input remote sensing reflectance and the output remote sensing reflectance, with the latter modeled with a few variables that represent the optically active properties (such as the absorption coefficient of phytoplankton and the backscattering coefficient of particles). The values of the variables when the error reaches a minimum (optimization is achieved) are considered the properties that form the input remote sensing reflectance; in other words, the equations are solved numerically. Applications of this approach implicitly assume that the error is a monotonic function of the various variables. Here, with data from numerical simulations and field measurements, we show the shape of the error surface, in order to justify the feasibility of finding a solution for the various variables. In addition, because the spectral properties could be modeled differently, the impacts of such differences on the error surface, as well as on the retrievals, are also presented.
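A minimal sketch of the spectral-optimization loop itself, with a toy forward model standing in for the semi-analytical reflectance model (the model coefficients and the two retrieved variables below are placeholders, not the authors' parameterization):

```python
import numpy as np
from scipy.optimize import minimize

wavelengths = np.array([412.0, 443.0, 490.0, 555.0, 670.0])

def forward_rrs(aph440, bbp440):
    """Toy forward model for remote sensing reflectance (illustrative only)."""
    aph = aph440 * np.exp(-0.015 * (wavelengths - 440.0))   # phytoplankton absorption
    bbp = bbp440 * (440.0 / wavelengths)                    # particle backscattering
    aw = 0.005 + 0.0001 * (wavelengths - 412.0)             # crude water absorption stand-in
    u = bbp / (aw + aph + bbp)
    return 0.089 * u + 0.125 * u ** 2                       # Gordon-style quadratic

rrs_measured = forward_rrs(0.05, 0.004)  # synthetic "input" spectrum

def error(params):
    # Sum-of-squares misfit between modeled and measured reflectance
    return np.sum((forward_rrs(*params) - rrs_measured) ** 2)

res = minimize(error, x0=[0.02, 0.002], bounds=[(1e-4, 1.0), (1e-5, 0.1)])
print(res.x)  # retrieved (aph440, bbp440) at the error minimum
```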