Sample records for important input variables

  1. Influential input classification in probabilistic multimedia models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Maddalena, Randy L.; McKone, Thomas E.; Hsieh, Dennis P.H.

    1999-05-01

    Monte Carlo analysis is a statistical simulation method that is often used to assess and quantify the outcome variance in complex environmental fate and effects models. Total outcome variance of these models is a function of (1) the uncertainty and/or variability associated with each model input and (2) the sensitivity of the model outcome to changes in the inputs. To propagate variance through a model using Monte Carlo techniques, each variable must be assigned a probability distribution. The validity of these distributions directly influences the accuracy and reliability of the model outcome. To efficiently allocate resources for constructing distributions one should first identify the most influential set of variables in the model. Although existing sensitivity and uncertainty analysis methods can provide a relative ranking of the importance of model inputs, they fail to identify the minimum set of stochastic inputs necessary to sufficiently characterize the outcome variance. In this paper, we describe and demonstrate a novel sensitivity/uncertainty analysis method for assessing the importance of each variable in a multimedia environmental fate model. Our analyses show that for a given scenario, a relatively small number of input variables influence the central tendency of the model and an even smaller set determines the shape of the outcome distribution. For each input, the level of influence depends on the scenario under consideration. This information is useful for developing site specific models and improving our understanding of the processes that have the greatest influence on the variance in outcomes from multimedia models.
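
    A minimal sketch of the variance-propagation idea described in this record: assign each input a distribution, push samples through a toy model, and screen for influential inputs by freezing them one at a time. The model form, input names and distributions below are invented for illustration, not taken from the paper.

```python
# Hedged sketch: Monte Carlo variance propagation through a toy fate model.
import numpy as np

rng = np.random.default_rng(0)
N = 100_000

# Each input gets a probability distribution (hypothetical choices).
inputs = {
    "emission_rate":  rng.lognormal(mean=0.0, sigma=0.5, size=N),
    "half_life":      rng.lognormal(mean=2.0, sigma=0.3, size=N),
    "partition_coef": rng.normal(loc=3.0, scale=0.2, size=N),
}

def fate_model(x):
    # Stand-in for a multimedia fate model: source strength times
    # persistence, divided by partitioning out of the medium.
    return x["emission_rate"] * x["half_life"] / x["partition_coef"]

y = fate_model(inputs)
print(f"total outcome variance: {y.var():.3f}")

# Crude influence screen: freeze one input at its median and see how much
# of the outcome variance disappears.
for name in inputs:
    frozen = dict(inputs)
    frozen[name] = np.full(N, np.median(inputs[name]))
    print(f"{name:15s} variance with this input frozen: {fate_model(frozen).var():.3f}")
```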

  2. Input variable selection and calibration data selection for storm water quality regression models.

    PubMed

    Sun, Siao; Bertrand-Krajewski, Jean-Luc

    2013-01-01

    Storm water quality models are useful tools in storm water management. Interest has been growing in analyzing existing data for developing models for urban storm water quality evaluations. It is important to select appropriate model inputs when many candidate explanatory variables are available. Model calibration and verification are essential steps in any storm water quality modeling. This study investigates input variable selection and calibration data selection in storm water quality regression models. The two selection problems interact with each other, so a procedure is developed to fulfil the two selection tasks in sequence. The procedure first selects model input variables using a cross validation method. An appropriate number of variables are identified as model inputs to ensure that a model is neither overfitted nor underfitted. Based on the model input selection results, calibration data selection is studied. Uncertainty of model performance due to calibration data selection is investigated with a random selection method. An approach using the cluster method is applied to enhance model calibration practice, based on the principle of selecting representative data for calibration. The comparison between results from the cluster selection method and random selection shows that the former can significantly improve the performance of calibrated models. It is found that the information content of the calibration data is important in addition to its size.
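
    A hedged sketch of the first selection task: forward input selection scored by cross-validation, stopping once an additional variable no longer improves the score, so the model is neither overfitted nor underfitted. The data, model class and stopping rule are illustrative assumptions, not the paper's exact procedure.

```python
# Hedged sketch: cross-validated forward selection of model inputs.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n, p = 120, 8              # e.g. 120 storm events, 8 candidate explanatory variables
X = rng.normal(size=(n, p))
y = 2.0 * X[:, 0] - 1.5 * X[:, 3] + rng.normal(scale=0.5, size=n)

selected, remaining, best_score = [], list(range(p)), -np.inf
while remaining:
    # Try adding each remaining candidate; keep the one with the best CV score.
    scores = {j: cross_val_score(LinearRegression(), X[:, selected + [j]], y,
                                 cv=5, scoring="r2").mean() for j in remaining}
    j_best = max(scores, key=scores.get)
    if scores[j_best] <= best_score:
        break                          # more inputs no longer help: stop
    best_score = scores[j_best]
    selected.append(j_best)
    remaining.remove(j_best)

print("selected inputs:", selected, "CV R2:", round(best_score, 3))
```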

  3. Variance-based interaction index measuring heteroscedasticity

    NASA Astrophysics Data System (ADS)

    Ito, Keiichi; Couckuyt, Ivo; Poles, Silvia; Dhaene, Tom

    2016-06-01

    This work is motivated by the need to deal with models with high-dimensional input spaces of real variables. One way to tackle high-dimensional problems is to identify interaction or non-interaction among input parameters. We propose a new variance-based sensitivity interaction index that can detect and quantify interactions among the input variables of mathematical functions and computer simulations. The computation is very similar to that of Sobol' first-order sensitivity indices. The proposed interaction index can quantify the relative importance of input variables in interaction. Furthermore, detection of non-interaction for screening can be done with as few as 4n + 2 function evaluations, where n is the number of input variables. Using the interaction indices based on heteroscedasticity, the original function may be decomposed into a set of lower-dimensional functions which may then be analyzed separately.
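
    For context, a hedged sketch of the classical first-order Sobol' computation that the proposed index is said to resemble, using a pick-freeze design on the Ishigami test function (a standard benchmark, not the paper's example).

```python
# Hedged sketch: first-order Sobol' indices via a pick-freeze design.
import numpy as np

rng = np.random.default_rng(2)

def f(x):  # Ishigami test function, with known strong interactions
    return np.sin(x[:, 0]) + 7.0 * np.sin(x[:, 1])**2 \
        + 0.1 * x[:, 2]**4 * np.sin(x[:, 0])

n_vars, N = 3, 200_000
A = rng.uniform(-np.pi, np.pi, size=(N, n_vars))
B = rng.uniform(-np.pi, np.pi, size=(N, n_vars))
yA, yB = f(A), f(B)
var_y = yA.var()

for i in range(n_vars):
    BAi = B.copy()
    BAi[:, i] = A[:, i]      # "freeze" variable i at the A sample
    S1 = np.mean(yA * (f(BAi) - yB)) / var_y
    print(f"S1[x{i}] ~ {S1:.3f}")
```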

  4. Artificial Neural Network and Genetic Algorithm Hybrid Intelligence for Predicting Thai Stock Price Index Trend

    PubMed Central

    Boonjing, Veera; Intakosum, Sarun

    2016-01-01

    This study investigated the use of Artificial Neural Network (ANN) and Genetic Algorithm (GA) for prediction of Thailand's SET50 index trend. ANN is a widely accepted machine learning method that uses past data to predict future trend, while GA is an algorithm that can find better subsets of input variables for importing into ANN, hence enabling more accurate prediction by its efficient feature selection. The input data were technical indicators highly regarded by stock analysts, each represented by 4 input variables based on past time spans of 4 different lengths: 3-, 5-, 10-, and 15-day spans before the day of prediction. This generated a large set of diverse input variables whose exponentially many possible subsets GA culled down to a manageable number of more effective ones. SET50 index data of the past 6 years, from 2009 to 2014, were used to evaluate the prediction accuracy of this hybrid intelligence, and its predictions were found to be more accurate than those made by a method using only one input variable for one fixed length of past time span. PMID:27974883

  5. Artificial Neural Network and Genetic Algorithm Hybrid Intelligence for Predicting Thai Stock Price Index Trend.

    PubMed

    Inthachot, Montri; Boonjing, Veera; Intakosum, Sarun

    2016-01-01

    This study investigated the use of Artificial Neural Network (ANN) and Genetic Algorithm (GA) for prediction of Thailand's SET50 index trend. ANN is a widely accepted machine learning method that uses past data to predict future trend, while GA is an algorithm that can find better subsets of input variables for importing into ANN, hence enabling more accurate prediction by its efficient feature selection. The input data were technical indicators highly regarded by stock analysts, each represented by 4 input variables based on past time spans of 4 different lengths: 3-, 5-, 10-, and 15-day spans before the day of prediction. This generated a large set of diverse input variables whose exponentially many possible subsets GA culled down to a manageable number of more effective ones. SET50 index data of the past 6 years, from 2009 to 2014, were used to evaluate the prediction accuracy of this hybrid intelligence, and its predictions were found to be more accurate than those made by a method using only one input variable for one fixed length of past time span.
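
    A minimal sketch of the GA-driven selection of ANN input subsets described in the two records above: individuals are bitmasks over the candidate inputs and fitness is the cross-validated accuracy of a small network. The data, GA settings and network size are placeholders, not the study's actual configuration.

```python
# Hedged sketch: genetic-algorithm feature-subset selection for an ANN.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
n, p = 300, 16                # e.g. 16 candidate technical-indicator variables
X = rng.normal(size=(n, p))
y = (X[:, 0] + X[:, 5] - X[:, 9] + rng.normal(scale=0.5, size=n) > 0).astype(int)

def fitness(mask):
    if not mask.any():
        return 0.0
    clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=300, random_state=0)
    return cross_val_score(clf, X[:, mask], y, cv=3).mean()

pop = rng.random((10, p)) < 0.5                # population of input-subset bitmasks
for generation in range(5):
    fit = np.array([fitness(m) for m in pop])
    parents = pop[np.argsort(fit)[-5:]]        # truncation selection
    kids = []
    for _ in range(len(pop)):
        a, b = parents[rng.integers(5)], parents[rng.integers(5)]
        cut = rng.integers(1, p)               # one-point crossover
        child = np.concatenate([a[:cut], b[cut:]])
        child ^= rng.random(p) < 0.05          # bit-flip mutation
        kids.append(child)
    pop = np.array(kids)

best = max(pop, key=fitness)
print("selected inputs:", np.flatnonzero(best), "CV accuracy:", round(fitness(best), 3))
```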

  6. A new interpretation and validation of variance based importance measures for models with correlated inputs

    NASA Astrophysics Data System (ADS)

    Hao, Wenrui; Lu, Zhenzhou; Li, Luyi

    2013-05-01

    In order to explore the contributions of correlated input variables to the variance of the output, a novel interpretation framework of importance measure indices is proposed for models with correlated inputs, which includes the indices of the total correlated contribution and the total uncorrelated contribution. The proposed indices accurately describe the contributions of a correlated input to the variance of the output, and they can be viewed as the complement and correction of the interpretation of the contributions by correlated inputs presented in "Estimation of global sensitivity indices for models with dependent variables, Computer Physics Communications, 183 (2012) 937-946". Both indices contain the independent contribution of an individual input. Taking the general quadratic polynomial as an illustration, the total correlated contribution and the independent contribution of an individual input are derived analytically, from which the components of both contributions of a correlated input, and their origins, can be clarified without ambiguity. In the special case that no square term is included in the quadratic polynomial model, the total correlated contribution of an input can be further decomposed into the variance contribution related to the correlation of the input with other inputs and the independent contribution of the input itself, and the total uncorrelated contribution can be further decomposed into the independent part due to interaction between the input and others and the independent part due to the input itself. Numerical examples demonstrate that the derived analytical expressions of the variance-based importance measure are correct, and that the analytical clarification of the contribution of a correlated input to the model output is very important for extending the theory and solutions for uncorrelated inputs to correlated ones.

  7. Sensitivity and uncertainty of input sensor accuracy for grass-based reference evapotranspiration

    USDA-ARS's Scientific Manuscript database

    Quantification of evapotranspiration (ET) in agricultural environments is becoming of increasing importance throughout the world, thus understanding input variability of relevant sensors is of paramount importance as well. The Colorado Agricultural and Meteorological Network (CoAgMet) and the Florid...

  8. Analytic uncertainty and sensitivity analysis of models with input correlations

    NASA Astrophysics Data System (ADS)

    Zhu, Yueying; Wang, Qiuping A.; Li, Wei; Cai, Xu

    2018-03-01

    Probabilistic uncertainty analysis is a common means of evaluating mathematical models. In mathematical modeling, the uncertainty in input variables is specified through distribution laws. Its contribution to the uncertainty in the model response is usually analyzed by assuming that input variables are independent of each other. However, correlated parameters often occur in practical applications. In the present paper, an analytic method is built for the uncertainty and sensitivity analysis of models in the presence of input correlations. With the method, it is straightforward to identify the importance of the independence and correlations of input variables in determining the model response. This allows one to decide whether or not the input correlations should be considered in practice. Numerical examples demonstrate the effectiveness and validity of our analytic method in the analysis of general models. The method is also applied to the uncertainty and sensitivity analysis of a deterministic HIV model.
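
    A small worked example in the spirit of this record (the coefficients and correlations are invented): for a linear model y = aᵀx with jointly Gaussian inputs, the output variance aᵀΣa splits exactly into an independent part and a correlation part, so the importance of the input correlations can be read off analytically and checked by Monte Carlo.

```python
# Hedged sketch: analytic output variance of a linear model with correlated inputs.
import numpy as np

a = np.array([1.0, 2.0, -1.5])          # model coefficients (illustrative)
sd = np.array([0.3, 0.2, 0.4])          # input standard deviations
R = np.array([[1.0,  0.6,  0.0],        # input correlation matrix
              [0.6,  1.0, -0.3],
              [0.0, -0.3,  1.0]])
Sigma = np.outer(sd, sd) * R            # covariance matrix

var_total = a @ Sigma @ a
var_indep = np.sum((a * sd)**2)         # variance if the inputs were independent
print(f"total {var_total:.4f} = independent {var_indep:.4f} "
      f"+ correlation part {var_total - var_indep:.4f}")

# Monte Carlo check of the analytic result.
x = np.random.default_rng(4).multivariate_normal(np.zeros(3), Sigma, size=200_000)
print(f"Monte Carlo variance: {np.var(x @ a):.4f}")
```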

  9. INFANT HEALTH PRODUCTION FUNCTIONS: WHAT A DIFFERENCE THE DATA MAKE

    PubMed Central

    Reichman, Nancy E.; Corman, Hope; Noonan, Kelly; Dave, Dhaval

    2008-01-01

    We examine the extent to which infant health production functions are sensitive to model specification and measurement error. We focus on the importance of typically unobserved but theoretically important variables (typically unobserved variables, TUVs), other non-standard covariates (NSCs), input reporting, and characterization of infant health. The TUVs represent wantedness, taste for risky behavior, and maternal health endowment. The NSCs include father characteristics. We estimate the effects of prenatal drug use, prenatal cigarette smoking, and first-trimester prenatal care on birth weight, low birth weight, and a measure of abnormal infant health conditions. We compare estimates using self-reported inputs versus input measures that combine information from medical records and self-reports. We find that TUVs and NSCs are significantly associated with both inputs and outcomes, but that excluding them from infant health production functions does not appreciably affect the input estimates. However, using self-reported inputs leads to overestimated effects of inputs, particularly prenatal care, on outcomes, and using a direct measure of infant health does not always yield input estimates similar to those obtained when using birth weight outcomes. The findings have implications for research, data collection, and public health policy. PMID:18792077

  10. Effects of input uncertainty on cross-scale crop modeling

    NASA Astrophysics Data System (ADS)

    Waha, Katharina; Huth, Neil; Carberry, Peter

    2014-05-01

    The quality of data on climate, soils and agricultural management in the tropics is in general low, or data are scarce, leading to uncertainty in process-based modeling of cropping systems. Process-based crop models are common tools for simulating crop yields and crop production in climate change impact studies, studies on mitigation and adaptation options, or food security studies. Crop modelers are concerned about input data accuracy as this, together with an adequate representation of plant physiology processes and choice of model parameters, is the key factor for a reliable simulation. For example, assuming an error in measurements of air temperature, radiation and precipitation of ±0.2°C, ±2% and ±3% respectively, Fodor & Kovacs (2005) estimate that this translates into an uncertainty of 5-7% in yield and biomass simulations. In our study we seek to answer the following questions: (1) are there important uncertainties in the spatial variability of simulated crop yields on the grid-cell level displayed on maps, (2) are there important uncertainties in the temporal variability of simulated crop yields on the aggregated, national level displayed in time series, and (3) how does the accuracy of different soil, climate and management information influence the simulated crop yields in two crop models designed for use at different spatial scales? The study will help to determine whether more detailed information improves the simulations and to advise model users on the uncertainty related to input data. We analyse the performance of the point-scale crop model APSIM (Keating et al., 2003) and the global-scale crop model LPJmL (Bondeau et al., 2007) with different climate information (monthly and daily) and soil conditions (global soil map and African soil map) under different agricultural management (uniform and variable sowing dates) for the low-input maize-growing areas in Burkina Faso/West Africa. We test the models' response to different levels of input data, from very little to very detailed information, and compare the models' abilities to represent the spatial variability and temporal variability in crop yields. We display the uncertainty in crop yield simulations from different input data and crop models in Taylor diagrams, which are a graphical summary of the similarity between simulations and observations (Taylor, 2001). The observed spatial variability can be represented well by both models (R=0.6-0.8), but APSIM predicts higher spatial variability than LPJmL due to its sensitivity to soil parameters. Simulations with the same crop model, climate and sowing dates have similar statistics and therefore similar skill in reproducing the observed spatial variability. Soil data are less important for the skill of a crop model in reproducing the observed spatial variability. However, the uncertainty in simulated spatial variability from the two crop models is larger than that from input data settings, and APSIM is more sensitive to input data than LPJmL. Even with a detailed, point-scale crop model and detailed input data it is difficult to capture the complexity and diversity in maize cropping systems.
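
    For reference, the three statistics a Taylor diagram summarizes can be computed directly; a hedged sketch with made-up "observed" and "simulated" yields follows (the numbers are not from the study).

```python
# Hedged sketch: the statistics behind a Taylor diagram (Taylor, 2001).
import numpy as np

def taylor_stats(sim, obs):
    r = np.corrcoef(sim, obs)[0, 1]                 # pattern correlation
    sd_ratio = sim.std() / obs.std()                # variability ratio
    crmsd = np.sqrt(np.mean(((sim - sim.mean()) - (obs - obs.mean()))**2))
    return r, sd_ratio, crmsd                       # centered RMS difference

rng = np.random.default_rng(5)
obs = rng.normal(2.0, 0.8, size=30)                 # e.g. observed yields at 30 locations
sim = obs * 1.2 + rng.normal(0.0, 0.4, size=30)     # a simulation with inflated variability
r, sd_ratio, crmsd = taylor_stats(sim, obs)
print(f"R = {r:.2f}, sd ratio = {sd_ratio:.2f}, centered RMSD = {crmsd:.2f}")
```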

  11. Sobol' sensitivity analysis for stressor impacts on honeybee ...

    EPA Pesticide Factsheets

    We employ Monte Carlo simulation and nonlinear sensitivity analysis techniques to describe the dynamics of a bee exposure model, VarroaPop. Daily simulations are performed of hive population trajectories, taking into account queen strength, foraging success, mite impacts, weather, colony resources, population structure, and other important variables. This allows us to test the effects of defined pesticide exposure scenarios versus controlled simulations that lack pesticide exposure. The daily resolution of the model also allows us to conditionally identify sensitivity metrics. We use the variance-based global decomposition sensitivity analysis method, Sobol', to assess first- and second-order parameter sensitivities within VarroaPop, allowing us to determine how variance in the output is attributed to each of the input variables across different exposure scenarios. Simulations with VarroaPop indicate queen strength, forager life span and pesticide toxicity parameters are consistent, critical inputs for colony dynamics. Further analysis also reveals that the relative importance of these parameters fluctuates throughout the simulation period according to the status of other inputs. Our preliminary results show that model variability is conditional and can be attributed to different parameters depending on different timescales. By using sensitivity analysis to assess model output and variability, calibrations of simulation models can be better informed to yield more
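
    A hedged sketch of a Sobol' first- and total-order decomposition using the SALib package on a stand-in function; the parameter names, bounds and model below are invented placeholders, not VarroaPop itself.

```python
# Hedged sketch: Sobol' sensitivity decomposition with SALib.
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

problem = {
    "num_vars": 3,
    "names": ["queen_strength", "forager_lifespan", "pesticide_toxicity"],
    "bounds": [[1.0, 5.0], [4.0, 16.0], [0.01, 1.0]],
}

X = saltelli.sample(problem, 1024)          # N * (2D + 2) parameter sets

def toy_colony_model(x):                    # placeholder for a VarroaPop run
    q, f, tox = x[:, 0], x[:, 1], x[:, 2]
    return q * f * np.exp(-tox) + 0.1 * q

Y = toy_colony_model(X)
Si = sobol.analyze(problem, Y)
for name, s1, st in zip(problem["names"], Si["S1"], Si["ST"]):
    print(f"{name:20s} S1 = {s1:5.2f}   ST = {st:5.2f}")
```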

  12. Using a Bayesian network to predict barrier island geomorphologic characteristics

    USGS Publications Warehouse

    Gutierrez, Ben; Plant, Nathaniel G.; Thieler, E. Robert; Turecek, Aaron

    2015-01-01

    Quantifying geomorphic variability of coastal environments is important for understanding and describing the vulnerability of coastal topography, infrastructure, and ecosystems to future storms and sea level rise. Here we use a Bayesian network (BN) to test the importance of multiple interactions between barrier island geomorphic variables. This approach models complex interactions and handles uncertainty, which is intrinsic to future sea level rise, storminess, or anthropogenic processes (e.g., beach nourishment and other forms of coastal management). The BN was developed and tested at Assateague Island, Maryland/Virginia, USA, a barrier island with sufficient geomorphic and temporal variability to evaluate our approach. We tested the ability to predict dune height, beach width, and beach height variables using inputs that included longer-term, larger-scale, or external variables (historical shoreline change rates, distances to inlets, barrier width, mean barrier elevation, and anthropogenic modification). Data sets from three different years spanning nearly a decade sampled substantial temporal variability and serve as a proxy for analysis of future conditions. We show that distinct geomorphic conditions are associated with different long-term shoreline change rates and that the most skillful predictions of dune height, beach width, and beach height depend on including multiple input variables simultaneously. The predictive relationships are robust to variations in the amount of input data and to variations in model complexity. The resulting model can be used to evaluate scenarios related to coastal management plans and/or future scenarios where shoreline change rates may differ from those observed historically.
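
    A toy illustration of the approach (structure and probabilities invented, not the trained Assateague network): a small discrete Bayesian network that predicts a dune-height class from shoreline change and barrier width and is then queried with evidence, sketched here with the pgmpy package.

```python
# Hedged sketch: a tiny discrete Bayesian network for barrier-island variables.
from pgmpy.models import BayesianNetwork
from pgmpy.factors.discrete import TabularCPD
from pgmpy.inference import VariableElimination

model = BayesianNetwork([("shoreline_change", "dune_height"),
                         ("barrier_width", "dune_height")])

cpd_sc = TabularCPD("shoreline_change", 2, [[0.6], [0.4]])   # 0=accreting, 1=eroding
cpd_bw = TabularCPD("barrier_width", 2, [[0.5], [0.5]])      # 0=narrow, 1=wide
cpd_dh = TabularCPD("dune_height", 2,                        # 0=low, 1=high
                    [[0.3, 0.5, 0.9, 0.6],                   # P(low  | sc, bw)
                     [0.7, 0.5, 0.1, 0.4]],                  # P(high | sc, bw)
                    evidence=["shoreline_change", "barrier_width"],
                    evidence_card=[2, 2])
model.add_cpds(cpd_sc, cpd_bw, cpd_dh)

posterior = VariableElimination(model).query(
    ["dune_height"], evidence={"shoreline_change": 1, "barrier_width": 0})
print(posterior)
```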

  13. Exploring the Impact of Different Input Data Types on Soil Variable Estimation Using the ICRAF-ISRIC Global Soil Spectral Database.

    PubMed

    Aitkenhead, Matt J; Black, Helaina I J

    2018-02-01

    Using the International Centre for Research in Agroforestry-International Soil Reference and Information Centre (ICRAF-ISRIC) global soil spectroscopy database, models were developed to estimate a number of soil variables using different input data types. These input types included: (1) site data only; (2) visible-near-infrared (Vis-NIR) diffuse reflectance spectroscopy only; (3) combined site and Vis-NIR data; (4) red-green-blue (RGB) color data only; and (5) combined site and RGB color data. The models produced variable estimation accuracy, with RGB only being generally worst and spectroscopy plus site being best. However, we showed that for certain variables, estimation accuracy levels achieved with the "site plus RGB input data" were sufficiently good to provide useful estimates (r² > 0.7). These included major elements (Ca, Si, Al, Fe), organic carbon, and cation exchange capacity. Estimates for bulk density, carbon-to-nitrogen ratio (C/N), and P were moderately good, but K was not well estimated using this model type. For the "spectra plus site" model, many more variables were well estimated, including many that are important indicators for agricultural productivity and soil health. Sum of cations, electrical conductivity, Si, Ca, and Al oxides, and C/N ratio were estimated using this approach with r² values > 0.9. This work provides a mechanism for identifying the cost-effectiveness of using different model input data, with associated costs, for estimating soil variables to required levels of accuracy.
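
    A hedged sketch of the comparison design: fit one model per input-data type and compare r² on held-out data. The predictors below are random stand-ins for the site, spectra and RGB blocks, and the learner is an off-the-shelf random forest rather than the study's actual model.

```python
# Hedged sketch: comparing soil-variable estimation accuracy across input types.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(6)
n = 500
site = rng.normal(size=(n, 5))          # e.g. depth, pH, location descriptors
spectra = rng.normal(size=(n, 200))     # Vis-NIR reflectance bands
rgb = rng.normal(size=(n, 3))           # mean soil colour channels
y = site[:, 0] + spectra[:, 10] + 0.5 * spectra[:, 50] + rng.normal(scale=0.3, size=n)

input_sets = {
    "site only":      site,
    "spectra only":   spectra,
    "site + spectra": np.hstack([site, spectra]),
    "rgb only":       rgb,
    "site + rgb":     np.hstack([site, rgb]),
}
for label, X in input_sets.items():
    Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
    model = RandomForestRegressor(n_estimators=200, random_state=0).fit(Xtr, ytr)
    print(f"{label:15s} r2 = {r2_score(yte, model.predict(Xte)):.2f}")
```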

  14. A Framework to Guide the Assessment of Human-Machine Systems.

    PubMed

    Stowers, Kimberly; Oglesby, James; Sonesh, Shirley; Leyva, Kevin; Iwig, Chelsea; Salas, Eduardo

    2017-03-01

    We have developed a framework for guiding measurement in human-machine systems. The assessment of safety and performance in human-machine systems often relies on direct measurement, such as tracking reaction time and accidents. However, safety and performance emerge from the combination of several variables. The assessment of precursors to safety and performance is thus an important part of predicting and improving outcomes in human-machine systems. As part of an in-depth literature analysis involving peer-reviewed, empirical articles, we located and classified variables important to human-machine systems, giving a snapshot of the state of science on human-machine system safety and performance. Using this information, we created a framework of safety and performance in human-machine systems. This framework details several inputs and processes that collectively influence safety and performance. Inputs are divided according to human, machine, and environmental inputs. Processes are divided into attitudes, behaviors, and cognitive variables. Each class of inputs influences the processes and, subsequently, outcomes that emerge in human-machine systems. This framework offers a useful starting point for understanding the current state of the science and measuring many of the complex variables relating to safety and performance in human-machine systems. It can be applied to the design, development, and implementation of automated machines in spaceflight, military, and health care settings. We present a hypothetical example of how the framework can be used to aid in project success.

  15. Impact of clinical input variable uncertainties on ten-year atherosclerotic cardiovascular disease risk using new pooled cohort equations.

    PubMed

    Gupta, Himanshu; Schiros, Chun G; Sharifov, Oleg F; Jain, Apurva; Denney, Thomas S

    2016-08-31

    The recently released American College of Cardiology/American Heart Association (ACC/AHA) guideline recommends the Pooled Cohort equations for evaluating the atherosclerotic cardiovascular risk of individuals. The impact of clinical input variable uncertainties on the estimates of ten-year cardiovascular risk based on ACC/AHA guidelines is not known. Using the publicly available National Health and Nutrition Examination Survey dataset (2005-2010), we computed maximum and minimum ten-year cardiovascular risks by assuming clinically relevant uncertainties of 0-1 year in age input and ±10% variation in total cholesterol, high-density lipoprotein cholesterol, and systolic blood pressure, and by assuming a uniform distribution of the variance of each variable. We analyzed the changes in risk category compared to the actual inputs at the 5% and 7.5% risk limits, as these limits define the thresholds for consideration of drug therapy in the new guidelines. The new Pooled Cohort equations for risk estimation were implemented in a custom software package. Based on our input variances, changes in risk category were possible in up to 24% of the population cohort at both the 5% and 7.5% risk boundary limits. This trend was consistently noted across all subgroups except in African American males, where most of the cohort had ≥7.5% baseline risk regardless of the variation in the variables. The uncertainties in the input variables can alter the risk categorization. The impact of these variances on the ten-year risk needs to be incorporated into the patient/clinician discussion and clinical decision making. Incorporating good clinical practices for the measurement of critical clinical variables and robust standardization of laboratory parameters to more stringent reference standards is extremely important for successful implementation of the new guidelines. Furthermore, the ability to customize the risk calculator inputs to better represent unique clinical circumstances specific to individual needs would be highly desirable in future versions of the risk calculator.
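
    A hedged sketch of the perturbation experiment: sweep each input over its assumed uncertainty range and check whether the computed risk can cross a treatment threshold. The risk function below is a made-up logistic score standing in for the Pooled Cohort equations, and its coefficients are arbitrary.

```python
# Hedged sketch: can input uncertainty flip a ten-year risk category?
import itertools
import math

def risk(age, total_chol, hdl, sbp):
    # Stand-in logistic risk score, NOT the actual Pooled Cohort equations.
    z = -11.0 + 0.09 * age + 0.01 * total_chol - 0.03 * hdl + 0.02 * sbp
    return 1.0 / (1.0 + math.exp(-z))

base = dict(age=55, total_chol=210.0, hdl=50.0, sbp=130.0)
ranges = {
    "age":        (base["age"], base["age"] + 1),               # 0-1 year uncertainty
    "total_chol": (base["total_chol"] * 0.9, base["total_chol"] * 1.1),
    "hdl":        (base["hdl"] * 0.9, base["hdl"] * 1.1),       # +/-10 % variation
    "sbp":        (base["sbp"] * 0.9, base["sbp"] * 1.1),
}

risks = [risk(*combo) for combo in itertools.product(*ranges.values())]
r0, rmin, rmax = risk(**base), min(risks), max(risks)
for threshold in (0.05, 0.075):
    flips = (rmin < threshold) != (rmax < threshold)
    print(f"threshold {threshold:.1%}: baseline {r0:.1%}, "
          f"range {rmin:.1%}-{rmax:.1%}, category change possible: {flips}")
```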

  16. Propagation of variability in railway dynamic simulations: application to virtual homologation

    NASA Astrophysics Data System (ADS)

    Funfschilling, Christine; Perrin, Guillaume; Kraft, Sönke

    2012-01-01

    Railway dynamic simulations are increasingly used to predict and analyse the behaviour of the vehicle and of the track during their whole life cycle. Up to now, however, no simulation has been used in the certification procedure, even though the expected benefits are important: cheaper and shorter procedures, more objectivity, and better knowledge of the behaviour around critical situations. Deterministic simulations are nevertheless too limited to represent the full physics of the track/vehicle system, which contains several sources of variability: variability of the mechanical parameters of a train among a class of vehicles (mass, stiffness and damping of different suspensions), variability of the contact parameters (friction coefficient, wheel and rail profiles) and variability of the track design and quality. This variability plays an important role in the safety and the ride quality, and thus in the certification criteria. When using simulation for certification purposes, it therefore seems crucial to take into account the variability of the different inputs. The main goal of this article is thus to propose a method for introducing variability into railway dynamics. A four-step method is described, namely the definition of the stochastic problem, the modelling of the input variability, the propagation, and the analysis of the output. Each step is illustrated with railway examples.

  17. How model and input uncertainty impact maize yield simulations in West Africa

    NASA Astrophysics Data System (ADS)

    Waha, Katharina; Huth, Neil; Carberry, Peter; Wang, Enli

    2015-02-01

    Crop models are common tools for simulating crop yields and crop production in studies on food security and global change. Various uncertainties however exist, not only in the model design and model parameters, but also, and perhaps even more importantly, in the soil, climate and management input data. We analyze the performance of the point-scale crop model APSIM and the global-scale crop model LPJmL with different climate and soil conditions under different agricultural management in the low-input maize-growing areas of Burkina Faso, West Africa. We test the models' response to different levels of input information, from little to detailed information on soil, climate (1961-2000) and agricultural management, and compare the models' ability to represent the observed spatial (between locations) and temporal (between years) variability in crop yields. We found that the resolution of different soil, climate and management information influences the simulated crop yields in both models. However, the difference between models is larger than between input data, and larger between simulations with different climate and management information than between simulations with different soil information. The observed spatial variability can be represented well by both models, even with little information on soils and management, but APSIM simulates a higher variation between single locations than LPJmL. The agreement of simulated and observed temporal variability is lower due to non-climatic factors, e.g. investment in agricultural research and development in Burkina Faso between 1987 and 1991, which resulted in a doubling of maize yields. The findings of our study highlight the importance of scale and model choice and show that the most detailed input data does not necessarily improve model performance.

  18. Including long-range dependence in integrate-and-fire models of the high interspike-interval variability of cortical neurons.

    PubMed

    Jackson, B Scott

    2004-10-01

    Many different types of integrate-and-fire models have been designed in order to explain how it is possible for a cortical neuron to integrate over many independent inputs while still producing highly variable spike trains. Within this context, the variability of spike trains has been almost exclusively measured using the coefficient of variation of interspike intervals. However, another important statistical property that has been found in cortical spike trains and is closely associated with their high firing variability is long-range dependence. We investigate the conditions, if any, under which such models produce output spike trains with both interspike-interval variability and long-range dependence similar to those that have previously been measured from actual cortical neurons. We first show analytically that a large class of high-variability integrate-and-fire models is incapable of producing such outputs based on the fact that their output spike trains are always mathematically equivalent to renewal processes. This class of models subsumes a majority of previously published models, including those that use excitation-inhibition balance, correlated inputs, partial reset, or nonlinear leakage to produce outputs with high variability. Next, we study integrate-and-fire models that have (non-Poissonian) renewal point process inputs instead of the Poisson point process inputs used in the preceding class of models. The confluence of our analytical and simulation results implies that the renewal-input model is capable of producing high variability and long-range dependence comparable to that seen in spike trains recorded from cortical neurons, but only if the interspike intervals of the inputs have infinite variance, a physiologically unrealistic condition. Finally, we suggest a new integrate-and-fire model that does not suffer any of the previously mentioned shortcomings. By analyzing simulation results for this model, we show that it is capable of producing output spike trains with interspike-interval variability and long-range dependence that match empirical data from cortical spike trains. This model is similar to the other models in this study, except that its inputs are fractional-Gaussian-noise-driven Poisson processes rather than renewal point processes. In addition to this model's success in producing realistic output spike trains, its inputs have long-range dependence similar to that found in most subcortical neurons in sensory pathways, including the inputs to cortex. Analysis of output spike trains from simulations of this model also shows that a tight balance between the amounts of excitation and inhibition at the inputs to cortical neurons is not necessary for high interspike-interval variability at their outputs. Furthermore, in our analysis of this model, we show that the superposition of many fractional-Gaussian-noise-driven Poisson processes does not approximate a Poisson process, which challenges the common assumption that the total effect of a large number of inputs on a neuron is well represented by a Poisson process.
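
    A minimal sketch of the basic experimental object in this line of work: a leaky integrate-and-fire neuron driven by Poisson input spikes, with the interspike-interval coefficient of variation as the variability measure. All parameters are illustrative, and no long-range dependence is modeled in this stripped-down version.

```python
# Hedged sketch: ISI variability of a leaky integrate-and-fire neuron.
import numpy as np

rng = np.random.default_rng(11)
dt, T = 1e-4, 20.0                    # time step and duration (s)
tau, v_th, v_reset = 0.02, 1.0, 0.0   # membrane time constant, threshold, reset
rate_in, w = 400.0, 0.1               # summed Poisson input rate (Hz), synaptic weight

v, t, spikes = 0.0, 0.0, []
for _ in range(int(T / dt)):
    t += dt
    n_in = rng.poisson(rate_in * dt)      # input spikes arriving this step
    v += -v / tau * dt + w * n_in         # leak plus synaptic drive
    if v >= v_th:
        spikes.append(t)
        v = v_reset

isi = np.diff(spikes)
print(f"{len(spikes)} spikes, ISI CV = {isi.std() / isi.mean():.2f}")
```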

  19. Exemplar Variability Facilitates Retention of Word Learning by Children with Specific Language Impairment

    ERIC Educational Resources Information Center

    Aguilar, Jessica M.; Plante, Elena; Sandoval, Michelle

    2018-01-01

    Purpose: Variability in the input plays an important role in language learning. The current study examined the role of object variability for new word learning by preschoolers with specific language impairment (SLI). Method: Eighteen 4- and 5-year-old children with SLI were taught 8 new words in 3 short activities over the course of 3 sessions.…

  20. Variable response by aquatic invertebrates to experimental manipulations of leaf litter input into seasonal woodland ponds

    Treesearch

    Darold P. Batzer; Brian J. Palik

    2007-01-01

    Aquatic invertebrates are crucial components of foodwebs in seasonal woodland ponds, and leaf litter is probably the most important food resource for those organisms. We quantified the influence of leaf litter inputs on aquatic invertebrates in two seasonal woodland ponds using an interception experiment. Ponds were hydrologically split using a sandbag-plastic barrier...

  1. Does linguistic input play the same role in language learning for children with and without early brain injury?

    PubMed

    Rowe, Meredith L; Levine, Susan C; Fisher, Joan A; Goldin-Meadow, Susan

    2009-01-01

    Children with unilateral pre- or perinatal brain injury (BI) show remarkable plasticity for language learning. Previous work highlights the important role that lesion characteristics play in explaining individual variation in plasticity in the language development of children with BI. The current study examines whether the linguistic input that children with BI receive from their caregivers also contributes to this early plasticity, and whether linguistic input plays a similar role in children with BI as it does in typically developing (TD) children. Growth in vocabulary and syntactic production is modeled for 80 children (53 TD, 27 BI) between 14 and 46 months. Findings indicate that caregiver input is an equally potent predictor of vocabulary growth in children with BI and in TD children. In contrast, input is a more potent predictor of syntactic growth for children with BI than for TD children. Controlling for input, lesion characteristics (lesion size, type, seizure history) also affect the language trajectories of children with BI. Thus, findings illustrate how both variability in the environment (linguistic input) and variability in the organism (lesion characteristics) work together to contribute to plasticity in language learning.

  2. A Study on the Effects of Spatial Scale on Snow Process in Hyper-Resolution Hydrological Modelling over Mountainous Areas

    NASA Astrophysics Data System (ADS)

    Garousi Nejad, I.; He, S.; Tang, Q.; Ogden, F. L.; Steinke, R. C.; Frazier, N.; Tarboton, D. G.; Ohara, N.; Lin, H.

    2017-12-01

    Spatial scale is one of the main considerations in hydrological modeling of snowmelt in mountainous areas. The size of model elements controls the degree to which variability can be explicitly represented versus what needs to be parameterized using effective properties such as averages or other subgrid variability parameterizations that may degrade the quality of model simulations. For snowmelt modeling terrain parameters such as slope, aspect, vegetation and elevation play an important role in the timing and quantity of snowmelt that serves as an input to hydrologic runoff generation processes. In general, higher resolution enhances the accuracy of the simulation since fine meshes represent and preserve the spatial variability of atmospheric and surface characteristics better than coarse resolution. However, this increases computational cost and there may be a scale beyond which the model response does not improve due to diminishing sensitivity to variability and irreducible uncertainty associated with the spatial interpolation of inputs. This paper examines the influence of spatial resolution on the snowmelt process using simulations of and data from the Animas River watershed, an alpine mountainous area in Colorado, USA, using an unstructured distributed physically based hydrological model developed for a parallel computing environment, ADHydro. Five spatial resolutions (30 m, 100 m, 250 m, 500 m, and 1 km) were used to investigate the variations in hydrologic response. This study demonstrated the importance of choosing the appropriate spatial scale in the implementation of ADHydro to obtain a balance between representing spatial variability and the computational cost. According to the results, variation in the input variables and parameters due to using different spatial resolution resulted in changes in the obtained hydrological variables, especially snowmelt, both at the basin-scale and distributed across the model mesh.

  3. Input variable selection for data-driven models of Coriolis flowmeters for two-phase flow measurement

    NASA Astrophysics Data System (ADS)

    Wang, Lijuan; Yan, Yong; Wang, Xue; Wang, Tao

    2017-03-01

    Input variable selection is an essential step in the development of data-driven models for environmental, biological and industrial applications. Through input variable selection to eliminate irrelevant or redundant variables, a suitable subset of variables is identified as the input of a model. Meanwhile, through input variable selection the complexity of the model structure is simplified and the computational efficiency is improved. This paper describes the procedures of input variable selection for data-driven models for the measurement of liquid mass flowrate and gas volume fraction under two-phase flow conditions using Coriolis flowmeters. Three advanced input variable selection methods, including partial mutual information (PMI), genetic algorithm-artificial neural network (GA-ANN) and tree-based iterative input selection (IIS), are applied in this study. Typical data-driven models incorporating support vector machine (SVM) are established individually based on the input candidates resulting from the selection methods. The validity of the selection outcomes is assessed through an output performance comparison of the SVM-based data-driven models and sensitivity analysis. The validation and analysis results suggest that the input variables selected by the PMI algorithm provide more effective information for the models to measure liquid mass flowrate, while the IIS algorithm provides fewer but more effective variables for the models to predict gas volume fraction.
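
    As a simplified stand-in for the PMI step (plain rather than partial mutual information, which additionally conditions on already-selected inputs), candidate inputs can be ranked by their mutual information with the target; everything below is an invented illustration, not flowmeter data.

```python
# Hedged sketch: mutual-information ranking of candidate model inputs.
import numpy as np
from sklearn.feature_selection import mutual_info_regression

rng = np.random.default_rng(10)
n = 400
# Invented stand-ins for Coriolis flowmeter internal variables.
X = np.column_stack([
    rng.normal(size=n),       # observed density drop
    rng.normal(size=n),       # oscillation damping
    rng.normal(size=n),       # apparent mass flow
    rng.normal(size=n),       # fluid temperature (irrelevant by construction)
])
liquid_flow = 0.8 * X[:, 2] + 0.3 * X[:, 0] + rng.normal(scale=0.2, size=n)

mi = mutual_info_regression(X, liquid_flow, random_state=0)
ranking = np.argsort(mi)[::-1]
print("inputs ranked by mutual information:", ranking, np.round(mi[ranking], 3))
```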

  4. Parameter and model uncertainty in a life-table model for fine particles (PM2.5): a statistical modeling study

    PubMed Central

    Tainio, Marko; Tuomisto, Jouni T; Hänninen, Otto; Ruuskanen, Juhani; Jantunen, Matti J; Pekkanen, Juha

    2007-01-01

    Background The estimation of health impacts often involves uncertain input variables and assumptions which have to be incorporated into the model structure. These uncertainties may have significant effects on the results obtained with the model, and, thus, on decision making. Fine particles (PM2.5) are believed to cause major health impacts, and, consequently, uncertainties in their health impact assessment have clear relevance to policy-making. We studied the effects of various uncertain input variables by building a life-table model for fine particles. Methods Life-expectancy of the Helsinki metropolitan area population and the change in life-expectancy due to fine particle exposures were predicted using a life-table model. A number of parameter and model uncertainties were estimated. Sensitivity analysis for input variables was performed by calculating rank-order correlations between input and output variables. The studied model uncertainties were (i) plausibility of mortality outcomes and (ii) lag, and the parameter uncertainties were (iii) exposure-response coefficients for different mortality outcomes, and (iv) exposure estimates for different age groups. The monetary value of the years-of-life-lost and the relative importance of the uncertainties related to monetary valuation were predicted to compare the relative importance of the monetary valuation against the health effect uncertainties. Results The magnitude of the health effect costs depended mostly on the discount rate, the exposure-response coefficient, and the plausibility of cardiopulmonary mortality. Other mortality outcomes (lung cancer, other non-accidental and infant mortality) and lag had only minor impact on the output. The results highlight the importance of the uncertainties associated with cardiopulmonary mortality in fine particle impact assessment when compared with other uncertainties. Conclusion When estimating life-expectancy, the estimates used for the cardiopulmonary exposure-response coefficient, discount rate, and plausibility require careful assessment, while complicated lag estimates can be omitted without this having any major effect on the results. PMID:17714598

  5. Parameter and model uncertainty in a life-table model for fine particles (PM2.5): a statistical modeling study.

    PubMed

    Tainio, Marko; Tuomisto, Jouni T; Hänninen, Otto; Ruuskanen, Juhani; Jantunen, Matti J; Pekkanen, Juha

    2007-08-23

    The estimation of health impacts often involves uncertain input variables and assumptions which have to be incorporated into the model structure. These uncertainties may have significant effects on the results obtained with the model, and, thus, on decision making. Fine particles (PM2.5) are believed to cause major health impacts, and, consequently, uncertainties in their health impact assessment have clear relevance to policy-making. We studied the effects of various uncertain input variables by building a life-table model for fine particles. Life-expectancy of the Helsinki metropolitan area population and the change in life-expectancy due to fine particle exposures were predicted using a life-table model. A number of parameter and model uncertainties were estimated. Sensitivity analysis for input variables was performed by calculating rank-order correlations between input and output variables. The studied model uncertainties were (i) plausibility of mortality outcomes and (ii) lag, and the parameter uncertainties were (iii) exposure-response coefficients for different mortality outcomes, and (iv) exposure estimates for different age groups. The monetary value of the years-of-life-lost and the relative importance of the uncertainties related to monetary valuation were predicted to compare the relative importance of the monetary valuation against the health effect uncertainties. The magnitude of the health effect costs depended mostly on the discount rate, the exposure-response coefficient, and the plausibility of cardiopulmonary mortality. Other mortality outcomes (lung cancer, other non-accidental and infant mortality) and lag had only minor impact on the output. The results highlight the importance of the uncertainties associated with cardiopulmonary mortality in fine particle impact assessment when compared with other uncertainties. When estimating life-expectancy, the estimates used for the cardiopulmonary exposure-response coefficient, discount rate, and plausibility require careful assessment, while complicated lag estimates can be omitted without this having any major effect on the results.
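
    A hedged sketch of the sensitivity method named in these two records: sample the uncertain inputs, run the model, and compute rank-order (Spearman) correlations between each input and the output. The stand-in "model" and distributions below are invented, not the actual life-table model.

```python
# Hedged sketch: rank-order correlation sensitivity analysis.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(7)
N = 10_000
inputs = {
    "exposure_response": rng.lognormal(-7.0, 0.4, N),   # illustrative units
    "pm25_exposure":     rng.normal(9.0, 1.5, N),       # ug/m3
    "discount_rate":     rng.uniform(0.0, 0.05, N),
    "lag_years":         rng.uniform(0.0, 15.0, N),
}

# Stand-in for the change in life expectancy: the effect shrinks with
# discounting applied over the lag period.
output = (inputs["exposure_response"] * inputs["pm25_exposure"]
          * np.exp(-inputs["discount_rate"] * inputs["lag_years"]))

for name, values in inputs.items():
    rho = spearmanr(values, output).correlation
    print(f"{name:18s} rank correlation with output: {rho:+.2f}")
```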

  6. Data-driven process decomposition and robust online distributed modelling for large-scale processes

    NASA Astrophysics Data System (ADS)

    Shu, Zhang; Lijuan, Li; Lijuan, Yao; Shipin, Yang; Tao, Zou

    2018-02-01

    With increasing attention to networked control, system decomposition and distributed models are of significant importance in the implementation of model-based control strategies. In this paper, a data-driven system decomposition and online distributed subsystem modelling algorithm is proposed for large-scale chemical processes. The key controlled variables are first partitioned by the affinity propagation clustering algorithm into several clusters. Each cluster can be regarded as a subsystem. Then the inputs of each subsystem are selected by offline canonical correlation analysis between all process variables and its controlled variables. Process decomposition is then realised after the screening of input and output variables. Once the system decomposition is finished, online subsystem modelling can be carried out by recursively renewing the samples block-wise. The proposed algorithm was applied to the Tennessee Eastman process and its validity was verified.
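
    A hedged sketch of the two decomposition steps: cluster the controlled variables with affinity propagation to form subsystems, then score candidate inputs for each subsystem by canonical correlation. The synthetic process below and the "top 3 inputs" rule are assumptions for illustration.

```python
# Hedged sketch: affinity-propagation decomposition plus CCA input screening.
import numpy as np
from sklearn.cluster import AffinityPropagation
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(8)
T, n_cv, n_mv = 500, 6, 10                  # samples, controlled vars, candidate inputs
U = rng.normal(size=(T, n_mv))              # process input variables
Y = np.hstack([U[:, :3] @ rng.normal(size=(3, 3)),    # CVs 0-2 driven by inputs 0-2
               U[:, 3:6] @ rng.normal(size=(3, 3))])  # CVs 3-5 driven by inputs 3-5
Y += 0.1 * rng.normal(size=Y.shape)

# Cluster controlled variables on their pairwise correlation pattern.
labels = AffinityPropagation(random_state=0).fit_predict(np.corrcoef(Y.T))

for k in np.unique(labels):
    Yk = Y[:, labels == k]
    scores = []
    for j in range(n_mv):                   # canonical correlation of each input with Yk
        cca = CCA(n_components=1).fit(U[:, [j]], Yk)
        u_c, y_c = cca.transform(U[:, [j]], Yk)
        scores.append(abs(np.corrcoef(u_c[:, 0], y_c[:, 0])[0, 1]))
    top = sorted(np.argsort(scores)[-3:])
    print(f"subsystem {k}: CVs {np.flatnonzero(labels == k)}, selected inputs {top}")
```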

  7. The impact of 14-nm photomask uncertainties on computational lithography solutions

    NASA Astrophysics Data System (ADS)

    Sturtevant, John; Tejnil, Edita; Lin, Tim; Schultze, Steffen; Buck, Peter; Kalk, Franklin; Nakagawa, Kent; Ning, Guoxiang; Ackmann, Paul; Gans, Fritz; Buergel, Christian

    2013-04-01

    Computational lithography solutions rely upon accurate process models to faithfully represent the imaging system output for a defined set of process and design inputs. These models, which must balance accuracy demands with simulation runtime boundary conditions, rely upon the accurate representation of multiple parameters associated with the scanner and the photomask. While certain system input variables, such as scanner numerical aperture, can be empirically tuned to wafer CD data over a small range around the presumed set point, it can be dangerous to do so since CD errors can alias across multiple input variables. Therefore, many input variables for simulation are based upon designed or recipe-requested values or independent measurements. It is known, however, that certain measurement methodologies, while precise, can have significant inaccuracies. Additionally, there are known errors associated with the representation of certain system parameters. With shrinking total CD control budgets, appropriate accounting for all sources of error becomes more important, and the cumulative consequence of input errors to the computational lithography model can become significant. In this work, we examine with a simulation sensitivity study the impact of errors in the representation of photomask properties, including CD bias, corner rounding, refractive index, thickness, and sidewall angle. The factors that are most critical to represent accurately in the model are cataloged. CD bias values are based on state-of-the-art mask manufacturing data, while the variations of the other variables are postulated, highlighting the need for improved metrology and awareness.

  8. On stability of the solutions of inverse problem for determining the right-hand side of a degenerate parabolic equation with two independent variables

    NASA Astrophysics Data System (ADS)

    Kamynin, V. L.; Bukharova, T. I.

    2017-01-01

    We prove estimates of stability with respect to perturbations of the input data for the solutions of inverse problems for degenerate parabolic equations with unbounded coefficients. An important feature of these estimates is that their constants are written out explicitly in terms of the input data of the problem.

  9. Chapter 13: Assessing Persistence and Other Evaluation Issues Cross-Cutting Protocol. The Uniform Methods Project: Methods for Determining Energy Efficiency Savings for Specific Measures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kurnik, Charles W; Violette, Daniel M.

    Addressing other evaluation issues that have been raised in the context of energy efficiency programs, this chapter focuses on methods used to address the persistence of energy savings, which is an important input to the benefit/cost analysis of energy efficiency programs and portfolios. In addition to discussing 'persistence' (which refers to the stream of benefits over time from an energy efficiency measure or program), this chapter provides a summary treatment of these issues: synergies across programs, rebound, dual baselines, and errors in variables (the measurement and/or accuracy of input variables to the evaluation).

  10. Soil organic carbon dynamics jointly controlled by climate, carbon inputs, soil properties and soil carbon fractions.

    PubMed

    Luo, Zhongkui; Feng, Wenting; Luo, Yiqi; Baldock, Jeff; Wang, Enli

    2017-10-01

    Soil organic carbon (SOC) dynamics are regulated by the complex interplay of climatic, edaphic and biotic conditions. However, the interrelation of SOC and these drivers and their potential connection networks are rarely assessed quantitatively. Using observations of SOC dynamics with detailed soil properties from 90 field trials at 28 sites under different agroecosystems across the Australian cropping regions, we investigated the direct and indirect effects of climate, soil properties, carbon (C) inputs and soil C pools (a total of 17 variables) on the SOC change rate (rC, Mg C ha⁻¹ yr⁻¹). Among these variables, we found that the most influential on rC were the average C input amount, annual precipitation, and the total SOC stock at the beginning of the trials. Overall, C inputs (including C input amount and pasture frequency in the crop rotation system) accounted for 27% of the relative influence on rC, followed by climate (including precipitation and temperature) at 25%, soil C pools (including pool size and composition) at 24% and soil properties (such as cation exchange capacity, clay content and bulk density) at 24%. Path analysis identified a network of intercorrelations of climate, soil properties, C inputs and soil C pools in determining rC. The direct correlation of rC with climate was significantly weakened when removing the effects of soil properties and C pools, and vice versa. These results reveal the relative importance of climate, soil properties, C inputs and C pools and their complex interconnections in regulating SOC dynamics. Ignoring the impact of changes in soil properties, C pool composition and C input (quantity and quality) on SOC dynamics is likely one of the main sources of uncertainty in SOC predictions from process-based SOC models. © 2017 John Wiley & Sons Ltd.

  11. Role of Updraft Velocity in Temporal Variability of Global Cloud Hydrometeor Number

    NASA Technical Reports Server (NTRS)

    Sullivan, Sylvia C.; Lee, Dong Min; Oreopoulos, Lazaros; Nenes, Athanasios

    2016-01-01

    Understanding how dynamical and aerosol inputs affect the temporal variability of hydrometeor formation in climate models will help to explain sources of model diversity in cloud forcing, to provide robust comparisons with data, and, ultimately, to reduce the uncertainty in estimates of the aerosol indirect effect. This variability attribution can be done at various spatial and temporal resolutions with metrics derived from online adjoint sensitivities of droplet and crystal number to relevant inputs. Such metrics are defined and calculated from simulations using the NASA Goddard Earth Observing System Model, Version 5 (GEOS-5) and the National Center for Atmospheric Research Community Atmosphere Model Version 5.1 (CAM5.1). Input updraft velocity fluctuations can explain as much as 48% of temporal variability in output ice crystal number and 61% in droplet number in GEOS-5 and up to 89% of temporal variability in output ice crystal number in CAM5.1. In both models, this vertical velocity attribution depends strongly on altitude. Despite its importance for hydrometeor formation, simulated vertical velocity distributions are rarely evaluated against observations due to the sparsity of relevant data. Coordinated effort by the atmospheric community to develop more consistent, observationally based updraft treatments will help to close this knowledge gap.

  12. Role of updraft velocity in temporal variability of global cloud hydrometeor number

    DOE PAGES

    Sullivan, Sylvia C.; Lee, Dongmin; Oreopoulos, Lazaros; ...

    2016-05-16

    Understanding how dynamical and aerosol inputs affect the temporal variability of hydrometeor formation in climate models will help to explain sources of model diversity in cloud forcing, to provide robust comparisons with data, and, ultimately, to reduce the uncertainty in estimates of the aerosol indirect effect. This variability attribution can be done at various spatial and temporal resolutions with metrics derived from online adjoint sensitivities of droplet and crystal number to relevant inputs. Such metrics are defined and calculated from simulations using the NASA Goddard Earth Observing System Model, Version 5 (GEOS-5) and the National Center for Atmospheric Research Community Atmosphere Model Version 5.1 (CAM5.1). Input updraft velocity fluctuations can explain as much as 48% of temporal variability in output ice crystal number and 61% in droplet number in GEOS-5 and up to 89% of temporal variability in output ice crystal number in CAM5.1. In both models, this vertical velocity attribution depends strongly on altitude. Despite its importance for hydrometeor formation, simulated vertical velocity distributions are rarely evaluated against observations due to the sparsity of relevant data. Finally, coordinated effort by the atmospheric community to develop more consistent, observationally based updraft treatments will help to close this knowledge gap.

  13. Role of updraft velocity in temporal variability of global cloud hydrometeor number

    NASA Astrophysics Data System (ADS)

    Sullivan, Sylvia C.; Lee, Dongmin; Oreopoulos, Lazaros; Nenes, Athanasios

    2016-05-01

    Understanding how dynamical and aerosol inputs affect the temporal variability of hydrometeor formation in climate models will help to explain sources of model diversity in cloud forcing, to provide robust comparisons with data, and, ultimately, to reduce the uncertainty in estimates of the aerosol indirect effect. This variability attribution can be done at various spatial and temporal resolutions with metrics derived from online adjoint sensitivities of droplet and crystal number to relevant inputs. Such metrics are defined and calculated from simulations using the NASA Goddard Earth Observing System Model, Version 5 (GEOS-5) and the National Center for Atmospheric Research Community Atmosphere Model Version 5.1 (CAM5.1). Input updraft velocity fluctuations can explain as much as 48% of temporal variability in output ice crystal number and 61% in droplet number in GEOS-5 and up to 89% of temporal variability in output ice crystal number in CAM5.1. In both models, this vertical velocity attribution depends strongly on altitude. Despite its importance for hydrometeor formation, simulated vertical velocity distributions are rarely evaluated against observations due to the sparsity of relevant data. Coordinated effort by the atmospheric community to develop more consistent, observationally based updraft treatments will help to close this knowledge gap.

  14. When Can Information from Ordinal Scale Variables Be Integrated?

    ERIC Educational Resources Information Center

    Kemp, Simon; Grace, Randolph C.

    2010-01-01

    Many theoretical constructs of interest to psychologists are multidimensional and derive from the integration of several input variables. We show that input variables that are measured on ordinal scales cannot be combined to produce a stable weakly ordered output variable that allows trading off the input variables. Instead a partial order is…

  15. Rotary ultrasonic machining of CFRP: a mechanistic predictive model for cutting force.

    PubMed

    Cong, W L; Pei, Z J; Sun, X; Zhang, C L

    2014-02-01

    Cutting force is one of the most important output variables in rotary ultrasonic machining (RUM) of carbon fiber reinforced plastic (CFRP) composites. Many experimental investigations on cutting force in RUM of CFRP have been reported. However, in the literature, there are no cutting force models for RUM of CFRP. This paper develops a mechanistic predictive model for cutting force in RUM of CFRP. The material removal mechanism of CFRP in RUM has been analyzed first. The model is based on the assumption that brittle fracture is the dominant mode of material removal. CFRP micromechanical analysis has been conducted to represent CFRP as an equivalent homogeneous material to obtain the mechanical properties of CFRP from its components. Based on this model, relationships between input variables (including ultrasonic vibration amplitude, tool rotation speed, feedrate, abrasive size, and abrasive concentration) and cutting force can be predicted. The relationships between input variables and important intermediate variables (indentation depth, effective contact time, and maximum impact force of single abrasive grain) have been investigated to explain predicted trends of cutting force. Experiments are conducted to verify the model, and experimental results agree well with predicted trends from this model. Copyright © 2013 Elsevier B.V. All rights reserved.

  16. Sensitivity analysis of a sound absorption model with correlated inputs

    NASA Astrophysics Data System (ADS)

    Chai, W.; Christen, J.-L.; Zine, A.-M.; Ichchou, M.

    2017-04-01

    Sound absorption in porous media is a complex phenomenon, which is usually addressed with homogenized models depending on macroscopic parameters. Since these parameters emerge from the structure at microscopic scale, they may be correlated. This paper deals with sensitivity analysis methods for a sound absorption model with correlated inputs. Specifically, the Johnson-Champoux-Allard (JCA) model is chosen as the objective model, with correlation effects generated by a secondary micro-macro semi-empirical model. To deal with this case, a relatively new sensitivity analysis method, the Fourier Amplitude Sensitivity Test with Correlation design (FASTC), based on Iman's transform, is applied. This method requires a priori information such as the variables' marginal distribution functions and their correlation matrix. The results are compared to the Correlation Ratio Method (CRM) for reference and validation. The distribution of the macroscopic variables arising from the microstructure, as well as their correlation matrix, are studied. Finally, the test results show that correlation has a very important impact on the outcome of sensitivity analysis. The effect of correlation strength among input variables on the sensitivity analysis is also assessed.
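
    As an illustration of the Iman transform underlying FASTC, the sketch below induces a target rank correlation on independently sampled input marginals via the Iman-Conover reordering. The two marginals and the 0.7 correlation are hypothetical placeholders, not values from the JCA study.

    ```python
    import numpy as np
    from scipy.stats import norm, rankdata

    def iman_conover(X, C, seed=0):
        """Induce target rank correlation C on the columns of sample matrix X
        (n x k) by reordering values within each column (Iman-Conover)."""
        rng = np.random.default_rng(seed)
        n, k = X.shape
        scores = norm.ppf(np.arange(1, n + 1) / (n + 1))     # van der Waerden scores
        S = np.column_stack([rng.permutation(scores) for _ in range(k)])
        L = np.linalg.cholesky(C)                            # target structure
        E = np.linalg.cholesky(np.corrcoef(S, rowvar=False)) # sampling structure
        T = S @ np.linalg.inv(E).T @ L.T                     # scores with corr ~ C
        Xc = np.empty_like(X)
        for j in range(k):                                   # match ranks of T
            order = rankdata(T[:, j]).astype(int) - 1
            Xc[:, j] = np.sort(X[:, j])[order]
        return Xc

    # hypothetical marginals for two correlated JCA-type parameters
    rng = np.random.default_rng(1)
    X = np.column_stack([rng.lognormal(0.0, 0.3, 1000),      # e.g. tortuosity
                         rng.uniform(1e4, 5e4, 1000)])       # e.g. flow resistivity
    Xc = iman_conover(X, C=np.array([[1.0, 0.7], [0.7, 1.0]]))
    print(np.corrcoef(rankdata(Xc[:, 0]), rankdata(Xc[:, 1]))[0, 1])
    ```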

  17. Assessing the importance of rainfall uncertainty on hydrological models with different spatial and temporal scale

    NASA Astrophysics Data System (ADS)

    Nossent, Jiri; Pereira, Fernando; Bauwens, Willy

    2015-04-01

    Precipitation is one of the key inputs for hydrological models. As long as the values of the hydrological model parameters are fixed, a variation of the rainfall input is expected to induce a change in the model output. Given the increased awareness of uncertainty in rainfall records, it becomes more important to understand the impact of this input-output dynamic. Yet modellers often still try to mimic the observed flow, however far the employed records may deviate from the actual rainfall, by recklessly adapting the model parameter values. But is it actually possible to vary the model parameter values in such a way that a certain (observed) model output can be generated from inaccurate rainfall inputs? In other words, how important is the rainfall uncertainty for the model output relative to the model parameter importance? To address this question, we apply the Sobol' sensitivity analysis method to assess and compare the importance of the rainfall uncertainty and the model parameters for the output of a hydrological model. In order to treat the regular model parameters and the input uncertainty in the same way, and to allow a comparison of their influence, a possible approach is to represent the rainfall uncertainty by a parameter. To this end, we apply so-called rainfall multipliers to hydrologically independent storm events, as a probabilistic parameter representation of the possible rainfall variation. As available rainfall records are very often point measurements at a discrete time step (hourly, daily, monthly,…), they contain uncertainty due to a latent lack of spatial and temporal variability. The influence of the latter variability can also differ between hydrological models of different spatial and temporal scale. Therefore, we perform the sensitivity analyses on a semi-distributed model (SWAT) and a lumped model (NAM). The assessment and comparison of the importance of the rainfall uncertainty and the model parameters is achieved by considering different scenarios for the included parameters and the state of the models.
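
    A minimal sketch of a Sobol' analysis in which a storm-event rainfall multiplier is treated as one more parameter alongside the regular model parameters, as the abstract describes. The toy runoff function, parameter names, and bounds are invented for illustration; the SALib calls shown are the library's long-standing saltelli/sobol interface (recent versions also offer SALib.sample.sobol).

    ```python
    import numpy as np
    from SALib.sample import saltelli
    from SALib.analyze import sobol

    # storm-event rainfall multiplier treated as one more "parameter"
    problem = {
        "num_vars": 3,
        "names": ["storage_k", "runoff_coeff", "rain_multiplier"],
        "bounds": [[1.0, 10.0], [0.1, 0.9], [0.7, 1.3]],
    }

    def toy_runoff(params, rain=50.0):
        k, c, m = params                       # placeholder rainfall-runoff model
        return c * (m * rain) * (1.0 - np.exp(-1.0 / k))

    X = saltelli.sample(problem, 1024)         # Sobol' sampling design
    Y = np.array([toy_runoff(x) for x in X])
    Si = sobol.analyze(problem, Y)
    print(dict(zip(problem["names"], np.round(Si["S1"], 3))))  # first-order
    print(dict(zip(problem["names"], np.round(Si["ST"], 3))))  # total-order
    ```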

  18. Nonlinear Dynamic Models in Advanced Life Support

    NASA Technical Reports Server (NTRS)

    Jones, Harry

    2002-01-01

    To facilitate analysis, ALS systems are often assumed to be linear and time invariant, but they usually have important nonlinear and dynamic aspects. Nonlinear dynamic behavior can be caused by time varying inputs, changes in system parameters, nonlinear system functions, closed loop feedback delays, and limits on buffer storage or processing rates. Dynamic models are usually cataloged according to the number of state variables. The simplest dynamic models are linear, using only integration, multiplication, addition, and subtraction of the state variables. A general linear model with only two state variables can produce all the possible dynamic behavior of linear systems with many state variables, including stability, oscillation, or exponential growth and decay. Linear systems can be described using mathematical analysis. Nonlinear dynamics can be fully explored only by computer simulations of models. Unexpected behavior is produced by simple models having only two or three state variables with simple mathematical relations between them. Closed loop feedback delays are a major source of system instability. Exceeding limits on buffer storage or processing rates forces systems to change operating mode. Different equilibrium points may be reached from different initial conditions. Instead of one stable equilibrium point, the system may have several equilibrium points, oscillate at different frequencies, or even behave chaotically, depending on the system inputs and initial conditions. The frequency spectrum of an output oscillation may contain harmonics and the sums and differences of input frequencies, but it may also contain a stable limit cycle oscillation not related to input frequencies. We must investigate the nonlinear dynamic aspects of advanced life support systems to understand and counter undesirable behavior.
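
    As a concrete example of the behavior described above, the sketch below integrates the Van der Pol oscillator, a classic two-state nonlinear system: trajectories from different initial conditions converge to the same stable limit cycle, something no linear two-state model can produce. The parameter values are arbitrary.

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    def van_der_pol(t, state, mu=2.0):
        """Two-state nonlinear system with a stable limit cycle."""
        x, v = state
        return [v, mu * (1.0 - x**2) * v - x]

    for x0 in [(0.1, 0.0), (3.0, 0.0)]:             # two different initial conditions
        sol = solve_ivp(van_der_pol, (0.0, 50.0), x0, max_step=0.05)
        amp = np.abs(sol.y[0][sol.t > 40.0]).max()  # late-time oscillation amplitude
        print(f"start {x0} -> limit-cycle amplitude ~ {amp:.2f}")
    ```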

  19. Multiple stressors, nonlinear effects and the implications of climate change impacts on marine coastal ecosystems.

    PubMed

    Hewitt, Judi E; Ellis, Joanne I; Thrush, Simon F

    2016-08-01

    Global climate change will undoubtedly be a pressure on coastal marine ecosystems, affecting not only species distributions and physiology but also ecosystem functioning. In the coastal zone, the environmental variables that may drive ecological responses to climate change include temperature, wave energy, upwelling events and freshwater inputs, and all act and interact at a variety of spatial and temporal scales. To date, we have a poor understanding of how climate-related environmental changes may affect coastal marine ecosystems or which environmental variables are likely to produce priority effects. Here we use time series data (17 years) of coastal benthic macrofauna to investigate responses to a range of climate-influenced variables including sea-surface temperature, southern oscillation indices (SOI, Z4), wind-wave exposure, freshwater inputs and rainfall. We investigate responses from the abundances of individual species to abundances of functional traits and test whether species that are near the edge of their tolerance to another stressor (in this case sedimentation) may exhibit stronger responses. The responses we observed were all nonlinear and some exhibited thresholds. While temperature was most frequently an important predictor, wave exposure and ENSO-related variables were also frequently important and most ecological variables responded to interactions between environmental variables. There were also indications that species sensitive to another stressor responded more strongly to weaker climate-related environmental change at the stressed site than the unstressed site. The observed interactions between climate variables, effects on key species or functional traits, and synergistic effects of additional anthropogenic stressors have important implications for understanding and predicting the ecological consequences of climate change to coastal ecosystems. © 2015 John Wiley & Sons Ltd.

  20. Hourly air pollution concentrations and their important predictors over Houston, Texas using deep neural networks: case study of DISCOVER-AQ time period

    NASA Astrophysics Data System (ADS)

    Eslami, E.; Choi, Y.; Roy, A.

    2017-12-01

    Air quality forecasting carried out by chemical transport models often shows significant error. This study uses a deep-learning approach over the Houston-Galveston-Brazoria (HGB) area to overcome this forecasting challenge for the DISCOVER-AQ period (September 2013). Two approaches were utilized: a deep neural network (DNN) using a Multi-Layer Perceptron (MLP) and a Restricted Boltzmann Machine (RBM). The proposed approaches analyzed input data by identifying features abstracted from the previous layer using a stepwise method. The approaches predicted hourly ozone and PM in September 2013 using several predictors from the prior three days, including wind fields, temperature, relative humidity, cloud fraction, and precipitation, along with PM, ozone, and NOx concentrations. Model-measurement comparisons for available monitoring sites reported Indexes of Agreement (IOA) of around 0.95 for both DNN and RBM. A standard artificial neural network (ANN) with similar architecture (IOA = 0.90) performed more poorly than the deep networks, clearly demonstrating the superiority of the deep approaches. Additionally, each network (both deep and standard) performed significantly better than a previous CMAQ study, which showed an IOA of less than 0.80. The most influential input variables were identified using their associated weights, which represented the sensitivity of ozone to the input parameters. The results indicate that deep learning approaches can achieve more accurate ozone forecasting and identify the important input variables for ozone predictions in metropolitan areas.
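
    A minimal sketch of the kind of MLP regression and Index of Agreement (IOA, Willmott's d) evaluation the abstract reports, using synthetic stand-in predictors rather than the DISCOVER-AQ data; the architecture is a placeholder, not the study's.

    ```python
    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    def index_of_agreement(obs, pred):
        """Willmott's index of agreement (the IOA quoted in the abstract)."""
        ob = obs.mean()
        return 1.0 - np.sum((pred - obs) ** 2) / np.sum(
            (np.abs(pred - ob) + np.abs(obs - ob)) ** 2)

    rng = np.random.default_rng(1)
    n = 2000
    X = rng.normal(size=(n, 6))   # stand-ins for lagged met/chemistry predictors
    y = X[:, 0] - 0.5 * X[:, 1]**2 + 0.3 * X[:, 2] * X[:, 3] + 0.1 * rng.normal(size=n)

    model = make_pipeline(StandardScaler(),
                          MLPRegressor(hidden_layer_sizes=(64, 64),
                                       max_iter=2000, random_state=0))
    model.fit(X[:1500], y[:1500])                       # train on the first 75%
    print("IOA:", round(index_of_agreement(y[1500:], model.predict(X[1500:])), 3))
    ```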

  1. A cortical motor nucleus drives the basal ganglia-recipient thalamus in singing birds

    PubMed Central

    Goldberg, Jesse H.

    2012-01-01

    The pallido-recipient thalamus transmits information from the basal ganglia (BG) to the cortex and plays a critical role in motor initiation and learning. Thalamic activity is strongly inhibited by pallidal inputs from the BG, but the role of non-pallidal inputs, such as excitatory inputs from cortex, is unclear. We have recorded simultaneously from presynaptic pallidal axon terminals and postsynaptic thalamocortical neurons in a BG-recipient thalamic nucleus necessary for vocal variability and learning in zebra finches. We found that song-locked rate modulations in the thalamus could not be explained by pallidal inputs alone, and persisted following pallidal lesion. Instead, thalamic activity was likely driven by inputs from a motor ‘cortical’ nucleus also necessary for singing. These findings suggest a role for cortical inputs to the pallido-recipient thalamus in driving premotor signals important for exploratory behavior and learning. PMID:22327474

  2. Stochastic analysis of multiphase flow in porous media: II. Numerical simulations

    NASA Astrophysics Data System (ADS)

    Abin, A.; Kalurachchi, J. J.; Kemblowski, M. W.; Chang, C.-M.

    1996-08-01

    The first paper (Chang et al., 1995b) of this two-part series described the stochastic analysis using a spectral/perturbation approach to analyze steady state two-phase (water and oil) flow in a liquid-unsaturated, three-fluid-phase porous medium. In this paper, the results of the numerical simulations are compared with the closed-form expressions obtained using the perturbation approach. We present the solution to the one-dimensional, steady-state oil and water flow equations. The stochastic input processes are the spatially correlated log k, where k is the intrinsic permeability, and the soil retention parameter α. These solutions are subsequently used in the numerical simulations to estimate the statistical properties of the key output processes. The comparison between the results of the perturbation analysis and the numerical simulations showed good agreement between the two methods over a wide range of log k variability, for three different combinations of the input stochastic processes of log k and the soil parameter α. The results clearly demonstrated the importance of considering the spatial variability of key subsurface properties under a variety of physical scenarios. The variability of both capillary pressure and saturation is affected by the type of input stochastic process used to represent the spatial variability. The results also demonstrated the applicability of perturbation theory in predicting the system variability and defining effective fluid properties through the ergodic assumption.
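
    A minimal sketch of generating one stochastic input of the kind used here: a spatially correlated Gaussian field for log k with an exponential covariance, sampled by Cholesky factorization. The grid, correlation length, and variance are hypothetical.

    ```python
    import numpy as np

    x = np.linspace(0.0, 10.0, 200)                 # spatial grid (m)
    corr_len, sigma, mean_logk = 1.0, 0.8, -27.0    # hypothetical field statistics
    C = sigma**2 * np.exp(-np.abs(x[:, None] - x[None, :]) / corr_len)
    L = np.linalg.cholesky(C + 1e-10 * np.eye(x.size))   # jitter for stability
    logk = mean_logk + L @ np.random.default_rng(0).standard_normal(x.size)
    print(logk[:5])                                 # one realization of log k
    ```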

  3. Development of a prototype land use model for statewide transportation planning activities.

    DOT National Transportation Integrated Search

    2011-11-30

    Future land use forecasting is an important input to transportation planning modeling. Traditionally, land use is allocated to individual : traffic analysis zones (TAZ) based on variables such as the amount of vacant land, zoning restriction, land us...

  4. Spike-Threshold Adaptation Predicted by Membrane Potential Dynamics In Vivo

    PubMed Central

    Fontaine, Bertrand; Peña, José Luis; Brette, Romain

    2014-01-01

    Neurons encode information in sequences of spikes, which are triggered when their membrane potential crosses a threshold. In vivo, the spiking threshold displays large variability, suggesting that threshold dynamics have a profound influence on how the combined input of a neuron is encoded in its spiking. Threshold variability could be explained by adaptation to the membrane potential. However, it could also be the case that most threshold variability reflects noise and processes other than threshold adaptation. Here, we investigated threshold variation in the responses of auditory neurons recorded in vivo in barn owls. We found that spike threshold is quantitatively predicted by a model in which the threshold adapts, tracking the membrane potential at a short timescale. As a result, in these neurons, slow voltage fluctuations do not contribute to spiking because they are filtered by threshold adaptation. More importantly, these neurons can only respond to input spikes arriving together on a millisecond timescale. These results demonstrate that fast adaptation to the membrane potential captures spike threshold variability in vivo. PMID:24722397
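
    A schematic of the mechanism described (an assumed functional form, not the paper's fitted model): the spike threshold sits a fixed offset above a low-pass filtered copy of the membrane potential, so slow fluctuations are filtered out and only fast depolarizations cross threshold.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    dt, tau, offset = 0.1, 5.0, 3.0                  # ms, ms, mV (assumed values)
    t = np.arange(0.0, 500.0, dt)
    # membrane potential: slow 4 mV wave plus fast noise (toy signal)
    V = -60.0 + 4.0 * np.sin(2 * np.pi * t / 100.0) + rng.normal(0.0, 1.5, t.size)

    v_lp, crossings = V[0], 0
    for i in range(t.size):
        v_lp += dt * (V[i] - v_lp) / tau             # threshold tracks V slowly
        theta = v_lp + offset                        # adaptive spike threshold
        crossings += V[i] >= theta
    print(f"{crossings} fast depolarizations crossed the adaptive threshold;")
    print("the slow 4 mV wave alone never does, since the threshold tracks it.")
    ```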

  5. Analyses of the most influential factors for vibration monitoring of planetary power transmissions in pellet mills by adaptive neuro-fuzzy technique

    NASA Astrophysics Data System (ADS)

    Milovančević, Miloš; Nikolić, Vlastimir; Anđelković, Boban

    2017-01-01

    Vibration-based structural health monitoring is widely recognized as an attractive strategy for early damage detection in civil structures. Vibration monitoring and prediction are important for any system, since they can reveal many otherwise unpredictable behaviors of the system. If vibration monitoring is properly managed, it can ensure economic and safe operations. Potential for further improvement of vibration monitoring lies in the improvement of current control strategies. One option is the introduction of model predictive control. Multistep-ahead predictive models of vibration are a starting point for creating a successful model predictive strategy. For the purpose of this article, predictive models were created for vibration monitoring of planetary power transmissions in pellet mills. The models were developed using a novel method based on ANFIS (adaptive neuro-fuzzy inference system). The aim of this study is to investigate the potential of ANFIS for selecting the most relevant variables for predictive models of vibration monitoring of pellet mill power transmissions. The vibration data are collected by PIC (Programmable Interface Controller) microcontrollers. The goal of predictive vibration monitoring of planetary power transmissions in pellet mills is to indicate deterioration in the vibration of the power transmissions before an actual failure occurs. The ANFIS process for variable selection was implemented in order to detect the predominant variables affecting the prediction of vibration monitoring. It was also used to select the minimal input subset of variables from the initial set of input variables, comprising current and lagged variables (up to 11 steps) of vibration. The obtained results could be used to simplify predictive methods so as to avoid multiple input variables. Models with fewer inputs were preferable because of overfitting between training and testing data. While the obtained results are promising, further work is required in order to obtain results that could be directly applied in practice.

  6. Fuzzy simulation in concurrent engineering

    NASA Technical Reports Server (NTRS)

    Kraslawski, A.; Nystrom, L.

    1992-01-01

    Concurrent engineering is becoming a very important practice in manufacturing. A problem in concurrent engineering is the uncertainty associated with the values of the input variables and operating conditions. The problem discussed in this paper concerns the simulation of processes where the raw materials and the operational parameters possess fuzzy characteristics. The processing of fuzzy input information is performed by the vertex method and the commercial simulation packages POLYMATH and GEMS. The examples are presented to illustrate the usefulness of the method in the simulation of chemical engineering processes.
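
    A minimal sketch of the vertex method referenced above: interval (alpha-cut) uncertainty is propagated by evaluating the model at every vertex of the input hyper-rectangle, which gives exact bounds when the response is monotone in each input over the interval. The process function is a toy placeholder, not a POLYMATH or GEMS model.

    ```python
    import numpy as np
    from itertools import product

    def vertex_method(f, intervals):
        """Evaluate f at every vertex of the input hyper-rectangle; exact
        bounds when f is monotone in each input over the interval."""
        values = [f(*v) for v in product(*intervals)]
        return min(values), max(values)

    def conversion(T, tau):
        """Toy process response: conversion vs temperature (K) and time (s)."""
        k = 1e3 * np.exp(-3000.0 / T)                # toy Arrhenius rate constant
        return 1.0 - np.exp(-k * tau)

    lo, hi = vertex_method(conversion, [(350.0, 370.0), (10.0, 20.0)])
    print(f"conversion interval at this alpha-cut: [{lo:.3f}, {hi:.3f}]")
    ```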

  7. Attributing uncertainty in streamflow simulations due to variable inputs via the Quantile Flow Deviation metric

    NASA Astrophysics Data System (ADS)

    Shoaib, Syed Abu; Marshall, Lucy; Sharma, Ashish

    2018-06-01

    Every model used to characterise a real-world process is affected by uncertainty. Selecting a suitable model is a vital aspect of engineering planning and design. Observation or input errors make the prediction of modelled responses more uncertain. Using a recently developed attribution metric, this study aims at developing a method for analysing variability in model inputs together with model structure variability, in order to quantify their relative contributions in typical hydrological modelling applications. The Quantile Flow Deviation (QFD) metric is used to assess these alternate sources of uncertainty. The Australian Water Availability Project (AWAP) precipitation data for four different Australian catchments are used to analyse the impact of spatial rainfall variability on simulated streamflow variability via the QFD. The QFD metric attributes the variability in flow ensembles to uncertainty associated with the selection of a model structure and input time series. For the case study catchments, the relative contribution of input uncertainty due to rainfall is higher than that due to potential evapotranspiration, and overall input uncertainty is significant compared to model structure and parameter uncertainty. Overall, this study investigates the propagation of input uncertainty in a daily streamflow modelling scenario and demonstrates how input errors manifest across different streamflow magnitudes.

  8. Metamodeling and mapping of nitrate flux in the unsaturated zone and groundwater, Wisconsin, USA

    NASA Astrophysics Data System (ADS)

    Nolan, Bernard T.; Green, Christopher T.; Juckem, Paul F.; Liao, Lixia; Reddy, James E.

    2018-04-01

    Nitrate contamination of groundwater in agricultural areas poses a major challenge to the sustainability of water resources. Aquifer vulnerability models are useful tools that can help resource managers identify areas of concern, but quantifying nitrogen (N) inputs in such models is challenging, especially at large spatial scales. We sought to improve regional nitrate (NO3-) input functions by characterizing unsaturated zone NO3- transport to groundwater through use of surrogate, machine-learning metamodels of a process-based N flux model. The metamodels used boosted regression trees (BRTs) to relate mappable landscape variables to parameters and outputs of a previous "vertical flux method" (VFM) applied at sampled wells in the Fox, Wolf, and Peshtigo (FWP) river basins in northeastern Wisconsin. In this context, the metamodels upscaled the VFM results throughout the region, and the VFM parameters and outputs are the metamodel response variables. The study area encompassed the domain of a detailed numerical model that provided additional predictor variables, including groundwater recharge, to the metamodels. We used a statistical learning framework to test a range of model complexities to identify suitable hyperparameters of the six BRT metamodels corresponding to each response variable of interest: NO3- source concentration factor (which determines the local NO3- input concentration); unsaturated zone travel time; NO3- concentration at the water table in 1980, 2000, and 2020 (three separate metamodels); and NO3- "extinction depth", the eventual steady state depth of the NO3- front. The final metamodels were trained to 129 wells within the active numerical flow model area, and considered 58 mappable predictor variables compiled in a geographic information system (GIS). These metamodels had training and cross-validation testing R2 values of 0.52 - 0.86 and 0.22 - 0.38, respectively, and predictions were compiled as maps of the above response variables. Testing performance was reasonable, considering that we limited the metamodel predictor variables to mappable factors as opposed to using all available VFM input variables. Relationships between metamodel predictor variables and mapped outputs were generally consistent with expectations, e.g. with greater source concentrations and NO3- at the groundwater table in areas of intensive crop use and well drained soils. Shorter unsaturated zone travel times in poorly drained areas likely indicated preferential flow through clay soils, and a tendency for fine grained deposits to collocate with areas of shallower water table. Numerical estimates of groundwater recharge were important in the metamodels and may have been a proxy for N input and redox conditions in the northern FWP, which had shallow predicted NO3- extinction depth. The metamodel results provide proof-of-concept for regional characterization of unsaturated zone NO3- transport processes in a statistical framework based on readily mappable GIS input variables.
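
    A minimal sketch of the statistical-learning framework described: boosted regression trees tuned over a small hyperparameter grid by cross-validation. The synthetic predictors stand in for the 58 mappable GIS variables.

    ```python
    import numpy as np
    from sklearn.ensemble import GradientBoostingRegressor
    from sklearn.model_selection import GridSearchCV

    rng = np.random.default_rng(0)
    n = 500
    X = rng.uniform(size=(n, 8))                    # stand-ins for GIS predictors
    y = 3.0 * X[:, 0] + np.sin(6.0 * X[:, 1]) + 0.5 * rng.normal(size=n)

    search = GridSearchCV(
        GradientBoostingRegressor(random_state=0),
        param_grid={"n_estimators": [100, 300],
                    "learning_rate": [0.01, 0.1],
                    "max_depth": [2, 3, 5]},
        cv=5, scoring="r2")                         # cross-validated tuning
    search.fit(X, y)
    print(search.best_params_, round(search.best_score_, 2))
    ```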

  9. ASSESSING ACCURACY OF NET CHANGE DERIVED FROM LAND COVER MAPS

    EPA Science Inventory

    Net change derived from land-cover maps provides important descriptive information for environmental monitoring and is often used as an input or explanatory variable in environmental models. The sampling design and analysis for assessing net change accuracy differ from traditio...

  10. Effects of Anthropogenic Nitrogen Loading on Riverine Nitrogen Export in the Northeastern USA

    NASA Astrophysics Data System (ADS)

    Boyer, E. W.; Goodale, C. L.; Howarth, R. W.

    2001-05-01

    Human activities have greatly altered the nitrogen (N) cycle, accelerating the rate of N fixation in landscapes and delivery of N to water bodies. To examine the effects of anthropogenic N inputs on riverine N export, we quantified N inputs and riverine N loss for 16 catchments along a latitudinal profile from Maine to Virginia, which encompass a range of climatic variability and are major drainages to the coast of the North Atlantic Ocean. We quantified inputs of N to each catchment: atmospheric deposition, fertilizer application, agricultural and forest biological N fixation, and the net import of N in food and feed. We compared these inputs with N losses from the system in riverine export. The importance of the relative sources varies widely by watershed and is related to land use. Atmospheric deposition was the largest source (>60%) to the forested catchments of northern New England (e.g., Penobscot and Kennebec); import of N in food was the largest source of N to the more populated regions of southern New England (e.g., Charles and Blackstone); and agricultural inputs were the dominant N sources in the Mid-Atlantic region (e.g., Schuylkill and Potomac). Total N inputs to each catchment increased with percent cover in agriculture and urban land, and decreased with percent forest. Over the combined area of the catchments, net atmospheric deposition was the largest single source input (34%), followed by imports of N in food and feed (24%), fixation in agricultural lands (21%), fertilizer use (15%), and fixation in forests (6%). Riverine export of N is well correlated with N inputs, but it accounts for only a fraction (28%) of the total N inputs. This work provides an understanding of the sources of N in landscapes, and highlights how human activities impact N cycling in the northeast region.

  11. Propagation of uncertainty in nasal spray in vitro performance models using Monte Carlo simulation: Part II. Error propagation during product performance modeling.

    PubMed

    Guo, Changning; Doub, William H; Kauffman, John F

    2010-08-01

    Monte Carlo simulations were applied to investigate the propagation of uncertainty in both input variables and response measurements on model prediction for nasal spray product performance design of experiment (DOE) models in the first part of this study, with an initial assumption that the models perfectly represent the relationship between input variables and the measured responses. In this article, we discard the initial assumption, and extended the Monte Carlo simulation study to examine the influence of both input variable variation and product performance measurement variation on the uncertainty in DOE model coefficients. The Monte Carlo simulations presented in this article illustrate the importance of careful error propagation during product performance modeling. Our results show that the error estimates based on Monte Carlo simulation result in smaller model coefficient standard deviations than those from regression methods. This suggests that the estimated standard deviations from regression may overestimate the uncertainties in the model coefficients. Monte Carlo simulations provide a simple software solution to understand the propagation of uncertainty in complex DOE models so that design space can be specified with statistically meaningful confidence levels. (c) 2010 Wiley-Liss, Inc. and the American Pharmacists Association
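
    A minimal sketch of the Monte Carlo error-propagation idea: both the design settings and the responses of a linear DOE model are repeatedly perturbed and the model is refit, and the spread of the refit coefficients estimates their uncertainty. The design, coefficients, and error levels are hypothetical.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    # hypothetical 2-factor design, replicated 3 times, with a true linear model
    x_true = np.array([[-1, -1], [-1, 1], [1, -1], [1, 1], [0, 0]] * 3, float)
    b_true = np.array([10.0, 2.0, -1.5])            # intercept and two effects
    y_true = b_true[0] + x_true @ b_true[1:]

    sd_x, sd_y = 0.05, 0.2       # assumed input and response uncertainties
    coefs = []
    for _ in range(5000):
        Xn = x_true + rng.normal(0.0, sd_x, x_true.shape)   # perturbed settings
        yn = y_true + rng.normal(0.0, sd_y, y_true.shape)   # perturbed responses
        A = np.column_stack([np.ones(len(Xn)), Xn])
        coefs.append(np.linalg.lstsq(A, yn, rcond=None)[0]) # refit the DOE model
    print("coefficient SDs:", np.std(coefs, axis=0).round(4))
    ```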

  12. Tool for obtaining projected future climate inputs for the WEPP and SWAT models

    USDA-ARS?s Scientific Manuscript database

    Climate change is an increasingly important issue affecting natural resources. Rising temperatures, reductions in snow cover, and variability in precipitation depths and intensities are altering the accepted normal approaches for predicting runoff, soil erosion, and chemical losses from upland areas...

  13. Mastoid vibration affects dynamic postural control during gait in healthy older adults

    NASA Astrophysics Data System (ADS)

    Chien, Jung Hung; Mukherjee, Mukul; Kent, Jenny; Stergiou, Nicholas

    2017-01-01

    Vestibular disorders are difficult to diagnose early due to the lack of a systematic assessment. Our previous work developed a reliable experimental design, with promising results showing that vestibular sensory input during walking can be affected through mastoid vibration (MV), with changes in the direction of motion. In the present paper, we extend this work to older adults and investigate how manipulating sensory input through mastoid vibration could affect dynamic postural control during walking. Three levels of MV (none, unilateral, and bilateral), applied via vibrating elements placed on the mastoid processes, were combined with the Locomotor Sensory Organization Test (LSOT) paradigm to challenge the visual and somatosensory systems. We hypothesized that the MV would affect sway variability during walking in older adults. Our results revealed that MV not only significantly increased the amount of sway variability but also decreased the temporal structure of sway variability, though only in the anterior-posterior direction. Importantly, the bilateral MV stimulation generally produced larger effects than the unilateral. This is an important finding that confirmed our experimental design, and the results could guide a more reliable screening of vestibular system deterioration.

  14. Linking annual N2O emission in organic soils to mineral nitrogen input as estimated by heterotrophic respiration and soil C/N ratio.

    PubMed

    Mu, Zhijian; Huang, Aiying; Ni, Jiupai; Xie, Deti

    2014-01-01

    Organic soils are an important source of N2O, but global estimates of these fluxes remain uncertain because measurements are sparse. We tested the hypothesis that N2O fluxes can be predicted from estimates of mineral nitrogen input, calculated from readily-available measurements of CO2 flux and soil C/N ratio. From studies of organic soils throughout the world, we compiled a data set of annual CO2 and N2O fluxes which were measured concurrently. The input of soil mineral nitrogen in these studies was estimated from applied fertilizer nitrogen and organic nitrogen mineralization. The latter was calculated by dividing the rate of soil heterotrophic respiration by soil C/N ratio. This index of mineral nitrogen input explained up to 69% of the overall variability of N2O fluxes, whereas CO2 flux or soil C/N ratio alone explained only 49% and 36% of the variability, respectively. Including water table level in the model, along with mineral nitrogen input, further improved the model with the explanatory proportion of variability in N2O flux increasing to 75%. Unlike grassland or cropland soils, forest soils were evidently nitrogen-limited, so water table level had no significant effect on N2O flux. Our proposed approach, which uses the product of soil-derived CO2 flux and the inverse of soil C/N ratio as a proxy for nitrogen mineralization, shows promise for estimating regional or global N2O fluxes from organic soils, although some further enhancements may be warranted.
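
    A minimal sketch of the proposed proxy and regression on synthetic data: mineral N input is estimated as heterotrophic CO2 flux divided by soil C/N ratio, and annual N2O flux is regressed on it. All numbers are invented placeholders.

    ```python
    import numpy as np
    from scipy.stats import linregress

    rng = np.random.default_rng(0)
    co2 = rng.uniform(100.0, 800.0, 40)     # heterotrophic CO2-C flux (hypothetical)
    cn = rng.uniform(12.0, 30.0, 40)        # soil C/N ratio (hypothetical)
    n_input = co2 / cn                      # proxy for mineral N input
    n2o = 0.05 * n_input * rng.lognormal(0.0, 0.4, 40)   # synthetic N2O flux
    fit = linregress(n_input, n2o)
    print(f"R^2 = {fit.rvalue**2:.2f}, slope = {fit.slope:.3f}")
    ```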

  15. A neural circuit mechanism for regulating vocal variability during song learning in zebra finches.

    PubMed

    Garst-Orozco, Jonathan; Babadi, Baktash; Ölveczky, Bence P

    2014-12-15

    Motor skill learning is characterized by improved performance and reduced motor variability. The neural mechanisms that couple skill level and variability, however, are not known. The zebra finch, a songbird, presents a unique opportunity to address this question because production of learned song and induction of vocal variability are instantiated in distinct circuits that converge on a motor cortex analogue controlling vocal output. To probe the interplay between learning and variability, we made intracellular recordings from neurons in this area, characterizing how their inputs from the functionally distinct pathways change throughout song development. We found that inputs that drive stereotyped song-patterns are strengthened and pruned, while inputs that induce variability remain unchanged. A simple network model showed that strengthening and pruning of action-specific connections reduces the sensitivity of motor control circuits to variable input and neural 'noise'. This identifies a simple and general mechanism for learning-related regulation of motor variability.

  16. Logic models to predict continuous outputs based on binary inputs with an application to personalized cancer therapy

    PubMed Central

    Knijnenburg, Theo A.; Klau, Gunnar W.; Iorio, Francesco; Garnett, Mathew J.; McDermott, Ultan; Shmulevich, Ilya; Wessels, Lodewyk F. A.

    2016-01-01

    Mining large datasets using machine learning approaches often leads to models that are hard to interpret and not amenable to the generation of hypotheses that can be experimentally tested. We present ‘Logic Optimization for Binary Input to Continuous Output’ (LOBICO), a computational approach that infers small and easily interpretable logic models of binary input features that explain a continuous output variable. Applying LOBICO to a large cancer cell line panel, we find that logic combinations of multiple mutations are more predictive of drug response than single gene predictors. Importantly, we show that the use of the continuous information leads to robust and more accurate logic models. LOBICO implements the ability to uncover logic models around predefined operating points in terms of sensitivity and specificity. As such, it represents an important step towards practical application of interpretable logic models. PMID:27876821

  17. Logic models to predict continuous outputs based on binary inputs with an application to personalized cancer therapy.

    PubMed

    Knijnenburg, Theo A; Klau, Gunnar W; Iorio, Francesco; Garnett, Mathew J; McDermott, Ultan; Shmulevich, Ilya; Wessels, Lodewyk F A

    2016-11-23

    Mining large datasets using machine learning approaches often leads to models that are hard to interpret and not amenable to the generation of hypotheses that can be experimentally tested. We present 'Logic Optimization for Binary Input to Continuous Output' (LOBICO), a computational approach that infers small and easily interpretable logic models of binary input features that explain a continuous output variable. Applying LOBICO to a large cancer cell line panel, we find that logic combinations of multiple mutations are more predictive of drug response than single gene predictors. Importantly, we show that the use of the continuous information leads to robust and more accurate logic models. LOBICO implements the ability to uncover logic models around predefined operating points in terms of sensitivity and specificity. As such, it represents an important step towards practical application of interpretable logic models.
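
    An illustrative stand-in for the LOBICO idea on synthetic data; note the actual method solves an integer program rather than the exhaustive search sketched here. Small AND/OR logic models of binary features are scored by how well a two-level constant fit explains the continuous response.

    ```python
    import numpy as np
    from itertools import combinations

    rng = np.random.default_rng(0)
    n, p = 200, 6
    X = rng.integers(0, 2, size=(n, p)).astype(bool)   # binary mutation features
    y = np.where(X[:, 0] | X[:, 2], -1.0, 1.0) + rng.normal(0.0, 0.3, n)

    best = None
    for i, j in combinations(range(p), 2):
        for name, pred in (("AND", X[:, i] & X[:, j]), ("OR", X[:, i] | X[:, j])):
            if pred.all() or not pred.any():
                continue
            # best constant fit inside and outside the logic class -> SSE
            sse = ((y[pred] - y[pred].mean())**2).sum() + \
                  ((y[~pred] - y[~pred].mean())**2).sum()
            if best is None or sse < best[0]:
                best = (sse, f"x{i} {name} x{j}")
    print("best 2-input logic model:", best[1])       # expect: x0 OR x2
    ```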

  18. Tropical Cyclone Footprint in the Ocean Mixed Layer Observed by Argo in the Northwest Pacific

    DTIC Science & Technology

    2014-10-25

    Hu, A., and G. A. Meehl (2009), Effect of the Atlantic hurricanes on the oceanic meridional overturning circulation and heat transport. ... atmospheric circulation [Hart et al., 2007]. Several studies, based on observations and modeling, suggest that TC-induced energy input and mixing may play an important role in climate variability through regulating the oceanic general circulation and its variability [e.g., Emanuel, 2001; Sriver and Huber].

  19. Bottom-up and Top-down Input Augment the Variability of Cortical Neurons

    PubMed Central

    Nassi, Jonathan J.; Kreiman, Gabriel; Born, Richard T.

    2016-01-01

    Neurons in the cerebral cortex respond inconsistently to a repeated sensory stimulus, yet they underlie our stable sensory experiences. Although the nature of this variability is unknown, its ubiquity has encouraged the general view that each cell produces random spike patterns that noisily represent its response rate. In contrast, here we show that reversibly inactivating distant sources of either bottom-up or top-down input to cortical visual areas in the alert primate reduces both the spike train irregularity and the trial-to-trial variability of single neurons. A simple model in which a fraction of the pre-synaptic input is silenced can reproduce this reduction in variability, provided that there exist temporal correlations primarily within, but not between, excitatory and inhibitory input pools. A large component of the variability of cortical neurons may therefore arise from synchronous input produced by signals arriving from multiple sources. PMID:27427459

  20. An analysis of input errors in precipitation-runoff models using regression with errors in the independent variables

    USGS Publications Warehouse

    Troutman, Brent M.

    1982-01-01

    Errors in runoff prediction caused by input data errors are analyzed by treating precipitation-runoff models as regression (conditional expectation) models. Independent variables of the regression consist of precipitation and other input measurements; the dependent variable is runoff. In models using erroneous input data, prediction errors are inflated and estimates of expected storm runoff for given observed input variables are biased. This bias in expected runoff estimation results in biased parameter estimates if these parameter estimates are obtained by a least squares fit of predicted to observed runoff values. The problems of error inflation and bias are examined in detail for a simple linear regression of runoff on rainfall and for a nonlinear U.S. Geological Survey precipitation-runoff model. Some implications for flood frequency analysis are considered. A case study using a set of data from Turtle Creek near Dallas, Texas illustrates the problems of model input errors.
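
    A minimal simulation of the bias analyzed above: random error in the rainfall input attenuates the least-squares slope by the reliability ratio λ = var(x) / (var(x) + var(error)). All distributions are hypothetical.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n, beta = 10000, 0.6                       # true runoff coefficient
    x = rng.gamma(2.0, 15.0, n)                # true storm rainfall (mm)
    runoff = beta * x + rng.normal(0.0, 3.0, n)
    x_obs = x + rng.normal(0.0, 10.0, n)       # error-corrupted rainfall input

    slope = np.polyfit(x_obs, runoff, 1)[0]    # least squares on noisy input
    lam = x.var() / (x.var() + 10.0**2)        # reliability (attenuation) ratio
    print(f"fitted slope {slope:.3f} vs true {beta} (attenuation predicts {beta*lam:.3f})")
    ```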

  1. Metamodeling and mapping of nitrate flux in the unsaturated zone and groundwater, Wisconsin, USA

    USGS Publications Warehouse

    Nolan, Bernard T.; Green, Christopher T.; Juckem, Paul F.; Liao, Lixia; Reddy, James E.

    2018-01-01

    Nitrate contamination of groundwater in agricultural areas poses a major challenge to the sustainability of water resources. Aquifer vulnerability models are useful tools that can help resource managers identify areas of concern, but quantifying nitrogen (N) inputs in such models is challenging, especially at large spatial scales. We sought to improve regional nitrate (NO3−) input functions by characterizing unsaturated zone NO3− transport to groundwater through use of surrogate, machine-learning metamodels of a process-based N flux model. The metamodels used boosted regression trees (BRTs) to relate mappable landscape variables to parameters and outputs of a previous “vertical flux method” (VFM) applied at sampled wells in the Fox, Wolf, and Peshtigo (FWP) river basins in northeastern Wisconsin. In this context, the metamodels upscaled the VFM results throughout the region, and the VFM parameters and outputs are the metamodel response variables. The study area encompassed the domain of a detailed numerical model that provided additional predictor variables, including groundwater recharge, to the metamodels. We used a statistical learning framework to test a range of model complexities to identify suitable hyperparameters of the six BRT metamodels corresponding to each response variable of interest: NO3− source concentration factor (which determines the local NO3− input concentration); unsaturated zone travel time; NO3− concentration at the water table in 1980, 2000, and 2020 (three separate metamodels); and NO3− “extinction depth”, the eventual steady state depth of the NO3− front. The final metamodels were trained to 129 wells within the active numerical flow model area, and considered 58 mappable predictor variables compiled in a geographic information system (GIS). These metamodels had training and cross-validation testing R2 values of 0.52 – 0.86 and 0.22 – 0.38, respectively, and predictions were compiled as maps of the above response variables. Testing performance was reasonable, considering that we limited the metamodel predictor variables to mappable factors as opposed to using all available VFM input variables. Relationships between metamodel predictor variables and mapped outputs were generally consistent with expectations, e.g. with greater source concentrations and NO3− at the groundwater table in areas of intensive crop use and well drained soils. Shorter unsaturated zone travel times in poorly drained areas likely indicated preferential flow through clay soils, and a tendency for fine grained deposits to collocate with areas of shallower water table. Numerical estimates of groundwater recharge were important in the metamodels and may have been a proxy for N input and redox conditions in the northern FWP, which had shallow predicted NO3− extinction depth. The metamodel results provide proof-of-concept for regional characterization of unsaturated zone NO3− transport processes in a statistical framework based on readily mappable GIS input variables.

  2. Development of a neural-based forecasting tool to classify recreational water quality using fecal indicator organisms.

    PubMed

    Motamarri, Srinivas; Boccelli, Dominic L

    2012-09-15

    Users of recreational waters may be exposed to elevated pathogen levels through various point/non-point sources. Typical daily notifications rely on microbial analysis of indicator organisms (e.g., Escherichia coli) that require 18 or more hours to provide an adequate response. Modeling approaches, such as multivariate linear regression (MLR) and artificial neural networks (ANN), have been utilized to provide quick predictions of microbial concentrations for classification purposes, but generally suffer from high false negative rates. This study introduces the use of learning vector quantization (LVQ), a direct classification approach, for comparison with MLR and ANN approaches, and integrates input selection for model development with respect to primary and secondary water quality standards within the Charles River Basin (Massachusetts, USA) using meteorologic, hydrologic, and microbial explanatory variables. Integrating input selection into model development showed that discharge variables were the most important explanatory variables, while antecedent rainfall and time since previous events were also important. With respect to classification, all three models adequately represented the non-violated samples (>90%). The MLR approach had the highest false negative rates when classifying violated samples (41-62%, vs 13-43% for ANN and <16% for LVQ) when using five or more explanatory variables. The ANN performance was more similar to LVQ when a larger number of explanatory variables were utilized, but degraded toward MLR performance as explanatory variables were removed. Overall, the use of LVQ as a direct classifier provided the best overall classification ability with respect to violated/non-violated samples for both standards. Copyright © 2012 Elsevier Ltd. All rights reserved.
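
    A minimal LVQ1 sketch (not necessarily the exact variant used in the study): prototypes are nudged toward same-class samples and away from other-class samples, and classification is by nearest prototype. The two-predictor toy data are invented.

    ```python
    import numpy as np

    def train_lvq1(X, y, n_proto=2, lr=0.05, epochs=30, seed=0):
        """Minimal LVQ1: move the nearest prototype toward same-class samples
        and away from other-class samples."""
        rng = np.random.default_rng(seed)
        protos, labels = [], []
        for c in np.unique(y):                   # init prototypes per class
            idx = rng.choice(np.where(y == c)[0], n_proto, replace=False)
            protos.append(X[idx])
            labels += [c] * n_proto
        P, L = np.vstack(protos), np.array(labels)
        for _ in range(epochs):
            for i in rng.permutation(len(X)):
                k = np.argmin(((P - X[i])**2).sum(axis=1))   # nearest prototype
                step = lr * (X[i] - P[k])
                P[k] += step if L[k] == y[i] else -step
        return P, L

    def predict_lvq(P, L, X):
        return L[((P[None] - X[:, None])**2).sum(-1).argmin(1)]

    # toy exceed / not-exceed classification with two predictors
    rng = np.random.default_rng(1)
    X = np.vstack([rng.normal(0.0, 1.0, (100, 2)), rng.normal(2.5, 1.0, (100, 2))])
    y = np.array([0] * 100 + [1] * 100)
    P, L = train_lvq1(X, y)
    print("training accuracy:", (predict_lvq(P, L, X) == y).mean())
    ```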

  3. Space sickness predictors suggest fluid shift involvement and possible countermeasures

    NASA Technical Reports Server (NTRS)

    Simanonok, K. E.; Moseley, E. C.; Charles, J. B.

    1992-01-01

    Preflight data from 64 first-time Shuttle crew members were examined retrospectively to predict space sickness severity (NONE, MILD, MODERATE, or SEVERE) by discriminant analysis. From 9 input variables relating to fluid, electrolyte, and cardiovascular status, 8 variables were chosen by discriminant analysis that correctly predicted space sickness severity with 59 pct. success by one method of cross validation on the original sample and 67 pct. by another method. The 8 variables, in order of their importance for predicting space sickness severity, are sitting systolic blood pressure, serum uric acid, calculated blood volume, serum phosphate, urine osmolality, environmental temperature at the launch site, red cell count, and serum chloride. These results suggest the presence of predisposing physiologic factors to space sickness that implicate a fluid shift etiology. Addition of a 10th input variable, hours spent in the Weightless Environment Training Facility (WETF), improved the prediction of space sickness severity to 66 pct. success by the first method of cross validation on the original sample and to 71 pct. by the second method. The data suggest that WETF training may reduce space sickness severity.

  4. Value of Construction Company and its Dependence on Significant Variables

    NASA Astrophysics Data System (ADS)

    Vítková, E.; Hromádka, V.; Ondrušková, E.

    2017-10-01

    The paper deals with the assessment of the value of a construction company, with respect to usable approaches and determinable variables. The reasons for assessing the value of a construction company differ; the most important are the sale or purchase of the company, its liquidation, or its merger with another entity, among others. Depending on the reason for the value assessment, different theoretical approaches to valuation can be applied; the main ones are the yield (income-based) method and the proprietary (asset-based) method. Both approaches depend on detailed input variables, whose quality will influence the final assessment of the company's value. The main objective of the paper is to suggest, on the basis of the analysis, possible ways of determining the input variables, mainly in the form of expected cash flows or profit. The paper focuses mainly on the use of time series analysis, regression analysis, and mathematical simulation. As the output, the results of the analysis are demonstrated on a case study.
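
    A minimal sketch of the yield (income) approach mentioned above: company value as the present value of forecast cash flows plus a Gordon-growth terminal value. All figures and rates are hypothetical placeholders.

    ```python
    # discounted cash flow valuation sketch (all inputs hypothetical)
    cash_flows = [1.2e6, 1.4e6, 1.5e6, 1.6e6]   # forecast annual cash flows
    r, g = 0.10, 0.02                           # discount and long-term growth rates

    pv = sum(cf / (1 + r) ** t for t, cf in enumerate(cash_flows, start=1))
    terminal = cash_flows[-1] * (1 + g) / (r - g) / (1 + r) ** len(cash_flows)
    print(f"indicative company value: {pv + terminal:,.0f}")
    ```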

  5. Electrical resistivity tomography to delineate greenhouse soil variability

    NASA Astrophysics Data System (ADS)

    Rossi, R.; Amato, M.; Bitella, G.; Bochicchio, R.

    2013-03-01

    Appropriate management of soil spatial variability is an important tool for optimizing farming inputs, resulting in increased yield and reduced environmental impact in field crops. In greenhouses, several factors such as non-uniform irrigation and localized soil compaction can severely affect yield and quality. Additionally, if soil spatial variability is not taken into account, yield deficiencies are often compensated by extra volumes of crop inputs; as a result, over-irrigation and over-fertilization may occur in some parts of the field. Technology for spatially sound management of greenhouse crops is therefore needed to increase yield and quality and to address sustainability. In this experiment, 2D electrical resistivity tomography was used as an exploratory tool to characterize greenhouse soil variability and its relation to wild rocket yield. Soil resistivity matched biomass variation well (R2 = 0.70), and was linked to differences in soil bulk density (R2 = 0.90) and clay content (R2 = 0.77). Electrical resistivity tomography shows great potential in horticulture, where there is a growing demand for sustainability coupled with the necessity of stabilizing yield and product quality.

  6. A Skylab program for the International Hydrological Decade (IHD). [Lake Ontario Basin

    NASA Technical Reports Server (NTRS)

    Polcyn, F. C. (Principal Investigator); Rebel, D. L.

    1974-01-01

    The author has identified the following significant results. The development of the algorithm (using real data) relating red and IR reflectance to surface soil moisture over regions of variable vegetation cover will enable remote sensing to provide direct inputs for determining this important hydrologic parameter.

  7. EVALUATING THE SENSITIVITY OF A SUBSURFACE MULTICOMPONENT REACTIVE TRANSPORT MODEL WITH RESPECT TO TRANSPORT AND REACTION PARAMETERS

    EPA Science Inventory

    The input variables for a numerical model of reactive solute transport in groundwater include both transport parameters, such as hydraulic conductivity and infiltration, and reaction parameters that describe the important chemical and biological processes in the system. These pa...

  8. Statistical evaluation of control inputs and eye movements in the use of instruments clusters during aircraft landing

    NASA Technical Reports Server (NTRS)

    Dick, A. O.; Brown, J. L.; Bailey, G.

    1977-01-01

    Two different types of analyses were done on data from a study in which eye movements and other variables were recorded while four pilots executed landing sequences in a Boeing 737 simulation. Various conditions were manipulated, including changes in turbulence, starting position, and instrumentation. Control inputs were analyzed in the context of the various conditions and compared against ratings of workload obtained using the Cooper-Harper scale. A number of eye-scanning measures, including mean dwell time and transitions from one instrument to another, were entered into a principal components factor analysis. The results show a differentiation between control inputs and eye-scanning behavior. This shows the need for improved definition of workload and for experiments to uncover the important differences among the control inputs, eye scanning, and cognitive processes of the pilot.

  9. Influence of estuarine processes on spatiotemporal variation in bioavailable selenium

    USGS Publications Warehouse

    Stewart, Robin; Luoma, Samuel N.; Elrick, Kent A.; Carter, James L.; van der Wegen, Mick

    2013-01-01

    Dynamic processes (physical, chemical and biological) challenge our ability to quantify and manage the ecological risk of chemical contaminants in estuarine environments. Selenium (Se) bioavailability (defined by bioaccumulation), stable isotopes and molar carbon-to-nitrogen ratios in the benthic clam Potamocorbula amurensis, an important food source for predators, were determined monthly for 17 yr in northern San Francisco Bay. Se concentrations in the clams ranged from a low of 2 to a high of 22 μg g-1 over space and time. Little of that variability was stochastic, however. Statistical analyses and preliminary hydrodynamic modeling showed that a constant mid-estuarine input of Se, which was dispersed up- and down-estuary by tidal currents, explained the general spatial patterns in accumulated Se among stations. Regression of Se bioavailability against river inflows suggested that processes driven by inflows were the primary driver of seasonal variability. River inflow also appeared to explain interannual variability, but within the range of Se enrichment established at each station by source inputs. Evaluation of risks from Se contamination in estuaries requires the consideration of spatial and temporal variability on multiple scales and of the processes that drive that variability.

  10. Biological control of appetite: A daunting complexity.

    PubMed

    MacLean, Paul S; Blundell, John E; Mennella, Julie A; Batterham, Rachel L

    2017-03-01

    This review summarizes a portion of the discussions of an NIH Workshop (Bethesda, MD, 2015) titled "Self-Regulation of Appetite-It's Complicated," which focused on the biological aspects of appetite regulation. This review summarizes the key biological inputs of appetite regulation and their implications for body weight regulation. These discussions offer an update of the long-held, rigid perspective of an "adipocentric" biological control, taking a broader view that also includes important inputs from the digestive tract, from lean mass, and from the chemical sensory systems underlying taste and smell. It is only beginning to be understood how these biological systems are integrated and how this integrated input influences appetite and food eating behaviors. The relevance of these biological inputs was discussed primarily in the context of obesity and the problem of weight regain, touching on topics related to the biological predisposition for obesity and the impact that obesity treatments (dieting, exercise, bariatric surgery, etc.) might have on appetite and weight loss maintenance. Finally considered is a common theme that pervaded the workshop discussions, which was individual variability. It is this individual variability in the predisposition for obesity and in the biological response to weight loss that makes the biological component of appetite regulation so complicated. When this individual biological variability is placed in the context of the diverse environmental and behavioral pressures that also influence food eating behaviors, it is easy to appreciate the daunting complexities that arise with the self-regulation of appetite. © 2017 The Obesity Society.

  11. Biological Control of Appetite: A Daunting Complexity

    PubMed Central

    MacLean, Paul S.; Blundell, John E.; Mennella, Julie A.; Batterham, Rachel L.

    2017-01-01

    Objective This review summarizes a portion of the discussions of an NIH Workshop (Bethesda, MD, 2015) entitled, “Self-Regulation of Appetite, It's Complicated,” which focused on the biological aspects of appetite regulation. Methods Here we summarize the key biological inputs of appetite regulation and their implications for body weight regulation. Results These discussions offer an update of the long-held, rigid perspective of an “adipocentric” biological control, taking a broader view that also includes important inputs from the digestive tract, from lean mass, and from the chemical sensory systems underlying taste and smell. We are only beginning to understand how these biological systems are integrated and how this integrated input influences appetite and food eating behaviors. The relevance of these biological inputs was discussed primarily in the context of obesity and the problem of weight regain, touching on topics related to the biological predisposition for obesity and the impact that obesity treatments (dieting, exercise, bariatric surgery, etc.) might have on appetite and weight loss maintenance. Finally, we consider a common theme that pervaded the workshop discussions, which was individual variability. Conclusions It is this individual variability in the predisposition for obesity and in the biological response to weight loss that makes the biological component of appetite regulation so complicated. When this individual biological variability is placed in the context of the diverse environmental and behavioral pressures that also influence food eating behaviors, it is easy to appreciate the daunting complexities that arise with the self-regulation of appetite. PMID:28229538

  12. Sensitivity and uncertainty in crop water footprint accounting: a case study for the Yellow River basin

    NASA Astrophysics Data System (ADS)

    Zhuo, L.; Mekonnen, M. M.; Hoekstra, A. Y.

    2014-06-01

    Water Footprint Assessment is a fast-growing field of research, but as yet little attention has been paid to the uncertainties involved. This study investigates the sensitivity of and uncertainty in crop water footprint (in m3 t-1) estimates related to uncertainties in important input variables. The study focuses on the green (from rainfall) and blue (from irrigation) water footprint of producing maize, soybean, rice, and wheat at the scale of the Yellow River basin in the period 1996-2005. A grid-based daily water balance model at a 5 by 5 arcmin resolution was applied to compute green and blue water footprints of the four crops in the Yellow River basin in the period considered. The one-at-a-time method was carried out to analyse the sensitivity of the crop water footprint to fractional changes of seven individual input variables and parameters: precipitation (PR), reference evapotranspiration (ET0), crop coefficient (Kc), crop calendar (planting date with constant growing degree days), soil water content at field capacity (Smax), yield response factor (Ky) and maximum yield (Ym). Uncertainties in crop water footprint estimates related to uncertainties in four key input variables: PR, ET0, Kc, and crop calendar were quantified through Monte Carlo simulations. The results show that the sensitivities and uncertainties differ across crop types. In general, the water footprint of crops is most sensitive to ET0 and Kc, followed by the crop calendar. Blue water footprints were more sensitive to input variability than green water footprints. The smaller the annual blue water footprint is, the higher its sensitivity to changes in PR, ET0, and Kc. The uncertainties in the total water footprint of a crop due to combined uncertainties in climatic inputs (PR and ET0) were about ±20% (at 95% confidence interval). The effect of uncertainties in ET0 was dominant compared to that of PR. The uncertainties in the total water footprint of a crop as a result of combined key input uncertainties were on average ±30% (at 95% confidence level).
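
    A minimal sketch of the one-at-a-time procedure described: each input is perturbed by a fixed fraction around its base value while the others are held fixed, and the relative change in the output is recorded. The toy water-footprint function and base values are invented, not the study's model.

    ```python
    import numpy as np

    def oat_sensitivity(model, base, frac=0.1):
        """One-at-a-time sensitivity: relative output change for a +/- frac
        perturbation of each input, holding the others at their base values."""
        y0 = model(base)
        out = {}
        for name, val in base.items():
            ys = []
            for s in (1 - frac, 1 + frac):
                p = dict(base)
                p[name] = val * s
                ys.append((model(p) - y0) / y0)
            out[name] = ys
        return out

    # toy crop water footprint (m3/t): an ET proxy divided by a yield proxy
    def wf(p):
        et = p["kc"] * p["et0"] - 0.5 * p["pr"]      # placeholder crop water use
        yield_t = p["ym"] * (1 - 0.2 * p["ky"])      # placeholder crop yield
        return max(et, 0.0) / yield_t

    base = {"pr": 400.0, "et0": 900.0, "kc": 1.0, "ky": 1.0, "ym": 6.0}
    for name, v in oat_sensitivity(wf, base).items():
        print(name, [f"{x:+.1%}" for x in v])
    ```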

  13. Class identity assignment for amphetamines using neural networks and GC-FTIR data

    NASA Astrophysics Data System (ADS)

    Gosav, S.; Praisler, M.; Van Bocxlaer, J.; De Leenheer, A. P.; Massart, D. L.

    2006-08-01

    An exploratory analysis was performed in order to evaluate the feasibility of building neural network (NN) systems to automate the identification of amphetamines, which is necessary in the investigation of drugs of abuse for epidemiological, clinical, and forensic purposes. A first neural network system was built to distinguish between amphetamines and nonamphetamines. A second, more refined system aimed at the recognition of amphetamines according to their toxicological activity (stimulant amphetamines, hallucinogenic amphetamines, nonamphetamines). Both systems proved that discrimination between amphetamines and nonamphetamines, as well as between stimulants, hallucinogens and nonamphetamines, is possible (83.44% and 85.71% correct classification rates, respectively). The spectroscopic interpretation of the 40 most important input variables (GC-FTIR absorption intensities) shows that the modeling power of an input variable seems to be correlated with the stability, not the intensity, of the spectral interaction. Thus, discarding variables only because they correspond to spectral windows with weak absorptions does not seem advisable.

  14. A dual-input nonlinear system analysis of autonomic modulation of heart rate

    NASA Technical Reports Server (NTRS)

    Chon, K. H.; Mullen, T. J.; Cohen, R. J.

    1996-01-01

    Linear analyses of fluctuations in heart rate and other hemodynamic variables have been used to elucidate cardiovascular regulatory mechanisms. The role of nonlinear contributions to fluctuations in hemodynamic variables has not been fully explored. This paper presents a nonlinear system analysis of the effect of fluctuations in instantaneous lung volume (ILV) and arterial blood pressure (ABP) on heart rate (HR) fluctuations. To successfully employ a nonlinear analysis based on the Laguerre expansion technique (LET), we introduce an efficient procedure for broadening the spectral content of the ILV and ABP inputs to the model by adding white noise. Results from computer simulations demonstrate the effectiveness of broadening the spectral band of input signals to obtain consistent and stable kernel estimates with the use of the LET. Without broadening the band of the ILV and ABP inputs, the LET did not provide stable kernel estimates. Moreover, we extend the LET to the case of multiple inputs in order to accommodate the analysis of the combined effect of ILV and ABP on heart rate. Analyses of data based on the second-order Volterra-Wiener model reveal an important contribution of the second-order kernels to the description of the effect of lung volume and arterial blood pressure on heart rate. Furthermore, physiological effects of the autonomic blocking agents propranolol and atropine on changes in the first- and second-order kernels are also discussed.
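
    The band-broadening step lends itself to a short sketch: add low-level white noise to a narrow-band input so that it excites the system across a wide frequency band before kernel estimation. The signal below is synthetic; the noise level is an assumption of this example, not the authors' value.

        import numpy as np

        rng = np.random.default_rng(0)
        fs = 4.0                                   # Hz, resampled beat-to-beat rate
        t = np.arange(0.0, 256.0, 1.0 / fs)
        ilv = np.sin(2 * np.pi * 0.25 * t)         # toy narrow-band ILV input

        # Superimpose low-level white noise; the broadened input now carries
        # power at all frequencies up to fs/2 instead of a single spectral line.
        noise_level = 0.1 * np.std(ilv)
        ilv_broadened = ilv + noise_level * rng.standard_normal(ilv.size)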

  15. Stability, Consistency and Performance of Distribution Entropy in Analysing Short Length Heart Rate Variability (HRV) Signal.

    PubMed

    Karmakar, Chandan; Udhayakumar, Radhagayathri K; Li, Peng; Venkatesh, Svetha; Palaniswami, Marimuthu

    2017-01-01

    Distribution entropy (DistEn) is a recently developed measure of complexity that is used to analyse heart rate variability (HRV) data. Its calculation requires two input parameters: the embedding dimension m, and the number of bins M, which replaces the tolerance parameter r that is used by the existing approximation entropy (ApEn) and sample entropy (SampEn) measures. The performance of DistEn can also be affected by the data length N. In our previous studies, we have analyzed stability and performance of DistEn with respect to one parameter (m or M) or a combination of two parameters (N and M). However, the impact of varying all three input parameters on DistEn has not yet been studied. Since DistEn is predominantly aimed at analysing short length heart rate variability (HRV) signals, it is important to comprehensively study the stability, consistency and performance of the measure using multiple case studies. In this study, we examined the impact of changing input parameters on DistEn for synthetic and physiological signals. We also compared the variations of DistEn and performance in distinguishing physiological (Elderly from Young) and pathological (Healthy from Arrhythmia) conditions with ApEn and SampEn. The results showed that DistEn values are minimally affected by the variations of input parameters compared to ApEn and SampEn. DistEn also showed the most consistent and the best performance in differentiating physiological and pathological conditions under various input parameters among reported complexity measures. In conclusion, DistEn is found to be the best measure for analysing short length HRV time series.
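
    For reference, the following sketch computes DistEn along the lines of its published definition: embed the series in m dimensions, collect all pairwise Chebyshev distances, histogram them into M bins, and normalize the Shannon entropy by log2(M). The test series and parameter values are arbitrary.

        import numpy as np

        def dist_en(x, m=2, M=512):
            x = np.asarray(x, dtype=float)
            n = x.size - m + 1
            # Embed into overlapping m-dimensional vectors.
            emb = np.lib.stride_tricks.sliding_window_view(x, m)
            # All pairwise Chebyshev distances between distinct vectors.
            d = np.max(np.abs(emb[:, None, :] - emb[None, :, :]), axis=-1)
            d = d[np.triu_indices(n, k=1)]
            # Empirical distribution over M bins, then normalized entropy.
            p, _ = np.histogram(d, bins=M)
            p = p / p.sum()
            p = p[p > 0]
            return -np.sum(p * np.log2(p)) / np.log2(M)

        rng = np.random.default_rng(1)
        rr = 0.8 + 0.05 * rng.standard_normal(500)   # toy short RR series (s)
        print(f"DistEn = {dist_en(rr):.3f}")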

  16. Linking Annual N2O Emission in Organic Soils to Mineral Nitrogen Input as Estimated by Heterotrophic Respiration and Soil C/N Ratio

    PubMed Central

    Mu, Zhijian; Huang, Aiying; Ni, Jiupai; Xie, Deti

    2014-01-01

    Organic soils are an important source of N2O, but global estimates of these fluxes remain uncertain because measurements are sparse. We tested the hypothesis that N2O fluxes can be predicted from estimates of mineral nitrogen input, calculated from readily-available measurements of CO2 flux and soil C/N ratio. From studies of organic soils throughout the world, we compiled a data set of annual CO2 and N2O fluxes which were measured concurrently. The input of soil mineral nitrogen in these studies was estimated from applied fertilizer nitrogen and organic nitrogen mineralization. The latter was calculated by dividing the rate of soil heterotrophic respiration by soil C/N ratio. This index of mineral nitrogen input explained up to 69% of the overall variability of N2O fluxes, whereas CO2 flux or soil C/N ratio alone explained only 49% and 36% of the variability, respectively. Including water table level in the model, along with mineral nitrogen input, further improved the model with the explanatory proportion of variability in N2O flux increasing to 75%. Unlike grassland or cropland soils, forest soils were evidently nitrogen-limited, so water table level had no significant effect on N2O flux. Our proposed approach, which uses the product of soil-derived CO2 flux and the inverse of soil C/N ratio as a proxy for nitrogen mineralization, shows promise for estimating regional or global N2O fluxes from organic soils, although some further enhancements may be warranted. PMID:24798347
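
    The proxy itself is a one-line calculation. A sketch with made-up numbers (the division of heterotrophic respiration by soil C/N follows the approach described above; the fertilizer term stands in for the applied-N component of mineral nitrogen input):

        co2_c_flux = 420.0     # g C m-2 yr-1, soil heterotrophic respiration
        soil_cn = 21.0         # soil C/N ratio (dimensionless)
        fertilizer_n = 5.0     # g N m-2 yr-1, applied mineral N

        # Organic N mineralization estimated as respiration / (C/N).
        n_mineralized = co2_c_flux / soil_cn
        mineral_n_input = n_mineralized + fertilizer_n
        print(f"Estimated mineral N input: {mineral_n_input:.1f} g N m-2 yr-1")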

  17. Stability, Consistency and Performance of Distribution Entropy in Analysing Short Length Heart Rate Variability (HRV) Signal

    PubMed Central

    Karmakar, Chandan; Udhayakumar, Radhagayathri K.; Li, Peng; Venkatesh, Svetha; Palaniswami, Marimuthu

    2017-01-01

    Distribution entropy (DistEn) is a recently developed measure of complexity that is used to analyse heart rate variability (HRV) data. Its calculation requires two input parameters—the embedding dimension m, and the number of bins M, which replaces the tolerance parameter r that is used by the existing approximation entropy (ApEn) and sample entropy (SampEn) measures. The performance of DistEn can also be affected by the data length N. In our previous studies, we have analyzed stability and performance of DistEn with respect to one parameter (m or M) or a combination of two parameters (N and M). However, the impact of varying all three input parameters on DistEn has not yet been studied. Since DistEn is predominantly aimed at analysing short length heart rate variability (HRV) signals, it is important to comprehensively study the stability, consistency and performance of the measure using multiple case studies. In this study, we examined the impact of changing input parameters on DistEn for synthetic and physiological signals. We also compared the variations of DistEn and performance in distinguishing physiological (Elderly from Young) and pathological (Healthy from Arrhythmia) conditions with ApEn and SampEn. The results showed that DistEn values are minimally affected by the variations of input parameters compared to ApEn and SampEn. DistEn also showed the most consistent and the best performance in differentiating physiological and pathological conditions under various input parameters among reported complexity measures. In conclusion, DistEn is found to be the best measure for analysing short length HRV time series. PMID:28979215

  18. Input-variable sensitivity assessment for sediment transport relations

    NASA Astrophysics Data System (ADS)

    Fernández, Roberto; Garcia, Marcelo H.

    2017-09-01

    A methodology to assess input-variable sensitivity for sediment transport relations is presented. The Mean Value First Order Second Moment Method (MVFOSM) is applied to two bed load transport equations showing that it may be used to rank all input variables in terms of how their specific variance affects the overall variance of the sediment transport estimation. In sites where data are scarce or nonexistent, the results obtained may be used to (i) determine what variables would have the largest impact when estimating sediment loads in the absence of field observations and (ii) design field campaigns to specifically measure those variables for which a given transport equation is most sensitive; in sites where data are readily available, the results would allow quantifying the effect that the variance associated with each input variable has on the variance of the sediment transport estimates. An application of the method to two transport relations using data from a tropical mountain river in Costa Rica is implemented to exemplify the potential of the method in places where input data are limited. Results are compared against Monte Carlo simulations to assess the reliability of the method and validate its results. For both of the sediment transport relations used in the sensitivity analysis, accurate knowledge of sediment size was found to have more impact on sediment transport predictions than precise knowledge of other input variables such as channel slope and flow discharge.
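
    A minimal sketch of the MVFOSM ranking: approximate the output variance as Var[f] ~ sum_i (df/dx_i)^2 * s_i^2 using numerical gradients at the mean point, and rank inputs by their share of that variance. The power-law transport relation and all numbers below are illustrative assumptions, not the equations used in the paper.

        import numpy as np

        def mvfosm(f, mean, std, eps=1e-6):
            mean = np.asarray(mean, dtype=float)
            grads = np.empty_like(mean)
            for i in range(mean.size):
                h = eps * max(abs(mean[i]), 1.0)
                xp, xm = mean.copy(), mean.copy()
                xp[i] += h
                xm[i] -= h
                grads[i] = (f(xp) - f(xm)) / (2.0 * h)   # central difference
            contrib = (grads * np.asarray(std)) ** 2     # per-input variance
            return contrib / contrib.sum()               # variance shares

        # Toy bed-load relation: q ~ a * slope^1.5 * d50^-0.5 * discharge.
        f = lambda x: 0.05 * x[0] ** 1.5 * x[1] ** -0.5 * x[2]
        shares = mvfosm(f, mean=[0.02, 0.01, 15.0], std=[0.004, 0.003, 2.0])
        print("variance share (slope, d50, discharge):", np.round(shares, 2))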

  19. A Multifactor Approach to Research in Instructional Technology.

    ERIC Educational Resources Information Center

    Ragan, Tillman J.

    In a field such as instructional design, explanations of educational outcomes must necessarily consider multiple input variables. To adequately understand the contribution made by the independent variables, it is helpful to have a visual conception of how the input variables interrelate. Two-variable models are adequately represented by a two…

  20. Mathematical modelling of methanogenic reactor start-up: Importance of volatile fatty acids degrading population.

    PubMed

    Jabłoński, Sławomir J; Łukaszewicz, Marcin

    2014-12-01

    Development of a balanced community of microorganisms is one of the prerequisites for stable anaerobic digestion. Application of mathematical models might be helpful in developing reliable procedures during the process start-up period. Yet the accuracy of the forecast depends on the quality of the inputs and parameters. In this study, specific anaerobic activity (SAA) tests were applied in order to estimate the microbial community structure. The obtained data were applied as input conditions for a mathematical model of anaerobic digestion. The initial values of variables describing the amount of acetate- and propionate-utilizing microorganisms could be calculated on the basis of SAA results. Modelling based on these optimized variables could successfully reproduce the behavior of a real system during continuous fermentation. Copyright © 2014 The Authors. Published by Elsevier Ltd. All rights reserved.

  1. Adaptive control of a jet turboshaft engine driving a variable pitch propeller using multiple models

    NASA Astrophysics Data System (ADS)

    Ahmadian, Narjes; Khosravi, Alireza; Sarhadi, Pouria

    2017-08-01

    In this paper, a multiple model adaptive control (MMAC) method is proposed for a gas turbine engine. The model of a twin-spool turbo-shaft engine driving a variable pitch propeller includes various operating points. Variations in fuel flow and propeller pitch inputs produce different operating conditions, which force the controller to adapt rapidly. The important operating points are the idle, cruise and full-thrust cases covering the entire flight envelope. A multi-input multi-output (MIMO) version of second-level adaptation using multiple models is developed, and a stability analysis using the Lyapunov method is presented. The proposed method is compared with two conventional techniques: first-level adaptation and model reference adaptive control. Simulation results for the JetCat SPT5 turbo-shaft engine demonstrate the performance and fidelity of the proposed method.

  2. Quantity and Quality of Caregivers' Linguistic Input to 18-Month and 3-Year-Old Children Who Are Hard of Hearing.

    PubMed

    Ambrose, Sophie E; Walker, Elizabeth A; Unflat-Berry, Lauren M; Oleson, Jacob J; Moeller, Mary Pat

    2015-01-01

    The primary objective of this study was to examine the quantity and quality of caregiver talk directed to children who are hard of hearing (CHH) compared with children with normal hearing (CNH). For the CHH only, the study explored how caregiver input changed as a function of child age (18 months versus 3 years), which child and family factors contributed to variance in caregiver linguistic input at 18 months and 3 years, and how caregiver talk at 18 months related to child language outcomes at 3 years. Participants were 59 CNH and 156 children with bilateral, mild-to-severe hearing loss. When children were approximately 18 months and/or 3 years of age, caregivers and children participated in a 5-min semistructured, conversational interaction. Interactions were transcribed and coded for two features of caregiver input representing quantity (number of total utterances and number of total words) and four features representing quality (number of different words, mean length of utterance in morphemes, proportion of utterances that were high level, and proportion of utterances that were directing). In addition, at the 18-month visit, parents completed a standardized questionnaire regarding their child's communication development. At the 3-year visit, a clinician administered a standardized language measure. At the 18-month visit, the CHH were exposed to a greater proportion of directing utterances than the CNH. At the 3-year visit, there were significant differences between the CNH and CHH for number of total words and all four of the quality variables, with the CHH being exposed to fewer words and lower quality input. Caregivers generally provided higher quality input to CHH at the 3-year visit compared with the 18-month visit. At the 18-month visit, quantity variables, but not quality variables, were related to several child and family factors. At the 3-year visit, the variable most strongly related to caregiver input was child language. Longitudinal analyses indicated that quality, but not quantity, of caregiver linguistic input at 18 months was related to child language abilities at 3 years, with directing utterances accounting for significant unique variance in child language outcomes. Although caregivers of CHH increased their use of quality features of linguistic input over time, the differences when compared with CNH suggest that some caregivers may need additional support to provide their children with optimal language learning environments. This is particularly important given the relationships that were identified between quality features of caregivers' linguistic input and children's language abilities. Family supports should include a focus on developing a style that is conversational-eliciting as opposed to directive.

  3. Bullet trajectory predicts the need for damage control: an artificial neural network model.

    PubMed

    Hirshberg, Asher; Wall, Matthew J; Mattox, Kenneth L

    2002-05-01

    Effective use of damage control in trauma hinges on an early decision to use it. Bullet trajectory has never been studied as a marker for damage control. We hypothesize that this decision can be predicted by an artificial neural network (ANN) model based on the bullet trajectory and the patient's blood pressure. A multilayer perceptron ANN predictive model was developed from a data set of 312 patients with single abdominal gunshot injuries. Input variables were the bullet path, trajectory patterns, and admission systolic pressure. The output variable was either a damage control laparotomy or intraoperative death. The best performing ANN was implemented on prospectively collected data from 34 patients. The model achieved a correct classification rate of 0.96 and area under the receiver operating characteristic curve of 0.94. External validation showed the model to have a sensitivity of 88% and specificity of 96%. Model implementation on the prospectively collected data had a correct classification rate of 0.91. Sensitivity analysis showed that systolic pressure, bullet path across the midline, and trajectory involving the right upper quadrant were the three most important input variables. Bullet trajectory is an important, hitherto unrecognized, factor that should be incorporated into the decision to use damage control.
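
    As a hedged illustration of this kind of model (not the authors' network or data), the sketch below trains a small multilayer perceptron on synthetic records with three inputs of the kind described: admission systolic pressure plus two binary trajectory flags. The feature encoding and label rule are invented for the example.

        import numpy as np
        from sklearn.neural_network import MLPClassifier
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler

        rng = np.random.default_rng(0)
        X = np.column_stack([
            rng.normal(110.0, 25.0, 400),    # admission systolic pressure (mmHg)
            rng.integers(0, 2, 400),         # bullet path crosses the midline
            rng.integers(0, 2, 400),         # trajectory involves the RUQ
        ])
        # Synthetic label loosely tying damage control to hypotension
        # combined with a high-risk trajectory.
        y = ((X[:, 0] < 90.0) & (X[:, 1] + X[:, 2] > 0)).astype(int)

        clf = make_pipeline(
            StandardScaler(),
            MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0),
        ).fit(X, y)
        print("training accuracy:", clf.score(X, y))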

  4. Variance-Based Sensitivity Analysis to Support Simulation-Based Design Under Uncertainty

    DOE PAGES

    Opgenoord, Max M. J.; Allaire, Douglas L.; Willcox, Karen E.

    2016-09-12

    Sensitivity analysis plays a critical role in quantifying uncertainty in the design of engineering systems. A variance-based global sensitivity analysis is often used to rank the importance of input factors, based on their contribution to the variance of the output quantity of interest. However, this analysis assumes that all input variability can be reduced to zero, which is typically not the case in a design setting. Distributional sensitivity analysis (DSA) instead treats the uncertainty reduction in the inputs as a random variable, and defines a variance-based sensitivity index function that characterizes the relative contribution to the output variance as a function of the amount of uncertainty reduction. This paper develops a computationally efficient implementation for the DSA formulation and extends it to include distributions commonly used in engineering design under uncertainty. Application of the DSA method to the conceptual design of a commercial jetliner demonstrates how the sensitivity analysis provides valuable information to designers and decision-makers on where and how to target uncertainty reduction efforts.
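
    For context, the classical variance-based first-order indices that DSA generalizes can be estimated by Monte Carlo with a pick-freeze scheme. A minimal sketch using Saltelli-style estimators on a toy function (the function, input distributions and sample size are arbitrary):

        import numpy as np

        def sobol_first_order(f, d, n=100_000, seed=0):
            rng = np.random.default_rng(seed)
            A = rng.standard_normal((n, d))          # two independent sample sets
            B = rng.standard_normal((n, d))
            fA, fB = f(A), f(B)
            var = np.var(np.concatenate([fA, fB]))
            S = np.empty(d)
            for i in range(d):
                ABi = A.copy()
                ABi[:, i] = B[:, i]                  # "pick-freeze" column swap
                S[i] = np.mean(fB * (f(ABi) - fA)) / var
            return S

        # Toy output: the first input dominates the output variance.
        f = lambda X: 3.0 * X[:, 0] + 1.0 * X[:, 1] + 0.3 * X[:, 2] ** 2
        print(np.round(sobol_first_order(f, d=3), 2))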

  5. Variance-Based Sensitivity Analysis to Support Simulation-Based Design Under Uncertainty

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Opgenoord, Max M. J.; Allaire, Douglas L.; Willcox, Karen E.

    Sensitivity analysis plays a critical role in quantifying uncertainty in the design of engineering systems. A variance-based global sensitivity analysis is often used to rank the importance of input factors, based on their contribution to the variance of the output quantity of interest. However, this analysis assumes that all input variability can be reduced to zero, which is typically not the case in a design setting. Distributional sensitivity analysis (DSA) instead treats the uncertainty reduction in the inputs as a random variable, and defines a variance-based sensitivity index function that characterizes the relative contribution to the output variance as a function of the amount of uncertainty reduction. This paper develops a computationally efficient implementation for the DSA formulation and extends it to include distributions commonly used in engineering design under uncertainty. Application of the DSA method to the conceptual design of a commercial jetliner demonstrates how the sensitivity analysis provides valuable information to designers and decision-makers on where and how to target uncertainty reduction efforts.

  6. Dependence of calculated postshock thermodynamic variables on vibrational equilibrium and input uncertainty

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Campbell, Matthew Frederick; Owen, Kyle G.; Davidson, David F.

    The purpose of this article is to explore the dependence of calculated postshock thermodynamic properties in shock tube experiments upon the vibrational state of the test gas and upon the uncertainties inherent to calculation inputs. This paper first offers a comparison between state variables calculated according to a Rankine–Hugoniot–equation-based algorithm, known as FROSH, and those derived from shock tube experiments on vibrationally nonequilibrated gases. It is shown that incorrect vibrational relaxation assumptions could lead to errors in temperature as large as 8% for 25% oxygen/argon mixtures at 3500 K. Following this demonstration, this article employs the algorithm to show the importance of correct vibrational equilibration assumptions, noting, for instance, that errors in temperature of up to about 2% at 3500 K may be generated for 10% nitrogen/argon mixtures if vibrational relaxation is not treated properly. Lastly, this article presents an extensive uncertainty analysis, showing that postshock temperatures can be calculated with root-of-sum-of-square errors of better than ±1% given sufficiently accurate experimentally measured input parameters.

  7. Dependence of calculated postshock thermodynamic variables on vibrational equilibrium and input uncertainty

    DOE PAGES

    Campbell, Matthew Frederick; Owen, Kyle G.; Davidson, David F.; ...

    2017-01-30

    The purpose of this article is to explore the dependence of calculated postshock thermodynamic properties in shock tube experiments upon the vibrational state of the test gas and upon the uncertainties inherent to calculation inputs. This paper first offers a comparison between state variables calculated according to a Rankine–Hugoniot–equation-based algorithm, known as FROSH, and those derived from shock tube experiments on vibrationally nonequilibrated gases. It is shown that incorrect vibrational relaxation assumptions could lead to errors in temperature as large as 8% for 25% oxygen/argon mixtures at 3500 K. Following this demonstration, this article employs the algorithm to show the importance of correct vibrational equilibration assumptions, noting, for instance, that errors in temperature of up to about 2% at 3500 K may be generated for 10% nitrogen/argon mixtures if vibrational relaxation is not treated properly. Lastly, this article presents an extensive uncertainty analysis, showing that postshock temperatures can be calculated with root-of-sum-of-square errors of better than ±1% given sufficiently accurate experimentally measured input parameters.

  8. Flight dynamics analysis and simulation of heavy lift airships, volume 4. User's guide: Appendices

    NASA Technical Reports Server (NTRS)

    Emmen, R. D.; Tischler, M. B.

    1982-01-01

    This table contains all of the input variables to the three programs. The variables are arranged according to the namelist groups in which they appear in the data files. The program name, subroutine name, definition and, where appropriate, a default input value and any restrictions are listed with each variable. The default input values are user supplied, not generated by the computer. These values remove a specific effect from the calculations, as explained in the table. The phrase "not used" indicates that a variable is not used in the calculations and is listed for identification purposes only. The engineering symbol, where it exists, is listed to assist the user in correlating these inputs with the discussion in the Technical Manual.

  9. Production Function Geometry with "Knightian" Total Product

    ERIC Educational Resources Information Center

    Truett, Dale B.; Truett, Lila J.

    2007-01-01

    Authors of principles and price theory textbooks generally illustrate short-run production using a total product curve that displays first increasing and then diminishing marginal returns to employment of the variable input(s). Although it seems reasonable that a temporary range of increasing returns to variable inputs will likely occur as…

  10. QKD Via a Quantum Wavelength Router Using Spatial Soliton

    NASA Astrophysics Data System (ADS)

    Kouhnavard, M.; Amiri, I. S.; Afroozeh, A.; Jalil, M. A.; Ali, J.; Yupapin, P. P.

    2011-05-01

    A system for continuous variable quantum key distribution (QKD) via a wavelength router is proposed. The Kerr-type nonlinearity of light in the nonlinear microring resonator (NMRR) induces chaotic behavior. In the proposed system, chaotic signals are generated by an optical soliton or Gaussian pulse within an NMRR system. Parameters such as input power, MRR radii and coupling coefficients can be varied and play an important role in determining the results, in which continuous signals are generated and spread over the spectrum. Large-bandwidth optical soliton signals are generated by the input pulse propagating within the MRRs, which form a continuous wavelength or frequency range with large tunable channel capacity. Continuous variable QKD is implemented using localized spatial soliton pulses via a quantum router and networks. The selected optical spatial pulse can be used to establish a secure communication network. Here, the entangled photons generated by the chaotic signals have been analyzed. Continuous entangled photons are generated using a polarization control unit incorporated into the MRRs, as required to provide continuous variable QKD. The results show that such a system for simultaneous continuous variable quantum cryptography can be used in mobile telephone handsets and networks. In this study, frequency bands of 500 MHz and 2.0 GHz and wavelengths of 775 nm, 2,325 nm and 1.55 μm can be obtained for QKD use with input optical soliton and Gaussian beams, respectively.

  11. Modeling development of natural multi-sensory integration using neural self-organisation and probabilistic population codes

    NASA Astrophysics Data System (ADS)

    Bauer, Johannes; Dávila-Chacón, Jorge; Wermter, Stefan

    2015-10-01

    Humans and other animals have been shown to perform near-optimally in multi-sensory integration tasks. Probabilistic population codes (PPCs) have been proposed as a mechanism by which optimal integration can be accomplished. Previous approaches have focussed on how neural networks might produce PPCs from sensory input or perform calculations using them, like combining multiple PPCs. Less attention has been given to the question of how the necessary organisation of neurons can arise and how the required knowledge about the input statistics can be learned. In this paper, we propose a model of learning multi-sensory integration based on an unsupervised learning algorithm in which an artificial neural network learns the noise characteristics of each of its sources of input. Our algorithm borrows from the self-organising map the ability to learn latent-variable models of the input and extends it to learning to produce a PPC approximating a probability density function over the latent variable behind its (noisy) input. The neurons in our network are only required to perform simple calculations and we make few assumptions about input noise properties and tuning functions. We report on a neurorobotic experiment in which we apply our algorithm to multi-sensory integration in a humanoid robot to demonstrate its effectiveness and compare it to human multi-sensory integration on the behavioural level. We also show in simulations that our algorithm performs near-optimally under certain plausible conditions, and that it reproduces important aspects of natural multi-sensory integration on the neural level.

  12. Applications of information theory, genetic algorithms, and neural models to predict oil flow

    NASA Astrophysics Data System (ADS)

    Ludwig, Oswaldo; Nunes, Urbano; Araújo, Rui; Schnitman, Leizer; Lepikson, Herman Augusto

    2009-07-01

    This work introduces a new information-theoretic methodology for choosing variables and their time lags in a prediction setting, particularly when neural networks are used in non-linear modeling. The first contribution of this work is the Cross Entropy Function (XEF), proposed to select input variables and their lags in order to compose the input vector of black-box prediction models. The proposed XEF method is more appropriate than the usually applied Cross Correlation Function (XCF) when the relationship among the input and output signals comes from a non-linear dynamic system. The second contribution is a method that minimizes the Joint Conditional Entropy (JCE) between the input and output variables by means of a Genetic Algorithm (GA). The aim is to take into account the dependence among the input variables when selecting the most appropriate set of inputs for a prediction problem. In short, these methods can be used to assist the selection of input training data that have the necessary information to predict the target data. The proposed methods are applied to a petroleum engineering problem: predicting oil production. Experimental results obtained with a real-world dataset are presented, demonstrating the feasibility and effectiveness of the method.
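
    In the same information-theoretic spirit, though using mutual information from scikit-learn rather than the authors' XEF, candidate input lags can be scored against a nonlinearly related target. Everything below (signal, true lag, noise level) is synthetic.

        import numpy as np
        from sklearn.feature_selection import mutual_info_regression

        rng = np.random.default_rng(0)
        n, max_lag = 2000, 10
        u = rng.standard_normal(n)                                 # candidate input
        y = np.roll(np.tanh(u), 5) + 0.1 * rng.standard_normal(n)  # true lag = 5

        # Build one column per candidate lag and score each with mutual
        # information; correlation-based XCF may miss nonlinear couplings.
        lags = np.arange(1, max_lag + 1)
        X = np.column_stack([np.roll(u, k) for k in lags])[max_lag:]
        mi = mutual_info_regression(X, y[max_lag:], random_state=0)
        print("best lag by mutual information:", lags[np.argmax(mi)])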

  13. A stochastic model of input effectiveness during irregular gamma rhythms.

    PubMed

    Dumont, Grégory; Northoff, Georg; Longtin, André

    2016-02-01

    Gamma-band synchronization has been linked to attention and communication between brain regions, yet the underlying dynamical mechanisms are still unclear. How do the timing and amplitude of inputs to cells that generate an endogenously noisy gamma rhythm affect the network activity and rhythm? How does such "communication through coherence" (CTC) survive in the face of rhythm and input variability? We present a stochastic modelling approach to this question that yields a very fast computation of the effectiveness of inputs to cells involved in gamma rhythms. Our work is partly motivated by recent optogenetic experiments (Cardin et al., Nature, 459(7247), 663-667, 2009) that tested the gamma phase-dependence of network responses by first stabilizing the rhythm with periodic light pulses to the interneurons (I). Our computationally efficient E-I network model of stochastic two-state neurons exhibits finite-size fluctuations. Using the Hilbert transform and the Kuramoto index, we study how the stochastic phase of its gamma rhythm is entrained by external pulses. We then compute how this rhythmic inhibition controls the effectiveness of external input onto pyramidal (E) cells, and how variability shapes the window of firing opportunity. For transferring the time variations of an external input to the E cells, we find a tradeoff between the phase selectivity and depth of rate modulation. We also show that the CTC is sensitive to the jitter in the arrival times of spikes to the E cells, and to the degree of I-cell entrainment. We further find that CTC can occur even if the underlying deterministic system does not oscillate; quasicycle-type rhythms induced by the finite-size noise retain the basic CTC properties. Finally, a resonance analysis confirms the relative importance of I-cell pacing for rhythm generation. Analysis of whole-network behaviour, including computations of synchrony, phase and shifts in excitatory-inhibitory balance, can be further sped up by orders of magnitude using two coupled stochastic differential equations, one for each population. Our work thus yields a fast tool to numerically and analytically investigate CTC in a noisy context. It shows that CTC can be quite vulnerable to rhythm and input variability, which both decrease phase preference.
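
    The entrainment measurement can be sketched in a few lines: extract the instantaneous phase of a noisy gamma-band signal with the Hilbert transform and compute a Kuramoto-style phase-locking index against the periodic drive. The signal below is a synthetic stand-in for network activity, not the authors' E-I model.

        import numpy as np
        from scipy.signal import hilbert

        rng = np.random.default_rng(0)
        fs = 1000.0
        t = np.arange(0.0, 5.0, 1.0 / fs)
        # Noisy ~40 Hz rhythm: a sine with a slowly drifting phase.
        drift = 0.05 * np.cumsum(rng.standard_normal(t.size))
        lfp = np.sin(2 * np.pi * 40.0 * t + drift)

        phase = np.angle(hilbert(lfp))        # instantaneous phase of the rhythm
        drive = 2 * np.pi * 40.0 * t          # phase of periodic 40 Hz pulses
        # Kuramoto-style index: 1 = perfect locking, 0 = no phase preference.
        R = np.abs(np.mean(np.exp(1j * (phase - drive))))
        print(f"phase-locking index R = {R:.2f}")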

  14. The impact of 14nm photomask variability and uncertainty on computational lithography solutions

    NASA Astrophysics Data System (ADS)

    Sturtevant, John; Tejnil, Edita; Buck, Peter D.; Schulze, Steffen; Kalk, Franklin; Nakagawa, Kent; Ning, Guoxiang; Ackmann, Paul; Gans, Fritz; Buergel, Christian

    2013-09-01

    Computational lithography solutions rely upon accurate process models to faithfully represent the imaging system output for a defined set of process and design inputs. These models rely upon the accurate representation of multiple parameters associated with the scanner and the photomask. Many input variables for simulation are based upon designed or recipe-requested values or independent measurements. It is known, however, that certain measurement methodologies, while precise, can have significant inaccuracies. Additionally, there are known errors associated with the representation of certain system parameters. With shrinking total CD control budgets, appropriate accounting for all sources of error becomes more important, and the cumulative consequence of input errors to the computational lithography model can become significant. In this work, we examine, via simulation, the impact of errors in the representation of photomask properties including CD bias, corner rounding, refractive index, thickness, and sidewall angle. The factors that are most critical to represent accurately in the model are cataloged. CD bias values are based on state-of-the-art mask manufacturing data, and changes in the other variables are speculated, highlighting the need for improved metrology and communication between mask and OPC model experts. The simulations are done by ignoring the wafer photoresist model and show the sensitivity of predictions to various model inputs associated with the mask. It is shown that the wafer simulations are very dependent upon the 1D/2D representation of the mask and, for 3D, that the mask sidewall angle is a very sensitive factor influencing simulated wafer CD results.

  15. European nitrogen policies, nitrate in rivers and the use of the INCA model

    NASA Astrophysics Data System (ADS)

    Skeffington, R.

    This paper is concerned with nitrogen inputs to European catchments, how they are likely to change in future, and the implications for the INCA model. National N budgets show that the fifteen countries currently in the European Union (the EU-15 countries) probably have positive N balances - that is, N inputs exceed outputs. The major sources are atmospheric deposition, fertilisers and animal feed, the relative importance of which varies between countries. The magnitude of the fluxes which determine the transport and retention of N in catchments is also very variable in both space and time. The most important of these fluxes are parameterised directly or indirectly in the INCA Model, though it is doubtful whether the present version of the model is flexible enough to encompass short-term (daily) variations in inputs or longer-term (decadal) changes in soil parameters. As an aid to predicting future changes in deposition, international legislation relating to atmospheric N inputs and nitrate in rivers is reviewed briefly. Atmospheric N deposition and fertiliser use are likely to decrease over the next 10 years, but probably not sufficiently to balance national N budgets.

  16. Nonequilibrium air radiation (Nequair) program: User's manual

    NASA Technical Reports Server (NTRS)

    Park, C.

    1985-01-01

    A supplement to the data relating to the calculation of nonequilibrium radiation in flight regimes of aeroassisted orbital transfer vehicles contains the listings of the computer code NEQAIR (Nonequilibrium Air Radiation), its primary input data, and an explanation of the user-supplied input variables. The user-supplied input variables are the thermodynamic variables of air at a given point, i.e., number densities of various chemical species, translational temperatures of heavy particles and electrons, and vibrational temperature. These thermodynamic variables do not necessarily have to be in thermodynamic equilibrium. The code calculates emission and absorption characteristics of air under these given conditions.

  17. Optimization of a GO2/GH2 Swirl Coaxial Injector Element

    NASA Technical Reports Server (NTRS)

    Tucker, P. Kevin; Shyy, Wei; Vaidyanathan, Rajkumar

    1999-01-01

    An injector optimization methodology, method i, is used to investigate optimal design points for a gaseous oxygen/gaseous hydrogen (GO2/GH2) swirl coaxial injector element. The element is optimized in terms of design variables such as fuel pressure drop ΔP_f, oxidizer pressure drop ΔP_o, combustor length L_comb, and full-cone swirl angle θ, for a given mixture ratio and chamber pressure. Dependent variables such as energy release efficiency ERE, wall heat flux Q_w, injector heat flux Q_inj, relative combustor weight W_rel, and relative injector cost C_rel are calculated and then correlated with the design variables. An empirical design methodology is used to generate these responses for 180 combinations of input variables. Method i is then used to generate response surfaces for each dependent variable. Desirability functions based on dependent variable constraints are created and used to facilitate development of composite response surfaces representing some, or all, of the five dependent variables in terms of the input variables. Two examples illustrating the utility and flexibility of method i are discussed in detail. First, joint response surfaces are constructed by sequentially adding dependent variables. Optimum designs are identified after addition of each variable, and the effect each variable has on the design is shown. This stepwise demonstration also highlights the importance of including variables such as weight and cost early in the design process. Secondly, using the composite response surface that includes all five dependent variables, unequal weights are assigned to emphasize certain variables relative to others. Here, method i is used to enable objective trade studies on design issues such as component life and thrust-to-weight ratio.

  18. Harmonize input selection for sediment transport prediction

    NASA Astrophysics Data System (ADS)

    Afan, Haitham Abdulmohsin; Keshtegar, Behrooz; Mohtar, Wan Hanna Melini Wan; El-Shafie, Ahmed

    2017-09-01

    In this paper, three modeling approaches using a Neural Network (NN), the Response Surface Method (RSM) and a response surface method based on Global Harmony Search (GHS) are applied to predict the daily time series suspended sediment load. Generally, the input variables for forecasting the suspended sediment load are manually selected based on the maximum correlations of input variables in the modeling approaches based on NN and RSM. The RSM is improved to select the input variables by using the error terms of the training data based on the GHS, termed the response surface method with global harmony search (RSM-GHS) modeling method. The second-order polynomial function with cross terms is applied to calibrate the time series suspended sediment load with three, four and five input variables in the proposed RSM-GHS. The linear, square and cross terms of twenty input variables of antecedent values of suspended sediment load and water discharge are investigated to achieve the best predictions of the RSM based on the GHS method. The performances of the NN, RSM and proposed RSM-GHS, including both accuracy and simplicity, are compared through several comparative predicted and error statistics. The results illustrated that the proposed RSM-GHS is as uncomplicated as the RSM but performed better, with fewer errors and better correlation observed (R = 0.95, MAE = 18.09 ton/day, RMSE = 25.16 ton/day) compared to the ANN (R = 0.91, MAE = 20.17 ton/day, RMSE = 33.09 ton/day) and RSM (R = 0.91, MAE = 20.06 ton/day, RMSE = 31.92 ton/day) for all types of input variables.
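
    The calibration core of the RSM, a second-order polynomial with cross terms fitted by least squares, can be sketched as follows. The synthetic predictors stand in for antecedent discharge and sediment-load values; no GHS-based input selection is attempted here.

        import numpy as np
        from sklearn.linear_model import LinearRegression
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import PolynomialFeatures

        rng = np.random.default_rng(0)
        n = 300
        X = rng.lognormal(mean=2.0, sigma=0.5, size=(n, 3))  # antecedent inputs
        y = 0.4 * X[:, 0] * X[:, 1] + 0.2 * X[:, 2] ** 2 + rng.normal(0.0, 1.0, n)

        # degree=2 generates the linear, square and cross terms of the inputs.
        rsm = make_pipeline(PolynomialFeatures(degree=2), LinearRegression())
        rsm.fit(X, y)
        print("R^2 on training data:", round(rsm.score(X, y), 3))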

  19. The extraction of simple relationships in growth factor-specific multiple-input and multiple-output systems in cell-fate decisions by backward elimination PLS regression.

    PubMed

    Akimoto, Yuki; Yugi, Katsuyuki; Uda, Shinsuke; Kudo, Takamasa; Komori, Yasunori; Kubota, Hiroyuki; Kuroda, Shinya

    2013-01-01

    Cells use common signaling molecules for the selective control of downstream gene expression and cell-fate decisions. The relationship between signaling molecules and downstream gene expression and cellular phenotypes is a multiple-input and multiple-output (MIMO) system and is difficult to understand due to its complexity. For example, it has been reported that, in PC12 cells, different types of growth factors activate MAP kinases (MAPKs) including ERK, JNK, and p38, and CREB, for selective protein expression of immediate early genes (IEGs) such as c-FOS, c-JUN, EGR1, JUNB, and FOSB, leading to cell differentiation, proliferation and cell death; however, how multiple inputs such as MAPKs and CREB regulate multiple outputs such as expression of the IEGs and cellular phenotypes remains unclear. To address this issue, we employed a statistical method called partial least squares (PLS) regression, which involves a reduction of the dimensionality of the inputs and outputs into latent variables and a linear regression between these latent variables. We measured 1,200 data points for MAPKs and CREB as the inputs and 1,900 data points for IEGs and cellular phenotypes as the outputs, and we constructed the PLS model from these data. The PLS model highlighted the complexity of the MIMO system and growth factor-specific input-output relationships of cell-fate decisions in PC12 cells. Furthermore, to reduce the complexity, we applied a backward elimination method to the PLS regression, in which 60 input variables were reduced to 5 variables, including the phosphorylation of ERK at 10 min, CREB at 5 min and 60 min, AKT at 5 min and JNK at 30 min. The simple PLS model with only 5 input variables demonstrated a predictive ability comparable to that of the full PLS model. The 5 input variables effectively extracted the growth factor-specific simple relationships within the MIMO system in cell-fate decisions in PC12 cells.
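
    A minimal sketch of backward-elimination PLS in the spirit described above: greedily drop the input whose removal least degrades cross-validated performance, stopping when every removal costs accuracy. The data are synthetic and the stopping tolerance is an assumption of this example.

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(0)
        n, p = 120, 12
        X = rng.standard_normal((n, p))          # e.g., signaling time points
        y = X[:, 0] - 0.8 * X[:, 3] + 0.1 * rng.standard_normal(n)

        def cv_score(cols):
            return cross_val_score(PLSRegression(n_components=2),
                                   X[:, cols], y, cv=5).mean()

        keep = list(range(p))
        while len(keep) > 2:
            full = cv_score(keep)
            best, drop = max((cv_score([k for k in keep if k != j]), j)
                             for j in keep)
            if best < full - 0.01:               # every removal hurts: stop
                break
            keep.remove(drop)
        print("retained input variables:", keep)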

  20. Temperature variability is a key component in accurately forecasting the effects of climate change on pest phenology.

    PubMed

    Merrill, Scott C; Peairs, Frank B

    2017-02-01

    Models describing the effects of climate change on arthropod pest ecology are needed to help mitigate and adapt to forthcoming changes. Challenges arise because climate data are at resolutions that do not readily synchronize with arthropod biology. Here we explain how multiple sources of climate and weather data can be synthesized to quantify the effects of climate change on pest phenology. Predictions of phenological events differ substantially between models that incorporate scale-appropriate temperature variability and models that do not. As an illustrative example, we predicted adult emergence of a pest of sunflower, the sunflower stem weevil Cylindrocopturus adspersus (LeConte). Predictions of the timing of phenological events differed by an average of 11 days between models with different temperature variability inputs. Moreover, as temperature variability increases, developmental rates accelerate. Our work details a phenological modeling approach intended to help develop tools to plan for and mitigate the effects of climate change. Results show that selection of scale-appropriate temperature data is of more importance than selecting a climate change emission scenario. Predictions derived without appropriate temperature variability inputs will likely result in substantial phenological event miscalculations. Additionally, results suggest that increased temperature instability will lead to accelerated pest development. © 2016 Society of Chemical Industry.
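
    The interaction between variability and the threshold nonlinearity of degree-day models shows up even in a toy sketch: because temperatures below the developmental base contribute nothing, adding zero-mean daily variability raises accumulated degree-days and advances the predicted event. The base temperature, emergence threshold and climatology below are invented for illustration.

        import numpy as np

        def degree_days(tmean, base=10.0):
            # Daily heat accumulation above the developmental base temperature.
            return np.maximum(np.asarray(tmean) - base, 0.0)

        rng = np.random.default_rng(0)
        days = np.arange(120)
        climatology = 12.0 + 10.0 * np.sin(np.pi * days / 180.0)  # mean temp (C)
        threshold = 350.0   # hypothetical degree-days needed for adult emergence

        for sigma in (0.0, 3.0, 6.0):        # increasing daily variability
            temps = climatology + sigma * rng.standard_normal(days.size)
            emergence = int(np.argmax(np.cumsum(degree_days(temps)) >= threshold))
            print(f"daily sd = {sigma:.0f} C -> emergence on day {emergence}")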

  1. Evapotranspiration from nonuniform surfaces - A first approach for short-term numerical weather prediction

    NASA Technical Reports Server (NTRS)

    Wetzel, Peter J.; Chang, Jy-Tai

    1988-01-01

    Observations of surface heterogeneity of soil moisture from scales of meters to hundreds of kilometers are discussed, and a relationship between grid element size and soil moisture variability is presented. An evapotranspiration model is presented which accounts for the variability of soil moisture, standing surface water, and vegetation internal and stomatal resistance to moisture flow from the soil. The mean values and standard deviations of these parameters are required as input to the model. Tests of this model against field observations are reported, and extensive sensitivity tests are presented which explore the importance of including subgrid-scale variability in an evapotranspiration model.

  2. Using simulated 3D surface fuelbeds and terrestrial laser scan data to develop inputs to fire behavior models

    Treesearch

    Eric Rowell; E. Louise Loudermilk; Carl Seielstad; Joseph O' Brien

    2016-01-01

    Understanding fine-scale variability in understory fuels is increasingly important as physics-based fire behavior models drive needs for higher-resolution data. Describing fuelbeds three-dimensionally is critical in determining the vertical and horizontal distributions of fuel elements and their mass, especially in frequently burned pine ecosystems where fine-scale...

  3. Preposition Accuracy on a Sentence Repetition Task in School Age Spanish-English Bilinguals

    ERIC Educational Resources Information Center

    Taliancich-Klinger, Casey L.; Bedore, Lisa M.; Pena, Elizabeth D.

    2018-01-01

    Preposition knowledge is important for academic success. The goal of this project was to examine how different variables such as English input and output, Spanish preposition score, mother education level, and age of English exposure (AoEE) may have played a role in children's preposition knowledge in English. 148 Spanish-English children between…

  4. Where did all the Nitrogen go? Use of Watershed-Scale Budgets to Quantify Nitrogen Inputs, Storages, and Losses.

    NASA Astrophysics Data System (ADS)

    Boyer, E. W.; Goodale, C. L.; Howarth, R. W.; VanBreemen, N.

    2001-12-01

    Inputs of nitrogen (N) to aquatic and terrestrial ecosystems have increased during recent decades, primarily from the production and use of fertilizers, the planting of N-fixing crops, and the combustion of fossil fuels. We present mass-balance budgets of N for 16 catchments along a latitudinal profile from Maine to Virginia, which encompass a range of climatic variability and are major drainages to the coast of the North Atlantic Ocean. We quantify inputs of N to each catchment from atmospheric deposition, application of nitrogenous fertilizers, biological nitrogen fixation by crops and trees, and import of N in agricultural products (food and feed). We relate these input terms to losses of N (total, organic, and nitrate) in streamflow. The importance of the relative N sources to N exports varies widely by watershed and is related to land use. Atmospheric deposition was the largest source of N to the forested catchments of northern New England (e.g., Penobscot and Kennebec); import of N in food was the largest source of N to the more populated regions of southern New England (e.g., Charles and Blackstone); and agricultural inputs were the dominant N sources in the Mid-Atlantic region (e.g., Schuylkill and Potomac). In all catchments, N inputs greatly exceed outputs, implying additional loss terms (e.g., denitrification or volatilization and transport of animal wastes), or changes in internal N stores (e.g., accumulation of N in vegetation, soil, or groundwater). We use our N budgets and several modeling approaches to constrain estimates of the fate of this excess N, including estimates of N storage in accumulating woody biomass, N losses due to in-stream denitrification, and more. This work is an effort of the SCOPE Nitrogen Project.

  5. The effect of long-term changes in plant inputs on soil carbon stocks

    NASA Astrophysics Data System (ADS)

    Georgiou, K.; Li, Z.; Torn, M. S.

    2017-12-01

    Soil organic carbon (SOC) is the largest actively-cycling terrestrial reservoir of C and an integral component of thriving natural and managed ecosystems. C input interventions (e.g., litter removal or organic amendments) are common in managed landscapes and present an important decision for maintaining healthy soils in sustainable agriculture and forestry. Furthermore, climate and land-cover change can also affect the amount of plant C inputs that enter the soil through changes in plant productivity, allocation, and rooting depth. Yet, the processes that dictate the response of SOC to such changes in C inputs are poorly understood and inadequately represented in predictive models. Long-term litter manipulations are an invaluable resource for exploring key controls of SOC storage and validating model representations. Here we explore the response of SOC to long-term changes in plant C inputs across a range of biomes and soil types. We synthesize and analyze data from long-term litter manipulation field experiments, and focus our meta-analysis on changes to total SOC stocks, microbial biomass carbon, and mineral-associated (`protected') carbon pools and explore the relative contribution of above- versus below-ground C inputs. Our cross-site data comparison reveals that divergent SOC responses are observed between forest sites, particularly for treatments that increase C inputs to the soil. We explore trends among key variables (e.g., microbial biomass to SOC ratios) that inform soil C model representations. The assembled dataset is an important benchmark for evaluating process-based hypotheses and validating divergent model formulations.

  6. Infant perceptual development for faces and spoken words: An integrated approach

    PubMed Central

    Watson, Tamara L; Robbins, Rachel A; Best, Catherine T

    2014-01-01

    There are obvious differences between recognizing faces and recognizing spoken words or phonemes that might suggest development of each capability requires different skills. Recognizing faces and perceiving spoken language, however, are in key senses extremely similar endeavors. Both perceptual processes are based on richly variable, yet highly structured input from which the perceiver needs to extract categorically meaningful information. This similarity could be reflected in the perceptual narrowing that occurs within the first year of life in both domains. We take the position that the perceptual and neurocognitive processes by which face and speech recognition develop are based on a set of common principles. One common principle is the importance of systematic variability in the input as a source of information rather than noise. Experience of this variability leads to perceptual tuning to the critical properties that define individual faces or spoken words versus their membership in larger groupings of people and their language communities. We argue that parallels can be drawn directly between the principles responsible for the development of face and spoken language perception. PMID:25132626

  7. Electrical and Optical Activation of Mesoscale Neural Circuits with Implications for Coding.

    PubMed

    Millard, Daniel C; Whitmire, Clarissa J; Gollnick, Clare A; Rozell, Christopher J; Stanley, Garrett B

    2015-11-25

    Artificial activation of neural circuitry through electrical microstimulation and optogenetic techniques is important for both scientific discovery of circuit function and for engineered approaches to alleviate various disorders of the nervous system. However, evidence suggests that neural activity generated by artificial stimuli differs dramatically from normal circuit function, in terms of both the local neuronal population activity at the site of activation and the propagation to downstream brain structures. The precise nature of these differences and the implications for information processing remain unknown. Here, we used voltage-sensitive dye imaging of primary somatosensory cortex in the anesthetized rat in response to deflections of the facial vibrissae and electrical or optogenetic stimulation of thalamic neurons that project directly to the somatosensory cortex. Although the different inputs produced responses that were similar in terms of the average cortical activation, the variability of the cortical response was strikingly different for artificial versus sensory inputs. Furthermore, electrical microstimulation resulted in highly unnatural spatial activation of cortex, whereas optical input resulted in spatial cortical activation that was similar to that induced by sensory inputs. A thalamocortical network model suggested that observed differences could be explained by differences in the way in which artificial and natural inputs modulate the magnitude and synchrony of population activity. Finally, the variability structure in the response for each case strongly influenced the optimal inputs for driving the pathway from the perspective of an ideal observer of cortical activation when considered in the context of information transmission. Artificial activation of neural circuitry through electrical microstimulation and optogenetic techniques is important for both scientific discovery and clinical translation. However, neural activity generated by these artificial means differs dramatically from normal circuit function, both locally and in the propagation to downstream brain structures. The precise nature of these differences and the implications for information processing remain unknown. The significance of this work is in quantifying the differences, elucidating likely mechanisms underlying the differences, and determining the implications for information processing. Copyright © 2015 the authors 0270-6474/15/3515702-14$15.00/0.

  8. Environmental assessment of metal exposure to corals living in Castle Harbour, Bermuda

    USGS Publications Warehouse

    Prouty, N.G.; Goodkin, N.F.; Jones, R.; Lamborg, C.H.; Storlazzi, C.D.; Hughen, K.A.

    2013-01-01

    Environmental contamination in Castle Harbour, Bermuda, has been linked to the dissolution and leaching of contaminants from the adjacent marine landfill. This study expands the evidence for environmental impact of leachate from the landfill by quantitatively demonstrating elevated metal uptake over the last 30 years in corals growing in Castle Harbour. Coral Pb/Ca, Zn/Ca and Mn/Ca ratios and total Hg concentrations are elevated relative to an adjacent control site in John Smith's Bay. The temporal variability in the Castle Harbour coral records suggests that while the landfill has increased in size over the last 35 years, the dominant input of metals is through periodic leaching of contaminants from the municipal landfill and surrounding sediment. Elevated contaminants in the surrounding sediment suggest that resuspension is an important transport medium for transferring heavy metals to corals. Increased winds, particularly during the 1990s, were accompanied by higher coral metal composition at Castle Harbour. Coupled with wind-induced resuspension, interannual changes in sea level within the Harbour can lead to increased bioavailability of sediment-bound metals and subsequent coral metal assimilation. At John Smith's Bay, large scale convective mixing may be driving interannual metal variability in the coral record rather than impacts from land-based activities. Results from this study provide important insights into the coupling of natural variability and anthropogenic input of contaminants to the nearshore environment.

  9. Peer Educators and Close Friends as Predictors of Male College Students' Willingness to Prevent Rape

    ERIC Educational Resources Information Center

    Stein, Jerrold L.

    2007-01-01

    Astin's (1977, 1991, 1993) input-environment-outcome (I-E-O) model provided a conceptual framework for this study which measured 156 male college students' willingness to prevent rape (outcome variable). Predictor variables included personal attitudes (input variable), perceptions of close friends' attitudes toward rape and rape prevention…

  10. The Effects of a Change in the Variability of Irrigation Water

    NASA Astrophysics Data System (ADS)

    Lyon, Kenneth S.

    1983-10-01

    This paper examines the short-run effects upon several variables of an increase in the variability of an input. The measure of an increase in the variability is the "mean preserving spread" suggested by Rothschild and Stiglitz (1970). The variables examined are real income (utility), expected profits, expected output, the quantity used of the controllable input, and the shadow price of the stochastic input. Four striking features of the results follow: (1) The concepts that have been useful in summarizing deterministic comparative static results are nearly absent when an input is stochastic. (2) Most of the signs of the partial derivatives depend upon more than concavity of the utility and production functions. (3) If the utility function is not "too" risk averse, then the risk-neutral results hold for the risk-aversion case. (4) If the production function is Cobb-Douglas, then definite results are achieved if the utility function is linear or if the "degree of risk-aversion" is "small."

  11. Optimal allocation of testing resources for statistical simulations

    NASA Astrophysics Data System (ADS)

    Quintana, Carolina; Millwater, Harry R.; Singh, Gulshan; Golden, Patrick

    2015-07-01

    Statistical estimates from simulation involve uncertainty caused by the variability in the input random variables due to limited data. Allocating resources to obtain more experimental data of the input variables to better characterize their probability distributions can reduce the variance of statistical estimates. The methodology proposed determines the optimal number of additional experiments required to minimize the variance of the output moments given single or multiple constraints. The method uses multivariate t-distribution and Wishart distribution to generate realizations of the population mean and covariance of the input variables, respectively, given an amount of available data. This method handles independent and correlated random variables. A particle swarm method is used for the optimization. The optimal number of additional experiments per variable depends on the number and variance of the initial data, the influence of the variable in the output function and the cost of each additional experiment. The methodology is demonstrated using a fretting fatigue example.
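
    A sketch of the sampling step described above: given a small data set, draw realizations of the population mean from a multivariate t distribution and of the covariance from a Wishart distribution, then propagate them through an output function to see how much its statistics vary. The distributional choices follow the abstract; the data and output function are toys.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(0)
        data = rng.multivariate_normal([10.0, 2.0],
                                       [[4.0, 0.6], [0.6, 0.25]], size=15)
        n, d = data.shape
        xbar, S = data.mean(axis=0), np.cov(data, rowvar=False)

        # Realizations of the population mean and covariance, reflecting
        # how little experimental data is available.
        mean_draws = stats.multivariate_t(loc=xbar, shape=S / n,
                                          df=n - d).rvs(size=1000, random_state=0)
        cov_draws = stats.wishart(df=n - 1, scale=S / (n - 1)).rvs(
            size=1000, random_state=0)

        # Variability of a toy output statistic induced by limited data.
        g = lambda m: m[..., 0] * m[..., 1]
        print("sd of output mean estimate:", round(float(g(mean_draws).std()), 3))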

  12. Gross Anatomical Study of the Nerve Supply of Genitourinary Structures in Female Mongrel Hound Dogs

    PubMed Central

    Gomez-Amaya, S. M.; Ruggieri, M. R.; Arias Serrato, S. A.; Massicotte, V. S.; Barbe, M. F.

    2014-01-01

    Anatomical variations in lumbosacral plexus or nerves to genitourinary structures in dogs are underdescribed, despite their importance during surgery and potential contributions to neuromuscular syndromes. Gross dissection of 16 female mongrel hound dogs showed frequent variations in lumbosacral plexus classification, sympathetic ganglia, ventral rami input to nerves innervating genitourinary structures and pudendal nerve (PdN) branching. Lumbosacral plexus classification types were mixed, rather than pure, in 13 (82%) of dogs. The genitofemoral nerve (GFN) originated from ventral ramus of L4 in 67% of nerves, differing from the expected L3. Considerable variability was seen in ventral rami origins of pelvic (PN) and Pd nerves, with new findings of L7 contributions to PN, joining S1 and S2 input (23% of sides in 11 dogs) or S1–S3 input (5%), and to PdN, joining S1–S2, unilaterally, in one dog. L7 input was confirmed using retrograde dye tracing methods. The PN also received CG1 contributions, bilaterally, in one dog. The PdN branched unusually in two dogs. Lumbosacral sympathetic ganglia had variant intra-, inter- and multisegmental connectivity in 6 (38%). Thus, the anatomy of mongrel dogs had higher variability than previously described for purebred dogs. Knowledge of this variant innervation during surgery could aid in the preservation of nerves and reduce risk of urinary and sexual dysfunctions. PMID:24730986

  13. Spike Triggered Covariance in Strongly Correlated Gaussian Stimuli

    PubMed Central

    Aljadeff, Johnatan; Segev, Ronen; Berry, Michael J.; Sharpee, Tatyana O.

    2013-01-01

    Many biological systems perform computations on inputs that have very large dimensionality. Determining the relevant input combinations for a particular computation is often key to understanding its function. A common way to find the relevant input dimensions is to examine the difference in variance between the input distribution and the distribution of inputs associated with certain outputs. In systems neuroscience, the corresponding method is known as spike-triggered covariance (STC). This method has been highly successful in characterizing relevant input dimensions for neurons in a variety of sensory systems. So far, most studies used the STC method with weakly correlated Gaussian inputs. However, it is also important to use this method with inputs that have long range correlations typical of the natural sensory environment. In such cases, the stimulus covariance matrix has one (or more) outstanding eigenvalues that cannot be easily equalized because of sampling variability. Such outstanding modes interfere with analyses of statistical significance of candidate input dimensions that modulate neuronal outputs. In many cases, these modes obscure the significant dimensions. We show that the sensitivity of the STC method in the regime of strongly correlated inputs can be improved by an order of magnitude or more. This can be done by evaluating the significance of dimensions in the subspace orthogonal to the outstanding mode(s). Analyzing the responses of retinal ganglion cells probed with Gaussian noise, we find that taking into account outstanding modes is crucial for recovering relevant input dimensions for these neurons. PMID:24039563
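
    The subspace trick is easy to demonstrate on synthetic data. In the sketch below (all parameters invented), the stimuli carry one outstanding covariance mode, and candidate dimensions are assessed only after projecting the spike-triggered covariance difference onto the subspace orthogonal to that mode.

```python
import numpy as np

rng = np.random.default_rng(1)
dim, n = 20, 50_000

# Strongly correlated Gaussian stimuli: one outstanding covariance mode u.
u = rng.standard_normal(dim); u /= np.linalg.norm(u)
C = np.eye(dim) + 9.0 * np.outer(u, u)
stim = rng.multivariate_normal(np.zeros(dim), C, size=n)

# Toy neuron: spiking driven by the variance along a hidden filter k.
k = rng.standard_normal(dim); k /= np.linalg.norm(k)
p_spike = 1.0 / (1.0 + np.exp(2.0 - (stim @ k) ** 2))
spikes = rng.random(n) < p_spike

# Spike-triggered covariance difference.
C_stim = np.cov(stim, rowvar=False)
dC = np.cov(stim[spikes], rowvar=False) - C_stim

# Project out the outstanding stimulus mode before diagonalizing, so
# its sampling variability cannot masquerade as a relevant dimension.
evals, evecs = np.linalg.eigh(C_stim)
top = evecs[:, -1]                       # outstanding mode
P = np.eye(dim) - np.outer(top, top)
w, v = np.linalg.eigh(P @ dC @ P)

candidate = v[:, np.argmax(np.abs(w))]   # extreme-eigenvalue dimension
print("overlap with true filter:", abs(candidate @ k).round(3))
```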

  14. Hydrology and grazing jointly control a large-river food web.

    PubMed

    Strayer, David L; Pace, Michael L; Caraco, Nina F; Cole, Jonathan J; Findlay, Stuart E G

    2008-01-01

    Inputs of fresh water and grazing both can control aquatic food webs, but little is known about the relative strengths of and interactions between these controls. We use long-term data on the food web of the freshwater Hudson River estuary to investigate the importance of, and interactions between, inputs of fresh water and grazing by the invasive zebra mussel (Dreissena polymorpha). Both freshwater inputs and zebra mussel grazing have strong, pervasive effects on the Hudson River food web. High flow tended to reduce population size in most parts of the food web. High grazing also reduced populations in the planktonic food web, but increased populations in the littoral food web, probably as a result of increases in water clarity. The influences of flow and zebra mussel grazing were roughly equal (i.e., within a factor of 2) for many variables over the period of our study. Zebra mussel grazing made phytoplankton less sensitive to freshwater inputs, but water clarity and the littoral food web more sensitive to freshwater inputs, showing that interactions between these two controlling factors can be strong and varied.

  15. Wildfire extent and severity correlated with annual streamflow distribution and timing in the Pacific Northwest, USA (1984-2005)

    Treesearch

    Zachary A. Holden; Charles H. Luce; Michael A. Crimmins; Penelope Morgan

    2011-01-01

    Climate change effects on wildfire occurrence have been attributed primarily to increases in temperatures causing earlier snowpack ablation and longer fire seasons. Variability in precipitation is also an important control on snowpack accumulation and, therefore, on timing of meltwater inputs. We evaluate the correlation of total area burned and area burned severely to...

  16. Economic Risk of Bee Pollination in Maine Wild Blueberry, Vaccinium angustifolium.

    PubMed

    Asare, Eric; Hoshide, Aaron K; Drummond, Francis A; Criner, George K; Chen, Xuan

    2017-10-01

    Recent pollinator declines highlight the importance of evaluating economic risk of agricultural systems heavily dependent on rented honey bees or native pollinators. Our study analyzed variability of native bees and honey bees, and the risks these pose to profitability of Maine's wild blueberry industry. We used cross-sectional data from organic, low-, medium-, and high-input wild blueberry producers in 1993, 1997-1998, 2005-2007, and from 2011 to 2015 (n = 162 fields). Data included native and honey bee densities (count/m2/min) and honey bee stocking densities (hives/ha). Blueberry fruit set, yield, and honey bee hive stocking density models were estimated. Fruit set is impacted about 1.6 times more by native bees than honey bees on a per bee basis. Fruit set significantly explained blueberry yield. Honey bee stocking density in fields predicted honey bee foraging densities. These three models were used in enterprise budgets for all four systems from on-farm surveys of 23 conventional and 12 organic producers (2012-2013). These budgets formed the basis of Monte Carlo simulations of production and profit. Stochastic dominance of net farm income (NFI) cumulative distribution functions revealed that if organic yields are high enough (2,345 kg/ha), organic systems are economically preferable to conventional systems. However, if organic yields are lower (724 kg/ha), it is riskier with higher variability of crop yield and NFI. Although medium-input systems are stochastically dominant with lower NFI variability compared with other conventional systems, the high-input system breaks even with the low-input system if honey bee hive rental prices triple in the future. © The Authors 2017. Published by Oxford University Press on behalf of Entomological Society of America.
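
    A toy version of the risk comparison might look like the sketch below; the enterprise-budget numbers are placeholders rather than the paper's survey data, and only a first-order stochastic dominance check is shown.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 10_000

# Placeholder budget parameters (not the paper's data):
# stochastic yield (kg/ha), price ($/kg), per-ha costs ($).
def nfi(mean_yield, sd_yield, price, cost):
    y = np.clip(rng.normal(mean_yield, sd_yield, n), 0, None)
    return price * y - cost

nfi_organic = nfi(2345, 900, 4.40, 5200)   # higher price, higher variance
nfi_conv    = nfi(4000, 700, 1.90, 4100)

# First-order stochastic dominance: F_A(x) <= F_B(x) for all x means
# system A is preferred by any profit-seeking decision maker.
grid = np.linspace(min(nfi_organic.min(), nfi_conv.min()),
                   max(nfi_organic.max(), nfi_conv.max()), 200)
F_org = (nfi_organic[:, None] <= grid).mean(axis=0)
F_conv = (nfi_conv[:, None] <= grid).mean(axis=0)

if (F_org <= F_conv).all():
    print("organic first-order dominates conventional")
elif (F_conv <= F_org).all():
    print("conventional first-order dominates organic")
else:
    print("no first-order dominance; compare higher-order criteria")
```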

  17. Social modulation of learned behavior by dopamine in the basal ganglia: insights from songbirds.

    PubMed

    Leblois, Arthur

    2013-06-01

    Dysfunction of the dopaminergic system leads to motor, cognitive, and motivational symptoms in brain disorders such as Parkinson's disease. The basal ganglia (BG) are involved in sensorimotor learning and receive a strong dopaminergic signal, shown to play an important role in social interactions. The function of the dopaminergic input to the BG in the integration of social cues during sensorimotor learning remains, however, largely unexplored. Songbirds use learned vocalizations to communicate during courtship and aggressive behaviors. Like language learning in humans, song learning strongly depends on social interactions. In songbirds, a specialized BG-thalamo-cortical loop devoted to song is particularly tractable for elucidating the signals carried by dopamine in the BG, and the function of dopamine signaling in mediating social cues during skill learning and execution. Here, I review experimental findings uncovering the physiological effects and function of the dopaminergic signal in the songbird BG, in light of our knowledge of the BG-dopamine interactions in mammals. Interestingly, the compact nature of the striato-pallidal circuits in birds led to new insight on the physiological effects of the dopaminergic input on the BG network as a whole. In singing birds, D1-like receptor agonist and antagonist can modulate the spectral variability of syllables bi-directionally, suggesting that social context-dependent changes in spectral variability are triggered by dopaminergic input through D1-like receptors. As variability is crucial for exploration during motor learning, but must be reduced after learning to optimize performance, I propose that the dopaminergic input to the BG could be responsible for the social-dependent regulation of the exploration/exploitation balance in birdsong, and possibly in learned skills in other vertebrates. Copyright © 2012 Elsevier Ltd. All rights reserved.

  18. Dimensional reduction in sensorimotor systems: A framework for understanding muscle coordination of posture

    PubMed Central

    Ting, Lena H.

    2014-01-01

    The simple act of standing up is an important and essential motor behavior that most humans and animals achieve with ease. Yet, maintaining standing balance involves complex sensorimotor transformations that must continually integrate a large array of sensory inputs and coordinate multiple motor outputs to muscles throughout the body. Multiple, redundant local sensory signals are integrated to form an estimate of a few global, task-level variables important to postural control, such as body center of mass position and body orientation with respect to Earth-vertical. Evidence suggests that a limited set of muscle synergies, reflecting preferential sets of muscle activation patterns, are used to move task variables such as center of mass position in a predictable direction following a postural perturbation. We propose a hierarchical feedback control system that allows the nervous system the simplicity of performing goal-directed computations in task-variable space, while maintaining the robustness afforded by redundant sensory and motor systems. We predict that modulation of postural actions occurs in task-variable space, and in the associated transformations between the low-dimensional task-space and high-dimensional sensor and muscle spaces. Development of neuromechanical models that reflect these neural transformations between low- and high-dimensional representations will reveal the organizational principles and constraints underlying sensorimotor transformations for balance control, and perhaps motor tasks in general. This framework and accompanying computational models could be used to formulate specific hypotheses about how specific sensory inputs and motor outputs are generated and altered following neural injury, sensory loss, or rehabilitation. PMID:17925254
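
    The low-dimensional decomposition idea can be illustrated with non-negative matrix factorization, one common way muscle synergies are extracted (the abstract does not commit to a specific algorithm); the EMG data below are synthetic.

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)

# Synthetic EMG: 16 muscles x 500 time samples generated from
# 4 underlying synergies (the low-dimensional structure to recover).
n_muscles, n_samples, n_syn = 16, 500, 4
W_true = rng.random((n_muscles, n_syn))          # muscle weightings
H_true = rng.random((n_syn, n_samples))          # activation patterns
emg = W_true @ H_true + 0.05 * rng.random((n_muscles, n_samples))

# Extract synergies: EMG ~ W @ H with all entries non-negative.
model = NMF(n_components=n_syn, init="nndsvda", max_iter=1000,
            random_state=0)
W = model.fit_transform(emg)   # columns: muscle synergies
H = model.components_          # rows: task-level activation commands

# Variance accounted for: how well 4 dimensions explain 16 channels.
vaf = 1 - np.linalg.norm(emg - W @ H) ** 2 / np.linalg.norm(emg) ** 2
print(f"VAF with {n_syn} synergies: {vaf:.3f}")
```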

  19. Probabilistic Structural Evaluation of Uncertainties in Radiator Sandwich Panel Design

    NASA Technical Reports Server (NTRS)

    Kuguoglu, Latife; Ludwiczak, Damian

    2006-01-01

    The Jupiter Icy Moons Orbiter (JIMO) Space System is part of NASA's Prometheus Program. As part of the JIMO engineering team at NASA Glenn Research Center, the structural design of the JIMO Heat Rejection Subsystem (HRS) is evaluated. An initial goal of this study was to perform sensitivity analyses to determine the relative importance of the input variables on the structural responses of the radiator panel. The desire was to let the sensitivity analysis information identify the important parameters. The probabilistic analysis methods illustrated here support this objective. The probabilistic structural performance evaluation of a HRS radiator sandwich panel was performed. The radiator panel structural performance was assessed in the presence of uncertainties in the loading, fabrication process variables, and material properties. The stress and displacement contours of the deterministic structural analysis at mean probability were computed and the results presented. This was followed by a probabilistic evaluation to determine the effect of the primitive variables on the radiator panel structural performance. Based on uncertainties in material properties, structural geometry and loading, the results of the displacement and stress analysis are used as an input file for the probabilistic analysis of the panel. The sensitivity of the structural responses, such as maximum displacement and maximum tensile and compressive stresses of the facesheet in the x and y directions and maximum von Mises stresses of the tube, to the loading and design variables is determined under the boundary condition where all edges of the radiator panel are pinned. Based on this study, design-critical material and geometric parameters of the considered sandwich panel are identified.

  20. Prediction of Film Cooling Effectiveness on a Gas Turbine Blade Leading Edge Using ANN and CFD

    NASA Astrophysics Data System (ADS)

    Dávalos, J. O.; García, J. C.; Urquiza, G.; Huicochea, A.; De Santiago, O.

    2018-05-01

    In this work, the area-averaged film cooling effectiveness (AAFCE) on a gas turbine blade leading edge was predicted by employing an artificial neural network (ANN) with the following input variables: hole diameter, injection angle, blowing ratio, hole pitch and column pitch. The database used to train the network was built using computational fluid dynamics (CFD) based on a two-level full factorial design of experiments. The CFD numerical model was validated with an experimental rig, where a first stage blade of a gas turbine was represented by a cylindrical specimen. The ANN architecture was composed of three layers with four neurons in the hidden layer, and Levenberg-Marquardt was selected as the ANN optimization algorithm. The AAFCE was successfully predicted by the ANN, with a regression coefficient R2 > 0.99 and a root mean square error RMSE = 0.0038. The ANN weight coefficients were used to estimate the relative importance of the input parameters. Blowing ratio was the most influential parameter, with a relative importance of 40.36 %, followed by hole diameter. Additionally, by using the ANN model, the relationship between input parameters was analyzed.
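
    The paper's trained weights are not reproduced here, but the weight-based importance estimate it describes is commonly computed with Garson's algorithm. The sketch below applies it to hypothetical weight matrices matching the 5-input, 4-hidden-neuron architecture.

```python
import numpy as np

def garson_importance(W_ih, W_ho):
    """Garson's weight-based relative importance for a single-hidden-layer
    network. W_ih: (n_inputs, n_hidden); W_ho: (n_hidden,) for one output."""
    # Contribution of input i through hidden neuron j, normalized per
    # hidden neuron by the total input weight magnitude into it.
    contrib = np.abs(W_ih) * np.abs(W_ho)
    contrib /= np.abs(W_ih).sum(axis=0, keepdims=True)
    imp = contrib.sum(axis=1)
    return 100.0 * imp / imp.sum()                   # percentages

# Hypothetical trained weights: 5 inputs -> 4 hidden neurons -> AAFCE.
rng = np.random.default_rng(3)
W_ih = rng.normal(size=(5, 4))
W_ho = rng.normal(size=4)

names = ["hole diameter", "injection angle", "blowing ratio",
         "hole pitch", "column pitch"]
for name, pct in zip(names, garson_importance(W_ih, W_ho)):
    print(f"{name:16s} {pct:5.1f} %")
```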

  1. Pharmaceutical manufacturing facility discharges can substantially increase the pharmaceutical load to U.S. wastewaters

    USGS Publications Warehouse

    Scott, Tia-Marie; Phillips, Patrick J.; Kolpin, Dana W.; Colella, Kaitlyn M.; Furlong, Edward T.; Foreman, William T.; Gray, James L.

    2018-01-01

    Discharges from pharmaceutical manufacturing facilities (PMFs) previously have been identified as important sources of pharmaceuticals to the environment. Yet few studies are available to establish the influence of PMFs on the pharmaceutical source contribution to wastewater treatment plants (WWTPs) and waterways at the national scale. Consequently, a national network of 13 WWTPs receiving PMF discharges, six WWTPs with no PMF input, and one WWTP that transitioned through a PMF closure were selected from across the United States to assess the influence of PMF inputs on pharmaceutical loading to WWTPs. Effluent samples were analyzed for 120 pharmaceuticals and pharmaceutical degradates. Of these, 33 pharmaceuticals had concentrations substantially higher in PMF-influenced effluent (maximum 555,000 ng/L) compared to effluent from control sites (maximum 175 ng/L). Concentrations in WWTPs receiving PMF input are variable, as discharges from PMFs are episodic, indicating that production activities can vary substantially over relatively short (several months) periods and have the potential to rapidly transition to other pharmaceutical products. Results show that PMFs are an important, national-scale source of pharmaceuticals to the environment.

  2. Has competition increased hospital technical efficiency?

    PubMed

    Lee, Keon-Hyung; Park, Jungwon; Lim, Seunghoo; Park, Sang-Chul

    2015-01-01

    Hospital competition and managed care have affected the hospital industry in various ways, including technical efficiency. Hospital efficiency has become an important topic, and it is important to measure hospital efficiency properly in order to evaluate the impact of policies on the hospital industry. The primary independent variable is hospital competition. By using the 2001-2004 inpatient discharge data from Florida, we calculate the degree of hospital competition in Florida for 4 years. Hospital efficiency scores are developed using Data Envelopment Analysis with input and output variables selected from the American Hospital Association's Annual Survey of Hospitals for acute care general hospitals in Florida. By using the hospital efficiency score as a dependent variable, we analyze the effects of hospital competition on hospital efficiency from 2001 to 2004 and find that hospitals located in less competitive markets in 2003 had lower technical efficiency scores than those in more competitive markets.
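
    For readers unfamiliar with Data Envelopment Analysis, the input-oriented CCR efficiency score solves one small linear program per decision-making unit. The sketch below uses SciPy with invented hospital data; the paper's actual input and output variables from the AHA survey are not reproduced.

```python
import numpy as np
from scipy.optimize import linprog

def dea_ccr_input(X, Y):
    """Input-oriented CCR efficiency for each DMU.
    X: (n, m) inputs, Y: (n, s) outputs. Returns theta in (0, 1]."""
    n, m = X.shape
    s = Y.shape[1]
    scores = []
    for o in range(n):
        # Decision variables: [theta, lambda_1..lambda_n]; minimize theta.
        c = np.r_[1.0, np.zeros(n)]
        # sum_j lambda_j x_j <= theta * x_o  ->  -theta*x_o + X^T lam <= 0
        A_in = np.c_[-X[o][:, None], X.T]
        # sum_j lambda_j y_j >= y_o        ->  -Y^T lam <= -y_o
        A_out = np.c_[np.zeros((s, 1)), -Y.T]
        res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                      b_ub=np.r_[np.zeros(m), -Y[o]],
                      bounds=[(None, None)] + [(0, None)] * n)
        scores.append(res.x[0])
    return np.array(scores)

# Hypothetical hospitals: inputs (beds, FTE staff), output (discharges).
X = np.array([[200, 900], [350, 1500], [150, 800], [400, 2100]], float)
Y = np.array([[9000], [16000], [8000], [15000]], float)
print(np.round(dea_ccr_input(X, Y), 3))
```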

  3. Sensitivity analysis of radionuclides atmospheric dispersion following the Fukushima accident

    NASA Astrophysics Data System (ADS)

    Girard, Sylvain; Korsakissok, Irène; Mallet, Vivien

    2014-05-01

    Atmospheric dispersion models are used in response to accidental releases with two purposes: (i) minimising the population exposure during the accident; and (ii) complementing field measurements for the assessment of short and long term environmental and health impacts. The predictions of these models are subject to considerable uncertainties of various origins. Notably, input data, such as meteorological fields or estimations of emitted quantities as a function of time, are highly uncertain. The case studied here is the atmospheric release of radionuclides following the Fukushima Daiichi disaster. The model used in this study is Polyphemus/Polair3D, from which derives IRSN's operational long distance atmospheric dispersion model ldX. A sensitivity analysis was conducted in order to estimate the relative importance of a set of identified uncertainty sources. The complexity of this task was increased by four characteristics shared by most environmental models: (i) high dimensional inputs; (ii) correlated inputs or inputs with complex structures; (iii) high dimensional output; and (iv) a multiplicity of purposes that require sophisticated and non-systematic post-processing of the output. The sensitivities of a set of outputs were estimated with the Morris screening method. The input ranking was highly dependent on the considered output. Yet, a few variables, such as the horizontal diffusion coefficient or cloud thickness, were found to have a weak influence on most of them and could be discarded from further studies. The sensitivity analysis procedure was also applied to indicators of the model performance computed on a set of gamma dose rate observations. This original approach is of particular interest since observations could be used later to calibrate the input variables' probability distributions. Indeed, only the variables that are influential on performance scores are likely to allow for calibration. An indicator based on matching the timing of emission peaks was elaborated in order to complement classical statistical scores, which were dominated by deposit dose rates and almost insensitive to lower atmosphere dose rates. The substantial sensitivity of these performance indicators is auspicious for future calibration attempts and indicates that the simple perturbations used here may be sufficient to represent an essential part of the overall uncertainty.
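
    The Morris screening idea is simple to sketch. Below is a simplified one-at-a-time variant on a toy function (the study itself applied full Morris trajectories to a high-dimensional dispersion model); inputs with small mu* and sigma are candidates for discarding from further studies.

```python
import numpy as np

rng = np.random.default_rng(7)

def model(x):
    # Toy stand-in for an expensive simulation; x in [0, 1]^3.
    return x[0] ** 2 + 5.0 * x[1] + 0.1 * x[2] + x[0] * x[1]

def morris(model, d, r=50, delta=0.25):
    """One-at-a-time elementary effects. Returns mu* (mean |EE|,
    overall influence) and sigma (std of EE, non-linearity or
    interaction strength)."""
    ee = np.empty((r, d))
    for t in range(r):                      # r random base points
        x = rng.uniform(0, 1 - delta, size=d)
        f0 = model(x)
        for i in range(d):                  # perturb one input at a time
            xp = x.copy()
            xp[i] += delta
            ee[t, i] = (model(xp) - f0) / delta
    return np.abs(ee).mean(axis=0), ee.std(axis=0)

mu_star, sigma = morris(model, d=3)
for i, (m, s) in enumerate(zip(mu_star, sigma)):
    print(f"x{i}: mu* = {m:6.3f}  sigma = {s:6.3f}")
# x2 shows small mu* and sigma: weakly influential on this output.
```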

  4. A hybrid machine learning model to predict and visualize nitrate concentration throughout the Central Valley aquifer, California, USA

    USGS Publications Warehouse

    Ransom, Katherine M.; Nolan, Bernard T.; Traum, Jonathan A.; Faunt, Claudia; Bell, Andrew M.; Gronberg, Jo Ann M.; Wheeler, David C.; Zamora, Celia; Jurgens, Bryant; Schwarz, Gregory E.; Belitz, Kenneth; Eberts, Sandra; Kourakos, George; Harter, Thomas

    2017-01-01

    Intense demand for water in the Central Valley of California and related increases in groundwater nitrate concentration threaten the sustainability of the groundwater resource. To assess contamination risk in the region, we developed a hybrid, non-linear, machine learning model within a statistical learning framework to predict nitrate contamination of groundwater to depths of approximately 500 m below ground surface. A database of 145 predictor variables representing well characteristics, historical and current field and landscape-scale nitrogen mass balances, historical and current land use, oxidation/reduction conditions, groundwater flow, climate, soil characteristics, depth to groundwater, and groundwater age was assigned to over 6000 private supply and public supply wells measured previously for nitrate and located throughout the study area. The boosted regression tree (BRT) method was used to screen and rank variables to predict nitrate concentration at the depths of domestic and public well supplies. The novel approach included, as predictor variables, outputs from existing physically based models of the Central Valley. The top five most important predictor variables included two oxidation/reduction variables (probability of manganese concentration to exceed 50 ppb and probability of dissolved oxygen concentration to be below 0.5 ppm), field-scale adjusted unsaturated zone nitrogen input for the 1975 time period, average difference between precipitation and evapotranspiration during the years 1971–2000, and 1992 total landscape nitrogen input. Twenty-five variables were selected for the final model for log-transformed nitrate. In general, increasing probability of anoxic conditions and increasing precipitation relative to potential evapotranspiration were associated with a corresponding decrease in predicted nitrate concentration. Conversely, increasing 1975 unsaturated zone nitrogen leaching flux and 1992 total landscape nitrogen input had an increasing relative impact on nitrate predictions. Three-dimensional visualization indicates that nitrate predictions depend on the probability of anoxic conditions and other factors, and that nitrate predictions generally decreased with increasing groundwater age.
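
    A compact stand-in for the screening-and-ranking step, using scikit-learn's gradient boosting as the BRT. The predictor names echo the abstract, but the data are synthetic.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
n = 3000

# Synthetic stand-ins for a few of the 145 predictors.
X = rng.random((n, 5))
names = ["P(Mn > 50 ppb)", "P(DO < 0.5 ppm)", "1975 UZ N input",
         "precip - ET", "1992 landscape N input"]

# Log-nitrate falls with anoxia and wet conditions, rises with N inputs.
log_no3 = (-1.5 * X[:, 0] - 1.0 * X[:, 1] + 1.2 * X[:, 2]
           - 0.8 * X[:, 3] + 0.9 * X[:, 4]
           + 0.3 * rng.standard_normal(n))

brt = GradientBoostingRegressor(n_estimators=500, learning_rate=0.05,
                                max_depth=3, subsample=0.7, random_state=0)
brt.fit(X, log_no3)

# Screen and rank predictors by relative importance.
for i in np.argsort(brt.feature_importances_)[::-1]:
    print(f"{names[i]:24s} {brt.feature_importances_[i]:.3f}")
```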

  5. Estimating the Uncertain Mathematical Structure of Hydrological Model via Bayesian Data Assimilation

    NASA Astrophysics Data System (ADS)

    Bulygina, N.; Gupta, H.; O'Donell, G.; Wheater, H.

    2008-12-01

    The structure of a hydrological model at the macro scale (e.g., a watershed) is inherently uncertain due to many factors, including the lack of a robust hydrological theory at that scale. In this work, we assume that a suitable conceptual model for the hydrologic system has already been determined - i.e., the system boundaries have been specified, the important state variables and input and output fluxes to be included have been selected, and the major hydrological processes and the geometries of their interconnections have been identified. The structural identification problem then is to specify the mathematical form of the relationships between the inputs, state variables and outputs, so that a computational model can be constructed for making simulations and/or predictions of system input-state-output behaviour. We show how Bayesian data assimilation can be used to merge prior beliefs in the form of pre-assumed model equations with information derived from the data to construct a posterior model. The approach, entitled Bayesian Estimation of Structure (BESt), is used to estimate a hydrological model for a small basin in England, at hourly time scales, conditioned on an assumed conceptual model structure with a 3-dimensional state (soil moisture storage, and fast and slow flow stores). Inputs to the system are precipitation and potential evapotranspiration, and outputs are actual evapotranspiration and streamflow discharge. Results show the difference between prior and posterior mathematical structures, as well as provide prediction confidence intervals that reflect three types of uncertainty: due to initial conditions, due to input and due to mathematical structure.

  6. Artificial neural network model for ozone concentration estimation and Monte Carlo analysis

    NASA Astrophysics Data System (ADS)

    Gao, Meng; Yin, Liting; Ning, Jicai

    2018-07-01

    Air pollution in the urban atmosphere directly affects public health; therefore, it is essential to predict air pollutant concentrations. Air quality is a complex function of emissions, meteorology and topography, and artificial neural networks (ANNs) provide a sound framework for relating these variables. In this study, we investigated the feasibility of using an ANN model with meteorological parameters as input variables to predict the ozone concentration in the urban area of Jinan, a metropolis in Northern China. We first found that the architecture of the network of neurons had little effect on the predictive capability of the ANN model. A parsimonious ANN model with 6 routinely monitored meteorological parameters and one temporal covariate (the category of day, i.e. working day, legal holiday or regular weekend) as input variables was identified, where the 7 input variables were selected following a forward selection procedure. Compared with the benchmarking ANN model with 9 meteorological and photochemical parameters as input variables, the predictive capability of the parsimonious ANN model was acceptable. Its predictive capability was also verified in terms of the warning success ratio during pollution episodes. Finally, uncertainty and sensitivity analyses were also performed based on Monte Carlo simulations (MCS). It was concluded that the ANN could properly predict the ambient ozone level. Maximum temperature, atmospheric pressure, sunshine duration and maximum wind speed were identified as the predominant input variables significantly influencing the prediction of ambient ozone concentrations.
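
    The forward selection procedure mentioned above can be sketched generically: starting from an empty set, repeatedly add the candidate input that most improves cross-validated skill, and stop when the gain becomes negligible. The variable names and data below are invented.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 400
names = ["Tmax", "pressure", "sunshine", "wind_max",
         "humidity", "Tmin", "day_category"]
X = rng.random((n, len(names)))
ozone = (2.0 * X[:, 0] - 1.0 * X[:, 1] + 1.5 * X[:, 2]
         + 0.8 * X[:, 3] + 0.2 * rng.standard_normal(n))

def cv_score(cols):
    mlp = make_pipeline(StandardScaler(),
                        MLPRegressor(hidden_layer_sizes=(8,),
                                     max_iter=1000, random_state=0))
    return cross_val_score(mlp, X[:, cols], ozone, cv=5,
                           scoring="r2").mean()

selected, remaining, best = [], list(range(len(names))), -np.inf
while remaining:
    # Try each candidate; keep the one that improves CV skill the most.
    trials = {c: cv_score(selected + [c]) for c in remaining}
    c_best = max(trials, key=trials.get)
    if trials[c_best] <= best + 1e-3:      # stop when no real gain
        break
    best = trials[c_best]
    selected.append(c_best)
    remaining.remove(c_best)
    print("added", names[c_best], f"CV R2 = {best:.3f}")
```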

  7. Correction of I/Q channel errors without calibration

    DOEpatents

    Doerry, Armin W.; Tise, Bertice L.

    2002-01-01

    A method of providing a balanced demodulator output for a signal, such as a Doppler radar signal having an analog pulsed input, includes adding a variable phase shift as a function of time to the input signal, applying the phase-shifted input signal to a demodulator, and generating a baseband signal from the input signal. The baseband signal is low-pass filtered and converted to a digital output signal. By removing the variable phase shift from the digital output signal, a complex data output is formed that is representative of the output of a balanced demodulator.
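
    One way to read this at complex baseband: the added phase ramp leaves the desired term stationary after digital de-rotation, while the I/Q imbalance image spins at twice the ramp rate and is removed by the low-pass filter. The model and numbers below are an illustrative interpretation, not the patent's circuit.

```python
import numpy as np

fs, n = 1.0e6, 100_000                  # sample rate (Hz), samples
t = np.arange(n) / fs
s = np.exp(2j * np.pi * 5e3 * t)        # narrowband baseband test signal

# Imbalanced demodulator model: output = a*u + b*conj(u) for input u,
# where b != 0 captures I/Q gain (eps) and phase (phi) mismatch.
eps, phi = 0.1, np.deg2rad(5.0)
a = 0.5 * (1 + (1 + eps) * np.exp(-1j * phi))
b = 0.5 * (1 - (1 + eps) * np.exp(+1j * phi))

# Without the trick: the image b*conj(s) lands in the signal band.
z0 = a * s + b * np.conj(s)

# With the trick: a known phase ramp theta(t) is added ahead of the
# demodulator, then removed digitally; the image now spins at -2*omega.
omega = 2 * np.pi * 100e3               # ramp rate >> signal bandwidth
theta = omega * t
u = s * np.exp(1j * theta)
z = a * u + b * np.conj(u)
z_derot = z * np.exp(-1j * theta)       # = a*s + b*conj(s)*e^{-2j*theta}
taps = 16                               # crude moving-average low-pass
z_bal = np.convolve(z_derot, np.ones(taps) / taps, mode="same")

ref = a * s                             # ideal balanced output
err0 = np.linalg.norm(z0 - ref) / np.linalg.norm(ref)
err1 = np.linalg.norm(z_bal - ref) / np.linalg.norm(ref)
print(f"rel. error plain demod: {err0:.4f}, with phase ramp: {err1:.4f}")
# A sharper FIR low-pass would suppress the residual further.
```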

  8. Modelling the meteorological forest fire niche in heterogeneous pyrologic conditions.

    PubMed

    De Angelis, Antonella; Ricotta, Carlo; Conedera, Marco; Pezzatti, Gianni Boris

    2015-01-01

    Fire regimes are strongly related to weather conditions that directly and indirectly influence fire ignition and propagation. Identifying the most important meteorological fire drivers is thus fundamental for daily fire risk forecasting. In this context, several fire weather indices have been developed focussing mainly on fire-related local weather conditions and fuel characteristics. The specificity of the conditions for which fire danger indices are developed makes their direct transfer and applicability problematic in different areas or with other fuel types. In this paper we used the low-to-intermediate fire-prone region of Canton Ticino as a case study to develop a new daily fire danger index by implementing a niche modelling approach (Maxent). In order to identify the most suitable weather conditions for fires, different combinations of input variables were tested (meteorological variables, existing fire danger indices or a combination of both). Our findings demonstrate that such combinations of input variables increase the predictive power of the resulting index and surprisingly even using meteorological variables only allows similar or better performances than using the complex Canadian Fire Weather Index (FWI). Furthermore, the niche modelling approach based on Maxent resulted in slightly improved model performance and in a reduced number of selected variables with respect to the classical logistic approach. Factors influencing final model robustness were the number of fire events considered and the specificity of the meteorological conditions leading to fire ignition.

  9. Modelling the Meteorological Forest Fire Niche in Heterogeneous Pyrologic Conditions

    PubMed Central

    De Angelis, Antonella; Ricotta, Carlo; Conedera, Marco; Pezzatti, Gianni Boris

    2015-01-01

    Fire regimes are strongly related to weather conditions that directly and indirectly influence fire ignition and propagation. Identifying the most important meteorological fire drivers is thus fundamental for daily fire risk forecasting. In this context, several fire weather indices have been developed focussing mainly on fire-related local weather conditions and fuel characteristics. The specificity of the conditions for which fire danger indices are developed makes their direct transfer and applicability problematic in different areas or with other fuel types. In this paper we used the low-to-intermediate fire-prone region of Canton Ticino as a case study to develop a new daily fire danger index by implementing a niche modelling approach (Maxent). In order to identify the most suitable weather conditions for fires, different combinations of input variables were tested (meteorological variables, existing fire danger indices or a combination of both). Our findings demonstrate that such combinations of input variables increase the predictive power of the resulting index and surprisingly even using meteorological variables only allows similar or better performances than using the complex Canadian Fire Weather Index (FWI). Furthermore, the niche modelling approach based on Maxent resulted in slightly improved model performance and in a reduced number of selected variables with respect to the classical logistic approach. Factors influencing final model robustness were the number of fire events considered and the specificity of the meteorological conditions leading to fire ignition. PMID:25679957

  10. Wood phenology, not carbon input, controls the interannual variability of wood growth in a temperate oak forest.

    PubMed

    Delpierre, Nicolas; Berveiller, Daniel; Granda, Elena; Dufrêne, Eric

    2016-04-01

    Although the analysis of flux data has increased our understanding of the interannual variability of carbon inputs into forest ecosystems, we still know little about the determinants of wood growth. Here, we aimed to identify which drivers control the interannual variability of wood growth in a mesic temperate deciduous forest. We analysed a 9-yr time series of carbon fluxes and aboveground wood growth (AWG), reconstructed at a weekly time-scale through the combination of dendrometer and wood density data. Carbon inputs and AWG anomalies appeared to be uncorrelated from seasonal to interannual scales. More than 90% of the interannual variability of AWG was explained by a combination of the growth intensity during a first 'critical period' of the wood growing season, occurring close to the seasonal maximum, and the timing of the first summer growth halt. Both atmospheric and soil water stress exerted a strong control on the interannual variability of AWG at the study site, despite its mesic conditions, whilst not affecting carbon inputs. Carbon sink activity, not carbon inputs, determined the interannual variations in wood growth at the study site. Our results provide a functional understanding of the dependence of radial growth on precipitation observed in dendrological studies. © 2015 The Authors. New Phytologist © 2015 New Phytologist Trust.

  11. Input Variability Facilitates Unguided Subcategory Learning in Adults

    PubMed Central

    Eidsvåg, Sunniva Sørhus; Austad, Margit; Asbjørnsen, Arve E.

    2015-01-01

    Purpose: This experiment investigated whether input variability would affect initial learning of noun gender subcategories in an unfamiliar, natural language (Russian), as it is known to assist learning of other grammatical forms. Method: Forty adults (20 men, 20 women) were familiarized with examples of masculine and feminine Russian words. Half of the participants were familiarized with 32 different root words in a high-variability condition. The other half were familiarized with 16 different root words, each repeated twice for a total of 32 presentations in a high-repetition condition. Participants were tested on untrained members of the category to assess generalization. Familiarization and testing were completed 2 additional times. Results: Only participants in the high-variability group showed evidence of learning after an initial period of familiarization. Participants in the high-repetition group were able to learn after additional input. Both groups benefited when words included 2 cues to gender compared to a single cue. Conclusions: The results demonstrate that the degree of input variability can influence learners' ability to generalize a grammatical subcategory (noun gender) from a natural language. In addition, the presence of multiple cues to linguistic subcategory facilitated learning independent of variability condition. PMID:25680081

  12. Input Variability Facilitates Unguided Subcategory Learning in Adults.

    PubMed

    Eidsvåg, Sunniva Sørhus; Austad, Margit; Plante, Elena; Asbjørnsen, Arve E

    2015-06-01

    This experiment investigated whether input variability would affect initial learning of noun gender subcategories in an unfamiliar, natural language (Russian), as it is known to assist learning of other grammatical forms. Forty adults (20 men, 20 women) were familiarized with examples of masculine and feminine Russian words. Half of the participants were familiarized with 32 different root words in a high-variability condition. The other half were familiarized with 16 different root words, each repeated twice for a total of 32 presentations in a high-repetition condition. Participants were tested on untrained members of the category to assess generalization. Familiarization and testing were completed 2 additional times. Only participants in the high-variability group showed evidence of learning after an initial period of familiarization. Participants in the high-repetition group were able to learn after additional input. Both groups benefited when words included 2 cues to gender compared to a single cue. The results demonstrate that the degree of input variability can influence learners' ability to generalize a grammatical subcategory (noun gender) from a natural language. In addition, the presence of multiple cues to linguistic subcategory facilitated learning independent of variability condition.

  13. Latin Hypercube Sampling (LHS) UNIX Library/Standalone

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2004-05-13

    The LHS UNIX Library/Standalone software provides the capability to draw random samples from over 30 distribution types. It performs the sampling by a stratified sampling method called Latin Hypercube Sampling (LHS). Multiple distributions can be sampled simultaneously, with user-specified correlations amongst the input distributions; LHS UNIX Library/Standalone thus provides a way to generate multivariate samples. The LHS samples can be generated either from a callable library (e.g., from within the DAKOTA software framework) or as a standalone capability. LHS is a constrained Monte Carlo sampling scheme. In LHS, the range of each variable is divided into non-overlapping intervals on the basis of equal probability. A sample is selected at random with respect to the probability density in each interval. If multiple variables are sampled simultaneously, then the values obtained for each are paired in a random manner with the n values of the other variables. In some cases, the pairing is restricted to obtain specified correlations amongst the input variables. Many simulation codes have input parameters that are uncertain and can be specified by a distribution. To perform uncertainty analysis and sensitivity analysis, random values are drawn from the input parameter distributions, and the simulation is run with these values to obtain output values. If this is done repeatedly, with many input samples drawn, one can build up a distribution of the output as well as examine correlations between input and output variables.
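
    The library described is a Fortran-era DOE code, but the same stratified scheme is available in modern SciPy, where uniform strata are mapped through inverse CDFs. The restricted pairing for correlated inputs would need an extra step (e.g., rank-based pairing) not shown in this sketch.

```python
import numpy as np
from scipy.stats import qmc, norm, lognorm

n, d = 100, 2
sampler = qmc.LatinHypercube(d=d, seed=0)
u = sampler.random(n)        # each coordinate stratified into n
                             # equal-probability bins of (0, 1)

# Map uniform strata through inverse CDFs to target distributions,
# e.g. one normal and one lognormal input parameter.
x0 = norm.ppf(u[:, 0], loc=10.0, scale=2.0)
x1 = lognorm.ppf(u[:, 1], s=0.5, scale=np.exp(1.0))

# Run the simulation at each sample, then examine input/output
# correlations, as described above.
y = x0 ** 1.5 + 3.0 * x1     # placeholder for a simulation code
print("corr(x0, y):", np.corrcoef(x0, y)[0, 1].round(3))
print("corr(x1, y):", np.corrcoef(x1, y)[0, 1].round(3))
```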

  14. An evaluation of unsupervised and supervised learning algorithms for clustering landscape types in the United States

    USGS Publications Warehouse

    Wendel, Jochen; Buttenfield, Barbara P.; Stanislawski, Larry V.

    2016-01-01

    Knowledge of landscape type can inform cartographic generalization of hydrographic features, because landscape characteristics provide an important geographic context that affects variation in channel geometry, flow pattern, and network configuration. Landscape types are characterized by expansive spatial gradients lacking abrupt changes between adjacent classes, and by a limited number of outliers that might confound classification. The US Geological Survey (USGS) is exploring methods to automate generalization of features in the National Hydrography Dataset (NHD), to associate specific sequences of processing operations and parameters with specific landscape characteristics, thus obviating manual selection of a unique processing strategy for every NHD watershed unit. A chronology of methods to delineate physiographic regions for the United States is described, including a recent maximum likelihood classification based on seven input variables. This research compares unsupervised and supervised algorithms applied to these seven input variables, to evaluate and possibly refine the recent classification. Evaluation metrics for unsupervised methods include the Davies–Bouldin index, the Silhouette index, and the Dunn index, as well as quantization and topographic error metrics. Cross validation and misclassification rate analysis are used to evaluate supervised classification methods. The paper reports the comparative analysis and its impact on the selection of landscape regions. The compared solutions show problems in areas of high landscape diversity. There is some indication that additional input variables, additional classes, or more sophisticated methods can refine the existing classification.
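
    The unsupervised evaluation metrics named above are available directly in scikit-learn. A minimal sketch on synthetic stand-ins for the seven input variables follows; the supervised side would analogously use cross-validated misclassification rates.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import davies_bouldin_score, silhouette_score

# Synthetic stand-in for the seven landscape input variables.
X, _ = make_blobs(n_samples=2000, n_features=7, centers=6, random_state=0)

# Score unsupervised solutions over a range of class counts.
for k in range(4, 9):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    db = davies_bouldin_score(X, labels)     # lower is better
    sil = silhouette_score(X, labels)        # higher is better
    print(f"k={k}: Davies-Bouldin={db:.2f}  Silhouette={sil:.2f}")
```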

  15. Elemental mercury concentrations and fluxes in the tropical atmosphere and ocean.

    PubMed

    Soerensen, Anne L; Mason, Robert P; Balcom, Prentiss H; Jacob, Daniel J; Zhang, Yanxu; Kuss, Joachim; Sunderland, Elsie M

    2014-10-07

    Air-sea exchange of elemental mercury (Hg(0)) is a critical component of the global biogeochemical Hg cycle. To better understand variability in atmospheric and oceanic Hg(0), we collected high-resolution measurements across large gradients in seawater temperature, salinity, and productivity in the Pacific Ocean (20°N-15°S). We modeled surface ocean Hg inputs and losses using an ocean general circulation model (MITgcm) and an atmospheric chemical transport model (GEOS-Chem). Observed surface seawater Hg(0) was much more variable than atmospheric concentrations. Peak seawater Hg(0) concentrations (∼ 130 fM) observed in the Pacific intertropical convergence zone (ITCZ) were ∼ 3-fold greater than surrounding areas (∼ 50 fM). This is similar to observations from the Atlantic Ocean. Peak evasion in the northern Pacific ITCZ was four times higher than surrounding regions and located at the intersection of high wind speeds and elevated seawater Hg(0). Modeling results show that high Hg inputs from enhanced precipitation in the ITCZ combined with the shallow ocean mixed layer in this region drive elevated seawater Hg(0) concentrations. Modeled seawater Hg(0) concentrations reproduce observed peaks in the ITCZ of both the Atlantic and Pacific Oceans but underestimate its magnitude, likely due to insufficient deep convective scavenging of oxidized Hg from the upper troposphere. Our results demonstrate the importance of scavenging of reactive mercury in the upper atmosphere driving variability in seawater Hg(0) and net Hg inputs to biologically productive regions of the tropical ocean.

  16. Optimization of a GO2/GH2 Impinging Injector Element

    NASA Technical Reports Server (NTRS)

    Tucker, P. Kevin; Shyy, Wei; Vaidyanathan, Rajkumar

    2001-01-01

    An injector optimization methodology, method i, is used to investigate optimal design points for a gaseous oxygen/gaseous hydrogen (GO2/GH2) impinging injector element. The unlike impinging element, a fuel-oxidizer-fuel (F-O-F) triplet, is optimized in terms of design variables such as fuel pressure drop, ΔP_f, oxidizer pressure drop, ΔP_o, combustor length, L_comb, and impingement half-angle, α, for a given mixture ratio and chamber pressure. Dependent variables such as energy release efficiency, ERE, wall heat flux, Q_w, injector heat flux, Q_inj, relative combustor weight, W_rel, and relative injector cost, C_rel, are calculated and then correlated with the design variables. An empirical design methodology is used to generate these responses for 163 combinations of input variables. Method i is then used to generate response surfaces for each dependent variable. Desirability functions based on dependent variable constraints are created and used to facilitate development of composite response surfaces representing some, or all, of the five dependent variables in terms of the input variables. Three examples illustrating the utility and flexibility of method i are discussed in detail. First, joint response surfaces are constructed by sequentially adding dependent variables. Optimum designs are identified after addition of each variable and the effect each variable has on the design is shown. This stepwise demonstration also highlights the importance of including variables such as weight and cost early in the design process. Secondly, using the composite response surface which includes all five dependent variables, unequal weights are assigned to emphasize certain variables relative to others. Here, method i is used to enable objective trade studies on design issues such as component life and thrust-to-weight ratio. Finally, specific variable weights are further increased to illustrate the high marginal cost of realizing the last increment of injector performance and thruster weight.
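
    The response-surface-plus-desirability workflow can be sketched in a few lines. The quadratic basis, single response, toy data and desirability bounds below are all invented, and a full treatment in the spirit of method i would combine several desirabilities (e.g., as a weighted geometric mean).

```python
import numpy as np
from itertools import combinations_with_replacement

rng = np.random.default_rng(0)

# Hypothetical design data over 2 of the design variables, for brevity.
n = 163
X = rng.uniform([0.2, 10.0], [1.0, 40.0], size=(n, 2))  # dP_f, alpha
ere = (95 - 3 * (X[:, 0] - 0.7) ** 2 - 0.01 * (X[:, 1] - 25) ** 2
       + 0.2 * rng.standard_normal(n))                  # toy response

def quad_features(X):
    # Full quadratic response-surface basis: 1, x_i, x_i*x_j terms.
    cols = [np.ones(len(X))]
    cols += [X[:, i] for i in range(X.shape[1])]
    cols += [X[:, i] * X[:, j] for i, j in
             combinations_with_replacement(range(X.shape[1]), 2)]
    return np.column_stack(cols)

beta, *_ = np.linalg.lstsq(quad_features(X), ere, rcond=None)

def desirability(y, lo=90.0, hi=96.0):
    # Larger-is-better desirability, clipped to [0, 1].
    return np.clip((y - lo) / (hi - lo), 0.0, 1.0)

# Grid-search the desirability of the fitted surface over the space.
g = np.stack(np.meshgrid(np.linspace(0.2, 1.0, 81),
                         np.linspace(10, 40, 81)), axis=-1).reshape(-1, 2)
D = desirability(quad_features(g) @ beta)
best = g[np.argmax(D)]
print("best (dP_f, alpha):", best.round(3), " D =", D.max().round(3))
```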

  17. Role of vegetation in interplay of climate, soil and groundwater recharge in a global dataset

    NASA Astrophysics Data System (ADS)

    Kim, J. H.; Jackson, R. B.

    2010-12-01

    Groundwater is an essential resource for people and ecosystems worldwide. Our capacity to ameliorate predicted global water shortages and to maintain sustainable water supplies depends on a better understanding of the controls of recharge and how vegetation change may affect recharge mechanisms. The goals of this study are to quantify the importance of vegetation as a dominant control on recharge globally and to compare its importance with that of other hydrologically important variables, including climate and soil. We based our global analysis on > 500 recharge estimates from the literature that contained information on vegetation, soil and climate or location. Plant functional types substantially affected groundwater recharge rates. After climatic factors (water inputs, PET, and seasonality), vegetation types explained about 15% of the residuals in the dataset. Across all climatic factors, croplands had the highest recharge rates, followed by grasslands, scrublands and woodlands (average recharge: 75, 63, 30, 22 mm/yr respectively). Recharge under woodlands showed the most nonlinear response to water inputs. Differences in recharge between the vegetation types were more exaggerated in arid climates and in clay soils, indicating greater biological control on soil water fluxes under these conditions. Our results show that vegetation greatly affects recharge rates globally and alters the relationship between recharge and physical variables, allowing us to better predict recharge rates globally.

  18. Sediment budget analysis of slope channel coupling and in-channel sediment storage in an upland catchment, southeastern Australia

    NASA Astrophysics Data System (ADS)

    Smith, Hugh G.; Dragovich, Deirdre

    2008-11-01

    Slope-channel coupling and in-channel sediment storage can be important factors that influence sediment delivery through catchments. Sediment budgets offer an appropriate means to assess the role of these factors by quantifying the various components in the catchment sediment transfer system. In this study a fine (< 63 µm) sediment budget was developed for a 1.64-km² gullied upland catchment in southeastern Australia. A process-based approach was adopted that involved detailed monitoring of hillslope and bank erosion, channel change, and suspended sediment output in conjunction with USLE-based hillslope erosion estimation and sediment source tracing using 137Cs and 210Pbex. The sediment budget developed from these datasets indicated channel banks accounted for an estimated 80% of total sediment inputs. Valley floor and in-channel sediment storage accounted for 53% of inputs, with the remaining 47% being discharged from the catchment outlet. Estimated hillslope sediment input to channels was low (5.7 t) for the study period compared to channel bank input (41.6 t). However an estimated 56% of eroded hillslope sediment reached channels, suggesting a greater level of coupling between the two subsystems than was apparent from comparison of sediment source inputs. Evidently the interpretation of variability in catchment sediment yield is largely dependent on the dynamics of sediment supply and storage in channels in response to patterns of rainfall and discharge. This was reflected in the sediment delivery ratios (SDR) for individual measurement intervals, which ranged from 1 to 153%. Bank sediment supply during low rainfall periods was reduced but ongoing from subaerial processes delivering sediment to channels, resulting in net accumulation on the channel bed with insufficient flow to transport this material to the catchment outlet. Following the higher flow period in spring of the first year of monitoring, the sediment supplied to channels during this interval was removed as well as an estimated 72% of the sediment accumulated on the channel bed since the start of the study period. Given the seasonal and drought-dependent variability in storage and delivery, the period of monitoring may have an important influence on the overall SDR. On the basis of these findings, this study highlights the potential importance of sediment dynamics in channels for determining contemporary sediment yields from small gullied upland catchments in southeastern Australia.

  19. Speed control system for an access gate

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bzorgi, Fariborz M

    2012-03-20

    An access control apparatus for an access gate. The access gate typically has a rotator that is configured to rotate around a rotator axis at a first variable speed in a forward direction. The access control apparatus may include a transmission that typically has an input element that is operatively connected to the rotator. The input element is generally configured to rotate at an input speed that is proportional to the first variable speed. The transmission typically also has an output element that has an output speed that is higher than the input speed. The input element and the output element may rotate around a common transmission axis. A retardation mechanism may be employed. The retardation mechanism is typically configured to rotate around a retardation mechanism axis. Generally the retardation mechanism is operatively connected to the output element of the transmission and is configured to retard motion of the access gate in the forward direction when the first variable speed is above a control-limit speed. In many embodiments the transmission axis and the retardation mechanism axis are substantially co-axial. Some embodiments include a freewheel/catch mechanism that has an input connection that is operatively connected to the rotator. The input connection may be configured to engage an output connection when the rotator is rotated at the first variable speed in a forward direction and configured for substantially unrestricted rotation when the rotator is rotated in a reverse direction opposite the forward direction. The input element of the transmission is typically operatively connected to the output connection of the freewheel/catch mechanism.

  20. ChIP-chip versus ChIP-seq: Lessons for experimental design and data analysis

    PubMed Central

    2011-01-01

    Background: Chromatin immunoprecipitation (ChIP) followed by microarray hybridization (ChIP-chip) or high-throughput sequencing (ChIP-seq) allows genome-wide discovery of protein-DNA interactions such as transcription factor bindings and histone modifications. Previous reports only compared a small number of profiles, and little has been done to compare histone modification profiles generated by the two technologies or to assess the impact of input DNA libraries in ChIP-seq analysis. Here, we performed a systematic analysis of a modENCODE dataset consisting of 31 pairs of ChIP-chip/ChIP-seq profiles of the coactivator CBP, RNA polymerase II (RNA PolII), and six histone modifications across four developmental stages of Drosophila melanogaster. Results: Both technologies produce highly reproducible profiles within each platform; ChIP-seq generally produces profiles with a better signal-to-noise ratio, and allows detection of more peaks and narrower peaks. The set of peaks identified by the two technologies can be significantly different, but the extent to which they differ varies depending on the factor and the analysis algorithm. Importantly, we found that there is significant variation among multiple sequencing profiles of input DNA libraries and that this variation most likely arises from both differences in experimental condition and sequencing depth. We further show that using an inappropriate input DNA profile can impact the average signal profiles around genomic features and peak calling results, highlighting the importance of having high quality input DNA data for normalization in ChIP-seq analysis. Conclusions: Our findings highlight the biases present in each of the platforms, show the variability that can arise from both technology and analysis methods, and emphasize the importance of obtaining high quality and deeply sequenced input DNA libraries for ChIP-seq analysis. PMID:21356108

  1. New type side weir discharge coefficient simulation using three novel hybrid adaptive neuro-fuzzy inference systems

    NASA Astrophysics Data System (ADS)

    Bonakdari, Hossein; Zaji, Amir Hossein

    2018-03-01

    In many hydraulic structures, side weirs have a critical role. Accurately predicting the discharge coefficient is one of the most important stages in the side weir design process. In the present paper, a new, highly efficient side weir is investigated. To simulate the discharge coefficient of these side weirs, three novel soft computing methods are used. The process includes modeling the discharge coefficient with the hybrid Adaptive Neuro-Fuzzy Inference System (ANFIS) and three optimization algorithms, namely Differential Evolution (ANFIS-DE), Genetic Algorithm (ANFIS-GA) and Particle Swarm Optimization (ANFIS-PSO). In addition, sensitivity analysis is done to find the most efficient input variables for modeling the discharge coefficient of these types of side weirs. According to the results, the ANFIS method has higher performance when using simpler input variables. In addition, the ANFIS-DE with RMSE of 0.077 has higher performance than the ANFIS-GA and ANFIS-PSO methods with RMSE of 0.079 and 0.096, respectively.

  2. Probabilistic analysis of a materially nonlinear structure

    NASA Technical Reports Server (NTRS)

    Millwater, H. R.; Wu, Y.-T.; Fossum, A. F.

    1990-01-01

    A probabilistic finite element program is used to perform probabilistic analysis of a materially nonlinear structure. The program used in this study is NESSUS (Numerical Evaluation of Stochastic Structures Under Stress), under development at Southwest Research Institute. The cumulative distribution function (CDF) of the radial stress of a thick-walled cylinder under internal pressure is computed and compared with the analytical solution. In addition, sensitivity factors showing the relative importance of the input random variables are calculated. Significant plasticity is present in this problem and has a pronounced effect on the probabilistic results. The random input variables are the material yield stress and internal pressure with Weibull and normal distributions, respectively. The results verify the ability of NESSUS to compute the CDF and sensitivity factors of a materially nonlinear structure. In addition, the ability of the Advanced Mean Value (AMV) procedure to assess the probabilistic behavior of structures which exhibit a highly nonlinear response is shown. Thus, the AMV procedure can be applied with confidence to other structures which exhibit nonlinear behavior.
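
    A brute-force Monte Carlo sketch of the same kinds of quantities (a response CDF plus sensitivity factors) is given below. It substitutes an elastic Lamé limit state for the paper's elastic-plastic NESSUS/AMV analysis, and all geometry and distribution parameters are invented.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 100_000

# Random inputs, as in the abstract: Weibull yield stress, normal pressure.
sy = stats.weibull_min.rvs(c=8.0, scale=300e6, size=n, random_state=rng)
p = rng.normal(80e6, 8e6, size=n)

# Stand-in limit state: first yield at the inner wall of a thick
# cylinder (a = 0.1 m, b = 0.2 m) under internal pressure, using the
# elastic Lame hoop stress (the paper's plasticity is not reproduced).
a_r, b_r = 0.1, 0.2
sigma_t = p * (a_r**2 + b_r**2) / (b_r**2 - a_r**2)
margin = sy - sigma_t                 # failure when margin < 0

# Empirical CDF summary of the response.
q = np.quantile(margin, [0.01, 0.5, 0.99])
print("1/50/99 % margin (MPa):", np.round(q / 1e6, 1))
print("P(yield):", (margin < 0).mean())

# Crude sensitivity factors: rank correlation of inputs with response.
for name, x in [("yield stress", sy), ("pressure", p)]:
    rho = stats.spearmanr(x, margin).correlation
    print(f"sensitivity ({name}): {rho:+.2f}")
```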

  3. Preposition accuracy on a sentence repetition task in school age Spanish-English bilinguals.

    PubMed

    Taliancich-Klinger, Casey L; Bedore, Lisa M; Peña, Elizabeth D

    2018-01-01

    Preposition knowledge is important for academic success. The goal of this project was to examine how different variables such as English input and output, Spanish preposition score, mother education level, and age of English exposure (AoEE) may have played a role in children's preposition knowledge in English. 148 Spanish-English children between 7;0 and 9;11 produced prepositions in English and Spanish on a sentence repetition task from an experimental version of the Bilingual English Spanish Assessment Middle Extension (Peña, Bedore, Gutierrez-Clellen, Iglesias & Goldstein, in development). English input and output accounted for most of the variance in English preposition score. The importance of language-specific experiences in the development of prepositions is discussed. Competition for selection of appropriate prepositions in English and Spanish is discussed as potentially influencing low overall preposition scores in English and Spanish.

  4. Preposition accuracy on a sentence repetition task in school age Spanish–English bilinguals*

    PubMed Central

    TALIANCICH-KLINGER, CASEY L.; BEDORE, LISA M.; PEÑA, ELIZABETH D.

    2018-01-01

    Preposition knowledge is important for academic success. The goal of this project was to examine how different variables such as English input and output, Spanish preposition score, mother education level, and age of English exposure (AoEE) may have played a role in children’s preposition knowledge in English. 148 Spanish–English children between 7;0 and 9;11 produced prepositions in English and Spanish on a sentence repetition task from an experimental version of the Bilingual English Spanish Assessment Middle Extension (Peña, Bedore, Gutierrez-Clellen, Iglesias & Goldstein, in development). English input and output accounted for most of the variance in English preposition score. The importance of language-specific experiences in the development of prepositions is discussed. Competition for selection of appropriate prepositions in English and Spanish is discussed as potentially influencing low overall preposition scores in English and Spanish. PMID:28506324

  5. Implementation of a 3D Coupled Hydrodynamic and Contaminant Fate Model for PCDD/Fs in Thau Lagoon (France): The Importance of Atmospheric Sources of Contamination

    PubMed Central

    Dueri, Sibylle; Marinov, Dimitar; Fiandrino, Annie; Tronczyński, Jacek; Zaldívar, José-Manuel

    2010-01-01

    A 3D hydrodynamic and contaminant fate model was implemented for polychlorinated dibenzo-p-dioxins and dibenzofurans (PCDD/Fs) in Thau lagoon. The hydrodynamic model was tested against temperature and salinity measurements, while the contaminant fate model was assessed against available data collected at different stations inside the lagoon. The model results allow an assessment of the spatial and temporal variability of the distribution of contaminants in the lagoon, the seasonality of loads, and the role of atmospheric deposition in the input of PCDD/Fs. The outcome suggests that air is an important source of PCDD/Fs for this ecosystem; therefore, the monitoring of air pollution is well suited to assessing the inputs of these contaminants. These results call for the development of integrated environmental protection policies. PMID:20617040

  6. Body Fat Percentage Prediction Using Intelligent Hybrid Approaches

    PubMed Central

    Shao, Yuehjen E.

    2014-01-01

    Excess body fat often leads to obesity. Obesity is typically associated with serious medical conditions, such as cancer, heart disease, and diabetes. Accordingly, knowing one's body fat is extremely important, since it affects everyone's health. Although there are several ways to measure the body fat percentage (BFP), the accurate methods are often associated with hassle and/or high costs. Traditional single-stage approaches may use certain body measurements or explanatory variables to predict the BFP. Diverging from existing approaches, this study proposes new intelligent hybrid approaches that require fewer explanatory variables, and the proposed forecasting models are able to effectively predict the BFP. The proposed hybrid models consist of multiple regression (MR), artificial neural network (ANN), multivariate adaptive regression splines (MARS), and support vector regression (SVR) techniques. The first stage of the modeling uses MR and MARS to obtain a smaller but more important set of explanatory variables. In the second stage, the remaining important variables serve as inputs for the other forecasting methods. A real dataset was used to demonstrate the development of the proposed hybrid models. The prediction results revealed that the proposed hybrid schemes outperformed the typical, single-stage forecasting models. PMID:24723804
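
    A minimal Python sketch of the two-stage idea follows, with LassoCV standing in for the record's MR/MARS screening stage and synthetic data standing in for the real BFP measurements; variable names and sizes are invented.

      # Hedged sketch: stage 1 screens variables, stage 2 forecasts with SVR.
      # Lasso replaces the record's MR/MARS screening; data are synthetic.
      import numpy as np
      from sklearn.linear_model import LassoCV
      from sklearn.svm import SVR
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(8)
      n, d = 250, 13                                  # e.g. 13 body measurements
      X = rng.normal(size=(n, d))
      y = 25 + 4*X[:, 0] + 3*X[:, 5] - 2*X[:, 9] + rng.normal(0, 1, n)  # synthetic BFP

      keep = np.flatnonzero(LassoCV(cv=5).fit(X, y).coef_ != 0)       # stage 1: screen
      score = cross_val_score(SVR(C=10), X[:, keep], y, cv=5).mean()  # stage 2: forecast
      print(f"kept variables: {keep}, CV R2 with SVR: {score:.2f}")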

  7. Fuzzy set methods for object recognition in space applications

    NASA Technical Reports Server (NTRS)

    Keller, James M.

    1991-01-01

    During the reporting period, the development of the theory and application of methodologies for decision making under uncertainty was addressed. Two subreports are included: the first on properties of general hybrid operators, the second on new research on generalized threshold logic units. In the first part, the properties of the additive gamma-model, where the intersection part is first considered to be the product of the input values and the union part is obtained by an extension of De Morgan's law to fuzzy sets, are explored. Then Yager's class of union and intersection is used in the additive gamma-model. The inputs are weighted to some power that represents their importance and thus their contribution to the compensation process. In the second part, the extension of binary logic synthesis methods to multiple-valued logic synthesis, enabling the synthesis of decision networks when the input/output variables are not binary, is discussed.

  8. Climate science and famine early warning

    USGS Publications Warehouse

    Verdin, James P.; Funk, Chris; Senay, Gabriel B.; Choularton, R.

    2005-01-01

    Food security assessment in sub-Saharan Africa requires simultaneous consideration of multiple socio-economic and environmental variables. Early identification of populations at risk enables timely and appropriate action. Since large and widely dispersed populations depend on rainfed agriculture and pastoralism, climate monitoring and forecasting are important inputs to food security analysis. Satellite rainfall estimates (RFE) fill in gaps in station observations, and serve as input to drought index maps and crop water balance models. Gridded rainfall time-series give historical context, and provide a basis for quantitative interpretation of seasonal precipitation forecasts. RFE are also used to characterize flood hazards, in both simple indices and stream flow models. In the future, many African countries are likely to see negative impacts on subsistence agriculture due to the effects of global warming. Increased climate variability is forecast, with more frequent extreme events. Ethiopia requires special attention. Already facing a food security emergency, troubling persistent dryness has been observed in some areas, associated with a positive trend in Indian Ocean sea surface temperatures. Increased African capacity for rainfall observation, forecasting, data management and modelling applications is urgently needed. Managing climate change and increased climate variability require these fundamental technical capacities if creative coping strategies are to be devised.

  9. Climate science and famine early warning.

    PubMed

    Verdin, James; Funk, Chris; Senay, Gabriel; Choularton, Richard

    2005-11-29

    Food security assessment in sub-Saharan Africa requires simultaneous consideration of multiple socio-economic and environmental variables. Early identification of populations at risk enables timely and appropriate action. Since large and widely dispersed populations depend on rainfed agriculture and pastoralism, climate monitoring and forecasting are important inputs to food security analysis. Satellite rainfall estimates (RFE) fill in gaps in station observations, and serve as input to drought index maps and crop water balance models. Gridded rainfall time-series give historical context, and provide a basis for quantitative interpretation of seasonal precipitation forecasts. RFE are also used to characterize flood hazards, in both simple indices and stream flow models. In the future, many African countries are likely to see negative impacts on subsistence agriculture due to the effects of global warming. Increased climate variability is forecast, with more frequent extreme events. Ethiopia requires special attention. Already facing a food security emergency, troubling persistent dryness has been observed in some areas, associated with a positive trend in Indian Ocean sea surface temperatures. Increased African capacity for rainfall observation, forecasting, data management and modelling applications is urgently needed. Managing climate change and increased climate variability require these fundamental technical capacities if creative coping strategies are to be devised.

  10. Climate science and famine early warning

    PubMed Central

    Verdin, James; Funk, Chris; Senay, Gabriel; Choularton, Richard

    2005-01-01

    Food security assessment in sub-Saharan Africa requires simultaneous consideration of multiple socio-economic and environmental variables. Early identification of populations at risk enables timely and appropriate action. Since large and widely dispersed populations depend on rainfed agriculture and pastoralism, climate monitoring and forecasting are important inputs to food security analysis. Satellite rainfall estimates (RFE) fill in gaps in station observations, and serve as input to drought index maps and crop water balance models. Gridded rainfall time-series give historical context, and provide a basis for quantitative interpretation of seasonal precipitation forecasts. RFE are also used to characterize flood hazards, in both simple indices and stream flow models. In the future, many African countries are likely to see negative impacts on subsistence agriculture due to the effects of global warming. Increased climate variability is forecast, with more frequent extreme events. Ethiopia requires special attention. Already facing a food security emergency, troubling persistent dryness has been observed in some areas, associated with a positive trend in Indian Ocean sea surface temperatures. Increased African capacity for rainfall observation, forecasting, data management and modelling applications is urgently needed. Managing climate change and increased climate variability require these fundamental technical capacities if creative coping strategies are to be devised. PMID:16433101

  11. Job satisfaction among mental healthcare professionals: The respective contributions of professional characteristics, team attributes, team processes, and team emergent states.

    PubMed

    Fleury, Marie-Josée; Grenier, Guy; Bamvita, Jean-Marie

    2017-01-01

    The aim of this study was to determine the respective contributions of professional characteristics, team attributes, team processes, and team emergent states to the job satisfaction of 315 mental health professionals from Quebec (Canada). Job satisfaction was measured with the Job Satisfaction Survey. Independent variables were organized into four categories according to a conceptual framework inspired by the Input-Mediator-Outcome-Input (IMOI) model. The contribution of each category of variables was assessed using hierarchical regression analysis. Variations in job satisfaction were mostly explained by team processes, with minimal contributions from the other three categories. Among the six variables significantly associated with job satisfaction in the final model, four were team processes: stronger team support, less team conflict, deeper involvement in the decision-making process, and more team collaboration. Job satisfaction was also associated with nursing and, marginally, male gender (professional characteristics), as well as with a stronger affective commitment toward the team (team emergent states). The results confirm the importance for health managers of offering adequate support to mental health professionals and of creating an environment favorable to collaboration and decision-sharing that is likely to reduce conflicts between team members.

  12. Modeling and Analysis of CNC Milling Process Parameters on Al3030 based Composite

    NASA Astrophysics Data System (ADS)

    Gupta, Anand; Soni, P. K.; Krishna, C. M.

    2018-04-01

    The machining of Al3030-based composites on Computer Numerical Control (CNC) high-speed milling machines has assumed importance because of their wide application in the aerospace, marine, and automotive industries. Industries mainly focus on surface irregularities, material removal rate (MRR), and tool wear rate (TWR), which usually depend on the input process parameters, namely cutting speed, feed in mm/min, depth of cut, and step-over ratio. Many researchers have carried out research in this area, but very few have also taken the step-over ratio (radial depth of cut) as one of the input variables. In this research work, the machining characteristics of Al3030 are studied on a high-speed CNC milling machine over the speed range of 3000 to 5000 r.p.m. Step-over ratio, depth of cut, and feed rate are the other input variables taken into consideration. A total of nine experiments are conducted according to a Taguchi L9 orthogonal array. The machining is carried out on a high-speed CNC milling machine using a flat end mill of 10 mm diameter. Flatness, MRR, and TWR are taken as output parameters. Flatness has been measured using a portable Coordinate Measuring Machine (CMM). Linear regression models have been developed using Minitab 18 software, and the results are validated by conducting a selected additional set of experiments. Selection of input process parameters in order to obtain the best machining outputs is the key contribution of this research work.

  13. Uncertainty analysis of the Operational Simplified Surface Energy Balance (SSEBop) model at multiple flux tower sites

    USGS Publications Warehouse

    Chen, Mingshi; Senay, Gabriel B.; Singh, Ramesh K.; Verdin, James P.

    2016-01-01

    Evapotranspiration (ET) is an important component of the water cycle – ET from the land surface returns approximately 60% of the global precipitation back to the atmosphere. ET also plays an important role in energy transport among the biosphere, atmosphere, and hydrosphere. Current regional to global and daily to annual ET estimation relies mainly on surface energy balance (SEB) ET models or statistical and empirical methods driven by remote sensing data and various climatological databases. These models have uncertainties due to inevitable input errors, poorly defined parameters, and inadequate model structures. The eddy covariance measurements of water, energy, and carbon fluxes at the AmeriFlux tower sites provide an opportunity to assess the ET modeling uncertainties. In this study, we focused on uncertainty analysis of the Operational Simplified Surface Energy Balance (SSEBop) model for ET estimation at multiple AmeriFlux tower sites with diverse land cover characteristics and climatic conditions. The 8-day composite 1-km MODerate resolution Imaging Spectroradiometer (MODIS) land surface temperature (LST) was used as the input land surface temperature for the SSEBop algorithm. The other input data were taken from the AmeriFlux database. Results of statistical analysis indicated that the SSEBop model performed well in estimating ET, with an R2 of 0.86 between estimated ET and eddy covariance measurements at 42 AmeriFlux tower sites during 2001–2007. It was encouraging to see that the best performance was observed for croplands, where R2 was 0.92 with a root mean square error of 13 mm/month. The uncertainties or random errors from input variables and parameters of the SSEBop model led to monthly ET estimates with relative errors of less than 20% across multiple flux tower sites distributed across different biomes. This uncertainty of the SSEBop model lies within the error range of other SEB models, suggesting the systematic error or bias of the SSEBop model is within the normal range. This finding implies that the simplified parameterization of the SSEBop model did not significantly affect the accuracy of the ET estimates while increasing the ease of model setup for operational applications. The sensitivity analysis indicated that the SSEBop model is most sensitive to the input variables land surface temperature (LST) and reference ET (ETo), and to the parameters differential temperature (dT) and maximum ET scalar (Kmax), particularly during the non-growing season and in dry areas. In summary, the uncertainty assessment verifies that the SSEBop model is a reliable and robust method for large-area ET estimation. The SSEBop model estimates can be further improved by reducing errors in two input variables (ETo and LST) and two key parameters (Kmax and dT).

  14. NLEdit: A generic graphical user interface for Fortran programs

    NASA Technical Reports Server (NTRS)

    Curlett, Brian P.

    1994-01-01

    NLEdit is a generic graphical user interface for the preprocessing of Fortran namelist input files. The interface consists of a menu system, a message window, a help system, and data entry forms. A form is generated for each namelist. The form has an input field for each namelist variable along with a one-line description of that variable. Detailed help information, default values, and minimum and maximum allowable values can all be displayed via menu picks. Inputs are processed through a scientific calculator program that allows complex equations to be used instead of simple numeric inputs. A custom user interface is generated simply by entering information about the namelist input variables into an ASCII file. There is no need to learn a new graphics system or programming language. NLEdit can be used as a stand-alone program or as part of a larger graphical user interface. Although NLEdit is intended for files using namelist format, it can be easily modified to handle other file formats.
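
    A small Python sketch of the underlying idea of driving an interface from a plain-text description of namelist variables follows. The three-column-plus-comment format below is hypothetical, not NLEdit's actual ASCII format.

      # Hedged sketch: parse a plain-text variable description into form fields
      # (name, default, min, max, one-line help). File format is invented.
      from dataclasses import dataclass

      @dataclass
      class Field:
          name: str
          default: float
          minval: float
          maxval: float
          help: str

      def parse_spec(text: str) -> list[Field]:
          """Each non-blank line: name default min max # one-line description."""
          fields = []
          for line in text.splitlines():
              line = line.strip()
              if not line or line.startswith("#"):
                  continue
              head, _, help_text = line.partition("#")
              name, default, lo, hi = head.split()
              fields.append(Field(name, float(default), float(lo), float(hi),
                                  help_text.strip()))
          return fields

      spec = """
      mach    0.8   0.0   2.5   # freestream Mach number
      alpha   2.0  -10.0  10.0  # angle of attack, degrees
      """
      for f in parse_spec(spec):
          print(f"{f.name:8s} default={f.default:<6g} range=[{f.minval}, {f.maxval}]  {f.help}")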

  15. Sources of Uncertainty in the Prediction of LAI / fPAR from MODIS

    NASA Technical Reports Server (NTRS)

    Dungan, Jennifer L.; Ganapol, Barry D.; Brass, James A. (Technical Monitor)

    2002-01-01

    To explicate the sources of uncertainty in the prediction of biophysical variables over space, consider the general equation z = f(y, B), where z is a variable with values on some nominal, ordinal, interval, or ratio scale; y is a vector of input variables; u is the spatial support of y and z; x and u are the spatial locations of y and z, respectively; f is a model; and B is the vector of parameters of this model. Any y or z has a value and a spatial extent, which is called its support. Viewed in this way, categories of uncertainty are from variable (e.g., measurement), parameter, positional, support, and model (e.g., structural) sources. The prediction of Leaf Area Index (LAI) and the fraction of absorbed photosynthetically active radiation (fPAR) are examples of z variables predicted using model(s) as a function of y variables and spatially constant parameters. The MOD15 algorithm is an example of f, called f(sub 1), with parameters including those defined by one of six biome types and solar and view angles. The Leaf Canopy Model 2 (LCM2), a nested model that combines leaf radiative transfer with a full canopy reflectance model through the phase function, is a simpler though similar radiative transfer approach to f(sub 1). In a previous study, MOD15 and LCM2 gave similar results for the broadleaf forest biome. Differences between these two models can be used to consider the structural uncertainty in prediction results. In an effort to quantify each of the five sources of uncertainty and rank their relative importance for the LAI/fPAR prediction problem, we used recent data for an EOS Core Validation Site in the broadleaf biome with coincident surface reflectance, vegetation index, fPAR, and LAI products from the Moderate Resolution Imaging Spectroradiometer (MODIS). Uncertainty due to support of the input reflectance variable was characterized using Landsat ETM+ data. Input uncertainties were propagated through the LCM2 model and compared with published uncertainties from the MOD15 algorithm.

  16. Computing Shapes Of Cascade Diffuser Blades

    NASA Technical Reports Server (NTRS)

    Tran, Ken; Prueger, George H.

    1993-01-01

    Computer program generates sizes and shapes of cascade-type blades for use in axial or radial turbomachine diffusers. Generates shapes of blades rapidly, incorporating extensive cascade data to determine optimum incidence and deviation angle for blade design based on 65-series data base of the National Advisory Committee for Aeronautics (NACA). Allows great variability in blade profile through input variables. Also provides for design of three-dimensional blades by allowing variable blade stacking. Enables designer to obtain computed blade-geometry data in various forms: as input for blade-loading analysis; as input for quasi-three-dimensional analysis of flow; or as points for transfer to computer-aided design.

  17. Development and evaluation of height diameter at breast models for native Chinese Metasequoia.

    PubMed

    Liu, Mu; Feng, Zhongke; Zhang, Zhixiang; Ma, Chenghui; Wang, Mingming; Lian, Bo-Ling; Sun, Renjie; Zhang, Li

    2017-01-01

    Accurate tree height and diameter at breast height (dbh) are important input variables for growth and yield models. A total of 5503 Chinese Metasequoia trees were used in this study. We studied 53 fitted models, of which 7 were linear models and 46 were non-linear models. These models were divided into two groups, single models and multivariate models, according to the number of independent variables. The results show that the allometric equation for tree height with dbh as the independent variable better reflects the change in tree height; in addition, the prediction accuracy of the multivariate composite models is higher than that of the single-variable models. Although tree age is not the most important variable in the study of the relationship between tree height and dbh, considering tree age when choosing models and parameters can make the prediction of tree height more accurate. The amount of data is also an important factor that can improve the reliability of models. Other variables, such as tree height, main dbh, and altitude, can also affect the models. In this study, the method of developing the recommended models for predicting the tree height of native Metasequoias aged 50-485 years is statistically reliable and can be used for reference in predicting the growth and production of mature native Metasequoia.
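
    As a hedged illustration of one common model form from this family, the Python sketch below fits h = 1.3 + a·dbh^b to synthetic data; it reproduces neither the 53 candidate models nor the Metasequoia dataset.

      # Hedged sketch: fit one standard height-diameter allometric form to
      # synthetic (dbh, height) pairs. All data and constants are invented.
      import numpy as np
      from scipy.optimize import curve_fit

      rng = np.random.default_rng(1)
      dbh = rng.uniform(10, 120, 300)                                 # cm, synthetic
      height = 1.3 + 1.9 * dbh**0.75 + rng.normal(0, 1.5, dbh.size)   # m, synthetic

      def allometric(d, a, b):
          return 1.3 + a * d**b                    # 1.3 m = breast height offset

      (a_hat, b_hat), _ = curve_fit(allometric, dbh, height, p0=(1.0, 0.8))
      resid = height - allometric(dbh, a_hat, b_hat)
      r2 = 1 - resid.var() / height.var()
      print(f"a={a_hat:.3f}, b={b_hat:.3f}, R2={r2:.3f}")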

  18. Development and evaluation of height diameter at breast models for native Chinese Metasequoia

    PubMed Central

    Feng, Zhongke; Zhang, Zhixiang; Ma, Chenghui; Wang, Mingming; Lian, Bo-ling; Sun, Renjie; Zhang, Li

    2017-01-01

    Accurate tree height and diameter at breast height (dbh) are important input variables for growth and yield models. A total of 5503 Chinese Metasequoia trees were used in this study. We studied 53 fitted models, of which 7 were linear models and 46 were non-linear models. These models were divided into two groups, single models and multivariate models, according to the number of independent variables. The results show that the allometric equation for tree height with dbh as the independent variable better reflects the change in tree height; in addition, the prediction accuracy of the multivariate composite models is higher than that of the single-variable models. Although tree age is not the most important variable in the study of the relationship between tree height and dbh, considering tree age when choosing models and parameters can make the prediction of tree height more accurate. The amount of data is also an important factor that can improve the reliability of models. Other variables, such as tree height, main dbh, and altitude, can also affect the models. In this study, the method of developing the recommended models for predicting the tree height of native Metasequoias aged 50–485 years is statistically reliable and can be used for reference in predicting the growth and production of mature native Metasequoia. PMID:28817600

  19. Global sensitivity analysis in wind energy assessment

    NASA Astrophysics Data System (ADS)

    Tsvetkova, O.; Ouarda, T. B.

    2012-12-01

    Wind energy is one of the most promising renewable energy sources. Nevertheless, it is not yet a common source of energy, although there is enough wind potential to supply the world's energy demand. One of the most prominent obstacles to employing wind energy is the uncertainty associated with wind energy assessment. Global sensitivity analysis (SA) studies how the variation of input parameters in an abstract model affects the variation of the variable of interest, or output variable. It also provides ways to calculate explicit measures of the importance of input variables (first-order and total-effect sensitivity indices) with regard to their influence on the variation of the output variable. Two methods of determining the above-mentioned indices were applied and compared: the brute force method and the best practice estimation procedure. In this study, a methodology for conducting global SA of wind energy assessment at a planning stage is proposed. Three sampling strategies which are part of the SA procedure were compared: sampling based on Sobol' sequences (SBSS), Latin hypercube sampling (LHS), and pseudo-random sampling (PRS). A case study of Masdar City, a showcase of sustainable living in the UAE, is used to exemplify application of the proposed methodology. Sources of uncertainty in wind energy assessment are very diverse. In the case study, the following were identified as uncertain input parameters: the Weibull shape parameter, the Weibull scale parameter, availability of a wind turbine, lifetime of a turbine, air density, electrical losses, blade losses, and ineffective time losses. Ineffective time losses are defined as losses during the time when the actual wind speed is lower than the cut-in speed or higher than the cut-out speed. The output variable in the case study is the lifetime energy production. The most influential factors for lifetime energy production are identified with the ranking of the total-effect sensitivity indices. The results of the present research show that the brute force method is best for wind assessment purposes and that SBSS outperforms other sampling strategies in the majority of cases. The results indicate that the Weibull scale parameter, turbine lifetime, and Weibull shape parameter are the three most influential variables in the case study setting. The following conclusions can be drawn from these results: 1) SBSS should be recommended for use in Monte Carlo experiments, 2) the brute force method should be recommended for conducting sensitivity analysis in wind resource assessment, and 3) little variation in the Weibull scale parameter causes significant variation in energy production. The presence of the two distribution parameters (the Weibull shape and scale) among the top three influential variables emphasizes the importance of (a) choosing the distribution to model the wind regime at a site and (b) estimating the probability distribution parameters accurately. This can be labeled as the most important conclusion of this research because it opens a field for further research, which the authors believe could change the wind energy field tremendously.
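
    A minimal Python sketch of the brute-force (Saltelli-style) estimation of first-order and total-effect Sobol' indices follows, using a toy lifetime-energy function and invented parameter ranges rather than the paper's full assessment chain.

      # Hedged sketch: Saltelli estimators for first-order (S) and total-effect
      # (ST) Sobol' indices on a toy lifetime-energy proxy. All ranges invented.
      import numpy as np

      rng = np.random.default_rng(42)
      names = ["weibull_shape", "weibull_scale", "availability", "lifetime"]

      def sample(n):
          return np.column_stack([
              rng.uniform(1.6, 2.4, n),     # Weibull shape k
              rng.uniform(6.0, 9.0, n),     # Weibull scale c (m/s)
              rng.uniform(0.90, 0.99, n),   # turbine availability
              rng.uniform(15, 25, n),       # turbine lifetime (years)
          ])

      def model(x):
          k, c, avail, life = x.T
          return avail * life * c**3 / (1 + 0.1 * (k - 2) ** 2)  # toy energy proxy

      n = 50_000
      A, B = sample(n), sample(n)
      fA, fB = model(A), model(B)
      var_y = np.var(np.concatenate([fA, fB]))

      for i, name in enumerate(names):
          ABi = A.copy(); ABi[:, i] = B[:, i]   # A with column i taken from B
          fABi = model(ABi)
          Si  = np.mean(fB * (fABi - fA)) / var_y        # first-order index
          STi = 0.5 * np.mean((fA - fABi) ** 2) / var_y  # total-effect index
          print(f"{name:14s} S={Si:5.2f}  ST={STi:5.2f}")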

  20. Topographical effects of climate dataset and their impacts on the estimation of regional net primary productivity

    NASA Astrophysics Data System (ADS)

    Sun, L. Qing; Feng, Feng X.

    2014-11-01

    In this study, we first built and compared two different climate datasets for the Wuling mountainous area in 2010: one considered topographical effects during the ANUSPLIN interpolation and is referred to as the terrain-based climate dataset, while the other did not and is called the ordinary climate dataset. Then, we quantified the topographical effects of climatic inputs on NPP estimation by feeding the two climate datasets to the same ecosystem model, the Boreal Ecosystem Productivity Simulator (BEPS), to evaluate the importance of considering relief when estimating NPP. Finally, we identified the primary contributing variables for the topographical effects through a series of experiments, given an overall accuracy of the model output for NPP. The results showed that: (1) the terrain-based climate dataset presented more reliable topographic information and agreed more closely with the station dataset than the ordinary climate dataset over a continuous series of 365 days in terms of daily mean values; (2) on average, the ordinary climate dataset underestimated NPP by 12.5% compared with the terrain-based climate dataset over the whole study area; and (3) the primary climate variables contributing to the topographical effects of climatic inputs in the Wuling mountainous area were temperatures, which suggests that temperature differences must be corrected to estimate NPP accurately in such complex terrain.

  1. Utility of Satellite Remote Sensing for Land-Atmosphere Coupling and Drought Metrics

    NASA Technical Reports Server (NTRS)

    Roundy, Joshua K.; Santanello, Joseph A.

    2017-01-01

    Feedbacks between the land and the atmosphere can play an important role in the water cycle and a number of studies have quantified Land-Atmosphere (L-A) interactions and feedbacks through observations and prediction models. Due to the complex nature of L-A interactions, the observed variables are not always available at the needed temporal and spatial scales. This work derives the Coupling Drought Index (CDI) solely from satellite data and evaluates the input variables and the resultant CDI against in-situ data and reanalysis products. NASA's AQUA satellite and retrievals of soil moisture and lower tropospheric temperature and humidity properties are used as input. Overall, the AQUA-based CDI and its inputs perform well at a point, spatially, and in time (trends) compared to in-situ and reanalysis products. In addition, this work represents the first time that in-situ observations were utilized for the coupling classification and CDI. The combination of in-situ and satellite remote sensing CDI is unique and provides an observational tool for evaluating models at local and large scales. Overall, results indicate that there is sufficient information in the signal from simultaneous measurements of the land and atmosphere from satellite remote sensing to provide useful information for applications of drought monitoring and coupling metrics.

  2. Utility of Satellite Remote Sensing for Land-Atmosphere Coupling and Drought Metrics

    PubMed Central

    Roundy, Joshua K.; Santanello, Joseph A.

    2018-01-01

    Feedbacks between the land and the atmosphere can play an important role in the water cycle and a number of studies have quantified Land-Atmosphere (L-A) interactions and feedbacks through observations and prediction models. Due to the complex nature of L-A interactions, the observed variables are not always available at the needed temporal and spatial scales. This work derives the Coupling Drought Index (CDI) solely from satellite data and evaluates the input variables and the resultant CDI against in-situ data and reanalysis products. NASA’s AQUA satellite and retrievals of soil moisture and lower tropospheric temperature and humidity properties are used as input. Overall, the AQUA-based CDI and its inputs perform well at a point, spatially, and in time (trends) compared to in-situ and reanalysis products. In addition, this work represents the first time that in-situ observations were utilized for the coupling classification and CDI. The combination of in-situ and satellite remote sensing CDI is unique and provides an observational tool for evaluating models at local and large scales. Overall, results indicate that there is sufficient information in the signal from simultaneous measurements of the land and atmosphere from satellite remote sensing to provide useful information for applications of drought monitoring and coupling metrics. PMID:29645012

  3. Neural Network Machine Learning and Dimension Reduction for Data Visualization

    NASA Technical Reports Server (NTRS)

    Liles, Charles A.

    2014-01-01

    Neural network machine learning in computer science is a continuously developing field of study. Although neural network models have been developed which can accurately predict a numeric value or nominal classification, a general-purpose method for constructing neural network architecture has yet to be developed. Computer scientists are often forced to rely on a trial-and-error process of developing and improving accurate neural network models. In many cases, models are constructed from a large number of input parameters. Determining which input parameters have the greatest impact on the prediction of the model is often difficult, especially when the number of input variables is very high. This challenge is often labeled the "curse of dimensionality" in scientific fields. However, techniques exist for reducing the dimensionality of problems to just two dimensions. Once a problem's dimensions have been mapped to two dimensions, it can be easily plotted and understood by humans. The ability to visualize a multi-dimensional dataset provides a means of identifying which input variables have the greatest effect on determining a nominal or numeric output. Identifying these variables enables better training of neural network models; models can be more easily and quickly trained using only the input variables that appear to affect the outcome variable. The purpose of this project is to explore varying means of training neural networks and to utilize dimensionality reduction for visualizing and understanding complex datasets.
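
    A short Python sketch of the mapping-to-two-dimensions idea follows, using PCA via SVD; the dataset is synthetic, and PCA is only one of the reduction techniques the record alludes to.

      # Hedged sketch: project a 10-input dataset to 2-D with PCA and inspect
      # loadings to see which inputs drive the visible structure. Synthetic data.
      import numpy as np

      rng = np.random.default_rng(7)
      n, d = 500, 10
      X = rng.normal(size=(n, d))
      X[:, 0] *= 3.0                       # make inputs 0 and 3 dominate the variance
      X[:, 3] *= 2.0
      y = (X[:, 0] + 0.5 * X[:, 3] > 0).astype(int)   # outcome depends on 0 and 3

      Xc = X - X.mean(axis=0)
      _, _, Vt = np.linalg.svd(Xc, full_matrices=False)   # principal axes
      Z = Xc @ Vt[:2].T                    # 2-D coordinates, ready for plotting vs. y

      # Loadings hint at which original inputs matter most
      for i in range(d):
          print(f"x{i}: PC1 {Vt[0, i]:+.2f}, PC2 {Vt[1, i]:+.2f}")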

  4. A users' manual for MCPRAM (Monte Carlo PReprocessor for AMEER) and for the fuze options in AMEER (Aero Mechanical Equation Evaluation Routines)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    LaFarge, R.A.

    1990-05-01

    MCPRAM (Monte Carlo PReprocessor for AMEER), a computer program that uses Monte Carlo techniques to create an input file for the AMEER trajectory code, has been developed for the Sandia National Laboratories VAX and Cray computers. Users can select the number of trajectories to compute, which AMEER variables to investigate, and the type of probability distribution for each variable. Any legal AMEER input variable can be investigated anywhere in the input run stream with either a normal, uniform, or Rayleigh distribution. Users also have the option to use covariance matrices for the investigation of certain correlated variables such as booster pre-reentry errors and wind, axial force, and atmospheric models. In conjunction with MCPRAM, AMEER was modified to include the variables introduced by the covariance matrices and to include provisions for six types of fuze models. The new fuze models and the new AMEER variables are described in this report.
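
    A Python sketch of MCPRAM's core idea follows: sample selected variables from normal, uniform, or Rayleigh distributions and emit one input deck per trajectory. The variable names and the deck syntax are invented for illustration and are not AMEER's actual run-stream format.

      # Hedged sketch: Monte Carlo preprocessing -- draw each selected variable
      # from its assigned distribution and write one deck per trajectory.
      import numpy as np

      rng = np.random.default_rng(3)

      # (name, distribution, parameters) -- hypothetical trajectory variables
      spec = [
          ("WIND_SPEED", "rayleigh", {"scale": 6.0}),
          ("MASS",       "normal",   {"loc": 120.0, "scale": 2.5}),
          ("CD_AXIAL",   "uniform",  {"low": 0.28, "high": 0.34}),
      ]

      samplers = {
          "normal":   lambda p, n: rng.normal(p["loc"], p["scale"], n),
          "uniform":  lambda p, n: rng.uniform(p["low"], p["high"], n),
          "rayleigh": lambda p, n: rng.rayleigh(p["scale"], n),
      }

      n_traj = 3
      draws = {name: samplers[dist](params, n_traj) for name, dist, params in spec}

      for j in range(n_traj):
          print(f"$TRAJ {j + 1}")
          for name, _, _ in spec:
              print(f"  {name} = {draws[name][j]:.4f}")
          print("$END")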

  5. A linear and non-linear polynomial neural network modeling of dissolved oxygen content in surface water: Inter- and extrapolation performance with inputs' significance analysis.

    PubMed

    Šiljić Tomić, Aleksandra; Antanasijević, Davor; Ristić, Mirjana; Perić-Grujić, Aleksandra; Pocajt, Viktor

    2018-01-01

    Accurate prediction of water quality parameters (WQPs) is an important task in the management of water resources. Artificial neural networks (ANNs) are frequently applied for dissolved oxygen (DO) prediction, but often only their interpolation performance is checked. The aims of this research, besides interpolation, were the determination of the extrapolation performance of an ANN model developed for the prediction of DO content in the Danube River, and the assessment of the relationship between the significance of the inputs and the prediction error in the presence of values out of the training range. The applied ANN is a polynomial neural network (PNN), which performs embedded selection of the most important inputs during learning and provides a model in the form of linear and non-linear polynomial functions, which can then be used for a detailed analysis of the significance of the inputs. The available dataset of 1912 monitoring records for 17 water quality parameters was split into a "regular" subset that contains normally distributed and low-variability data, and an "extreme" subset that contains monitoring records with outlier values. The results revealed that the non-linear PNN model has good interpolation performance (R2 = 0.82), but it was not robust in extrapolation (R2 = 0.63). The analysis of the extrapolation results showed that the prediction errors are correlated with the significance of the inputs. Namely, out-of-training-range values of inputs with low importance do not significantly affect the PNN model performance, but their influence can be biased by the presence of multi-outlier monitoring records. Subsequently, linear PNN models were successfully applied to study the effect of water quality parameters on DO content. It was observed that the DO level is mostly affected by temperature, pH, biological oxygen demand (BOD), and phosphorus concentration, while in extreme conditions the importance of alkalinity and bicarbonates rises over that of pH and BOD. Copyright © 2017 Elsevier B.V. All rights reserved.
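
    The interpolation-versus-extrapolation check can be sketched in a few lines of Python, with a plain cubic polynomial standing in for the PNN and a synthetic DO-temperature relation; the point is only to show training on a "regular" range and scoring on out-of-range data.

      # Hedged sketch: fit on a "regular" input range, then score both inside
      # (interpolation) and outside (extrapolation) that range. Synthetic data.
      import numpy as np
      from numpy.polynomial import polynomial as P

      rng = np.random.default_rng(9)
      true_do = lambda t: 14.0 * np.exp(-0.05 * t)      # toy DO-vs-temperature law

      t_reg = rng.uniform(5, 20, 200)                   # training ("regular") range
      t_ext = rng.uniform(25, 35, 50)                   # out-of-training range
      y_reg = true_do(t_reg) + rng.normal(0, 0.2, t_reg.size)
      y_ext = true_do(t_ext) + rng.normal(0, 0.2, t_ext.size)

      coef = P.polyfit(t_reg, y_reg, deg=3)             # stand-in for the PNN

      def r2(y, yhat):
          return 1 - np.sum((y - yhat) ** 2) / np.sum((y - y.mean()) ** 2)

      print(f"interpolation R2: {r2(y_reg, P.polyval(t_reg, coef)):.2f}")
      print(f"extrapolation R2: {r2(y_ext, P.polyval(t_ext, coef)):.2f}")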

  6. Input Variability Facilitates Unguided Subcategory Learning in Adults

    ERIC Educational Resources Information Center

    Eidsvåg, Sunniva Sørhus; Austad, Margit; Plante, Elena; Asbjørnsen, Arve E.

    2015-01-01

    Purpose: This experiment investigated whether input variability would affect initial learning of noun gender subcategories in an unfamiliar, natural language (Russian), as it is known to assist learning of other grammatical forms. Method: Forty adults (20 men, 20 women) were familiarized with examples of masculine and feminine Russian words. Half…

  7. Parameter uncertainty and variability in evaluative fate and exposure models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hertwich, E.G.; McKone, T.E.; Pease, W.S.

    The human toxicity potential, a weighting scheme used to evaluate toxic emissions for life cycle assessment and toxics release inventories, is based on potential dose calculations and toxicity factors. This paper evaluates the variance in potential dose calculations that can be attributed to the uncertainty in chemical-specific input parameters as well as the variability in exposure factors and landscape parameters. A knowledge of the uncertainty allows us to assess the robustness of a decision based on the toxicity potential; a knowledge of the sources of uncertainty allows one to focus resources if the uncertainty is to be reduced. The potential dose of 236 chemicals was assessed. The chemicals were grouped by dominant exposure route, and a Monte Carlo analysis was conducted for one representative chemical in each group. The variance is typically one to two orders of magnitude. For comparison, the point estimates of potential dose for the 236 chemicals span ten orders of magnitude. Most of the variance in the potential dose is due to chemical-specific input parameters, especially half-lives, although exposure factors such as fish intake and the source of drinking water can be important for chemicals whose dominant exposure is through indirect routes. Landscape characteristics are generally of minor importance.

  8. The influence of simulator input conditions on the wear of total knee replacements: An experimental and computational study

    PubMed Central

    Brockett, Claire L; Abdelgaied, Abdellatif; Haythornthwaite, Tony; Hardaker, Catherine; Fisher, John; Jennings, Louise M

    2016-01-01

    Advancements in knee replacement design, material and sterilisation processes have provided improved clinical results. However, surface wear of the polyethylene leading to osteolysis is still considered the longer-term risk factor. Experimental wear simulation is an established method for evaluating the wear performance of total joint replacements. The aim of this study was to investigate the influence of simulation input conditions, specifically input kinematic magnitudes, waveforms and directions of motion and position of the femoral centre of rotation, on the wear performance of a fixed-bearing total knee replacement through a combined experimental and computational approach. Studies were completed using conventional and moderately cross-linked polyethylene to determine whether the influence of these simulation input conditions varied with material. The position of the femoral centre of rotation and the input kinematics were shown to have a significant influence on the wear rates. Similar trends were shown for both the conventional and moderately cross-linked polyethylene materials, although lower wear rates were found for the moderately cross-linked polyethylene due to the higher level of cross-linking. The most important factor influencing the wear was the position of the relative contact point at the femoral component and tibial insert interface. This was dependent on the combination of input displacement magnitudes, waveforms, direction of motion and femoral centre of rotation. This study provides further evidence that in order to study variables such as design and material in total knee replacement, it is important to carefully control knee simulation conditions. This can be more effectively achieved through the use of displacement control simulation. PMID:27160561

  9. Study of a control strategy for grid side converter in doubly-fed wind power system

    NASA Astrophysics Data System (ADS)

    Zhu, D. J.; Tan, Z. L.; Yuan, F.; Wang, Q. Y.; Ding, M.

    2016-08-01

    The grid side converter is an important part of the excitation system of the doubly-fed asynchronous generator used in wind power systems. As a three-phase voltage source PWM converter, it can not only transfer slip power in the form of active power, but also adjust the reactive power of the grid. This paper proposes a control approach for improving its performance. In this approach, the dc voltage is regulated by a sliding mode variable structure control scheme, and the current by a variable structure controller based on input-output linearization. The theoretical bases of sliding mode variable structure control are introduced, and the stability proof is presented. The switching function of the system has been deduced, a sliding mode voltage controller model has been established, and the output of the outer voltage loop serves as the reference for the inner current loop. An affine nonlinear model of the two-input, two-output current equations on the d-q axes has been established, and its satisfaction of the conditions for exact linearization was proved. In order to improve the anti-jamming capability of the system, a variable structure control term was added to the current controller, and the control law was deduced. A dual-loop control with sliding mode control in the outer voltage loop and linearizing variable structure control in the inner current loop is thus proposed. Simulation results demonstrate the effectiveness of the proposed control strategy, even during dc reference voltage and system load variations.
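
    As a much-reduced illustration of the switching idea behind the outer voltage loop, the Python sketch below regulates a scalar first-order plant with u = -K·sign(s); the plant constants, gains, and disturbance are all invented, and the converter's full d-q model is not represented.

      # Hedged sketch: scalar sliding-mode regulation. The error s = v - v_ref
      # is driven to zero by a switching law; residual chattering ~ dt*b*K.
      import numpy as np

      dt, steps = 1e-4, 5000
      a, b = 50.0, 400.0              # toy first-order plant: dv/dt = -a*v + b*u + d(t)
      v, v_ref, K = 0.0, 1.0, 2.0

      for k in range(steps):
          d = 30.0 * np.sin(2 * np.pi * 50 * k * dt)   # load/grid disturbance
          s = v - v_ref                                # sliding surface
          u = -K * np.sign(s)                          # switching control
          v += dt * (-a * v + b * u + d)               # Euler step of the plant

      print(f"final voltage error: {v - v_ref:+.4f}")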

  10. The importance of mineralogical input into geometallurgy programs

    USGS Publications Warehouse

    Hoal, K. Olson; Woodhead, J.D.; Smith, Kathleen S.

    2013-01-01

    Mineralogy is the link between ore formation and ore extraction. It is the most fundamental component of geomet programs, and the most important aspect of a life-of-project approach to mineral resource projects. Understanding orebodies is achieved by understanding the mineralogy and texture of the materials, throughout the process, because minerals hold the information required to unlock the value they contain. Geomet mineralogy programs absolutely require the appropriate expertise and at least three steps of mineral characterisation prior to using semi-automated or other methods: field examination, thorough core logging, and optical microscopy. Economic geological inputs for orebody characterisation are necessary for orebody understanding, and are exemplified by current research in the Zambian Copperbelt, where revised sequence stratigraphy and understanding of alteration, metasomatism and metamorphism can be used to predict topical issues at mine sites. Environmental inputs for sustainability characterisation are demonstrated by recent work on tailings from the Leadville, Colorado, USA area, including linking mineralogy to water quality issues. Risk assessments need to take into account the technical uncertainties around geological variability and mineral extractability, and mineralogy is the only metric that can be used to make this risk contribution.

  11. Statistics of optimal information flow in ensembles of regulatory motifs

    NASA Astrophysics Data System (ADS)

    Crisanti, Andrea; De Martino, Andrea; Fiorentino, Jonathan

    2018-02-01

    Genetic regulatory circuits universally cope with different sources of noise that limit their ability to coordinate input and output signals. In many cases, optimal regulatory performance can be thought to correspond to configurations of variables and parameters that maximize the mutual information between inputs and outputs. Since the mid-2000s, such optima have been well characterized in several biologically relevant cases. Here we use methods of statistical field theory to calculate the statistics of the maximal mutual information (the "capacity") achievable by tuning the input variable only in an ensemble of regulatory motifs, such that a single controller regulates N targets. Assuming (i) sufficiently large N , (ii) quenched random kinetic parameters, and (iii) small noise affecting the input-output channels, we can accurately reproduce numerical simulations both for the mean capacity and for the whole distribution. Our results provide insight into the inherent variability in effectiveness occurring in regulatory systems with heterogeneous kinetic parameters.
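
    For a concrete notion of the "capacity" being maximized, the Python sketch below computes the capacity of a small discrete channel with the standard Blahut-Arimoto iteration; the two-state channel is a generic example, not a regulatory-motif model from the paper.

      # Hedged sketch: Blahut-Arimoto iteration for the capacity of a discrete
      # memoryless channel, maximizing mutual information over the input law.
      import numpy as np

      def blahut_arimoto(P, tol=1e-9, max_iter=10_000):
          """P[x, y] = p(y|x). Returns (capacity in bits, optimal input law)."""
          m = P.shape[0]
          r = np.full(m, 1.0 / m)                 # start from the uniform input law
          for _ in range(max_iter):
              q = r @ P                           # output marginal p(y)
              with np.errstate(divide="ignore", invalid="ignore"):
                  logratio = np.where(P > 0, np.log2(P / q), 0.0)
              d = (P * logratio).sum(axis=1)      # D(p(y|x) || q) for each input x
              r_new = r * np.exp2(d)
              r_new /= r_new.sum()
              done = np.abs(r_new - r).max() < tol
              r = r_new
              if done:
                  break
          return float(r @ d), r

      # Binary symmetric channel, crossover 0.1: capacity = 1 - H2(0.1) ~ 0.531 bits
      P = np.array([[0.9, 0.1],
                    [0.1, 0.9]])
      C, r_opt = blahut_arimoto(P)
      print(f"capacity = {C:.3f} bits, optimal input = {r_opt}")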

  12. Missing pulse detector for a variable frequency source

    DOEpatents

    Ingram, Charles B.; Lawhorn, John H.

    1979-01-01

    A missing pulse detector is provided which has the capability of monitoring a varying frequency pulse source to detect the loss of a single pulse or total loss of signal from the source. A frequency-to-current converter is used to program the output pulse width of a variable period retriggerable one-shot to maintain a pulse width slightly longer than one-half the present monitored pulse period. The retriggerable one-shot is triggered at twice the input pulse rate by employing a frequency doubler circuit connected between the one-shot input and the variable frequency source being monitored. The one-shot remains in the triggered or unstable state under normal conditions even though the source period is varying. A loss of an input pulse or single period of a fluctuating signal input will cause the one-shot to revert to its stable state, changing the output signal level to indicate a missing pulse or signal.
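
    A software analogue of the scheme can be sketched in Python: trigger events arrive at twice the source rate (the frequency doubler), and a timeout window slightly longer than the last half-period plays the role of the retriggerable one-shot. Timings and the margin factor are invented.

      # Hedged sketch: flag a missing pulse when the next doubled-rate trigger
      # fails to arrive within a window stretched slightly past the last gap.
      def detect_missing(trigger_times, margin=1.1):
          """trigger_times: doubled-rate event times. Returns the times at which
          the software 'one-shot' timed out, i.e. a source pulse went missing."""
          timeouts = []
          for prev, cur, nxt in zip(trigger_times, trigger_times[1:], trigger_times[2:]):
              window = margin * (cur - prev)     # ~ half the source period, stretched
              if nxt - cur > window:
                  timeouts.append(cur + window)
          return timeouts

      # Doubled-rate triggers every 0.5 time units, with one event lost after t=3.0
      events = [0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 4.0, 4.5, 5.0]
      print(detect_missing(events))   # flags the gap between 3.0 and 4.0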

  13. Input and language development in bilingually developing children.

    PubMed

    Hoff, Erika; Core, Cynthia

    2013-11-01

    Language skills in young bilingual children are highly varied as a result of the variability in their language experiences, making it difficult for speech-language pathologists to differentiate language disorder from language difference in bilingual children. Understanding the sources of variability in bilingual contexts and the resulting variability in children's skills will help improve language assessment practices by speech-language pathologists. In this article, we review literature on bilingual first language development for children under 5 years of age. We describe the rate of development in single and total language growth, we describe effects of quantity of input and quality of input on growth, and we describe effects of family composition on language input and language growth in bilingual children. We provide recommendations for language assessment of young bilingual children and consider implications for optimizing children's dual language development.

  14. Effect of plasma arc welding variables on fusion zone grain size and hardness of AISI 321 austenitic stainless steel

    NASA Astrophysics Data System (ADS)

    Kondapalli, S. P.

    2017-12-01

    In the present work, pulsed current microplasma arc welding is carried out on AISI 321 austenitic stainless steel of 0.3 mm thickness. Peak current, base current, pulse rate, and pulse width are chosen as the input variables, whereas grain size and hardness are considered the output responses. The response surface method is adopted using a Box-Behnken design, and in total 27 experiments are performed. An empirical relation between the input variables and the output responses is developed using statistical software and checked for adequacy with analysis of variance (ANOVA) at the 95% confidence level. The main effects and interaction effects of the input variables on the output responses are also studied.
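
    A hedged Python sketch of the workflow follows: build the coded four-factor Box-Behnken design (24 edge runs plus 3 center points, matching the 27 experiments) and fit a linear-plus-interaction model by least squares. The response values are synthetic placeholders, not the AISI 321 data.

      # Hedged sketch: coded Box-Behnken design and least-squares fit of an
      # empirical model with linear and two-factor interaction terms.
      import itertools
      import numpy as np

      # Edge runs: each factor pair at +/-1 with the other two factors at 0
      runs = []
      for i, j in itertools.combinations(range(4), 2):
          for a, b in itertools.product((-1, 1), repeat=2):
              row = [0, 0, 0, 0]
              row[i], row[j] = a, b
              runs.append(row)
      runs += [[0, 0, 0, 0]] * 3                     # center points
      X = np.array(runs, dtype=float)                # 27 runs in coded units

      rng = np.random.default_rng(5)
      y = 40 + 3*X[:, 0] - 2*X[:, 1] + 1.5*X[:, 0]*X[:, 1] + rng.normal(0, 0.5, len(X))

      # Design matrix: intercept, linear terms, two-factor interactions
      cols = [np.ones(len(X))] + [X[:, k] for k in range(4)]
      cols += [X[:, i] * X[:, j] for i, j in itertools.combinations(range(4), 2)]
      A = np.column_stack(cols)
      coef, *_ = np.linalg.lstsq(A, y, rcond=None)
      print(np.round(coef, 2))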

  15. Simulation of the Predictive Control Algorithm for Container Crane Operation using Matlab Fuzzy Logic Tool Box

    NASA Technical Reports Server (NTRS)

    Richardson, Albert O.

    1997-01-01

    This research investigated the use of fuzzy logic, via the Matlab Fuzzy Logic Toolbox, to design optimized controller systems. The engineering system for which the controller was designed and simulated was the container crane. The fuzzy logic algorithm that was investigated was the 'predictive control' algorithm. The plant dynamics of the container crane are representative of many important systems, including robotic arm movements. The container crane that was investigated had a trolley motor and a hoist motor. The total distance to be traveled by the trolley was 15 meters. The obstruction height was 5 meters. Crane height was 17.8 meters. Trolley mass was 7500 kilograms. Load mass was 6450 kilograms. Maximum trolley and rope velocities were 1.25 meters per sec. and 0.3 meters per sec., respectively. The fuzzy logic approach allowed the inclusion, in the controller model, of performance indices that are more effectively defined in linguistic terms. These include 'safety' and 'cargo swaying'. Two fuzzy inference systems were implemented using the Matlab simulation package, namely the Mamdani system (which relates fuzzy input variables to fuzzy output variables) and the Sugeno system (which relates fuzzy input variables to a crisp output variable). It is found that the Sugeno FIS is better suited to including aspects of those plant dynamics whose mathematical relationships can be determined.
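
    A minimal zero-order Sugeno inference step in Python for one crane-like input is sketched below, showing how fuzzy rule strengths blend crisp rule outputs; the membership functions and rule constants are invented, not the report's controller.

      # Hedged sketch: zero-order Sugeno inference -- crisp rule outputs are
      # blended by the firing strengths of triangular membership functions.
      def tri(x, a, b, c):
          """Triangular membership function peaking at b."""
          if x <= a or x >= c:
              return 0.0
          return (x - a) / (b - a) if x < b else (c - x) / (c - b)

      def sugeno_trolley_speed(sway_deg):
          # Rule base: IF sway is <set> THEN speed = <crisp constant>
          rules = [
              (tri(sway_deg, -1.0, 0.0, 1.0), 1.25),   # small sway -> full speed
              (tri(sway_deg,  0.5, 2.0, 3.5), 0.60),   # moderate sway -> slow down
              (tri(sway_deg,  3.0, 5.0, 7.0), 0.10),   # large sway -> crawl
          ]
          num = sum(w * z for w, z in rules)
          den = sum(w for w, _ in rules)
          return num / den if den else 0.0

      for sway in (0.2, 1.8, 4.5):
          print(f"sway={sway:.1f} deg -> trolley speed {sugeno_trolley_speed(sway):.2f} m/s")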

  16. Community models for wildlife impact assessment: a review of concepts and approaches

    USGS Publications Warehouse

    Schroeder, Richard L.

    1987-01-01

    The first two sections of this paper are concerned with defining and bounding communities, and with describing those attributes of the community that are quantifiable and suitable for wildlife impact assessment purposes. Prior to the development or use of a community model, it is important to have a clear understanding of the concept of a community and a knowledge of the types of community attributes that can serve as outputs for the development of models. Clearly defined, unambiguous model outputs are essential for three reasons: (1) to ensure that the measured community attributes relate to the wildlife resource objectives of the study; (2) to allow testing of the outputs in experimental studies, to determine accuracy, and to allow for improvements based on such testing; and (3) to enable others to clearly understand the community attribute that has been measured. The third section of this paper describes input variables that may be used to predict various community attributes. These input variables do not include direct measures of wildlife populations. Most impact assessments involve projects that result in drastic changes in habitat, such as changes in land use, vegetation, or available area. Therefore, the model input variables described in this section deal primarily with habitat-related features. Several existing community models are described in the fourth section of this paper. A general description of each model is provided, including the nature of the input variables and the model output. The logic and assumptions of each model are discussed, along with the data requirements needed to use the model. The fifth section provides guidance on the selection and development of community models. Identification of the community attribute that is of concern will determine the type of model most suitable for a particular application. This section provides guidelines on selecting an existing model, as well as a discussion of the major steps to be followed in modifying an existing model or developing a new model. Considerations associated with the use of community models with the Habitat Evaluation Procedures are also discussed. The final section of the paper summarizes major findings of interest to field biologists and provides recommendations concerning the implementation of selected concepts in wildlife community analyses.

  17. Consequences of acute and long-term removal of neuromodulatory input on the episodic gastric rhythm of the crab Cancer borealis

    PubMed Central

    Marder, Eve

    2015-01-01

    For decades, the episodic gastric rhythm of the crustacean stomatogastric nervous system (STNS) has served as an important model system for understanding the generation of rhythmic motor behaviors. Here we quantitatively describe many features of the gastric rhythm of the crab Cancer borealis under several conditions. First, we analyzed spontaneous gastric rhythms produced by freshly dissected preparations of the STNS, including the cycle frequency and phase relationships among gastric units. We find that phase is relatively conserved across frequency, similar to the pyloric rhythm. We also describe relationships between these two rhythms, including a significant gastric/pyloric frequency correlation. We then performed continuous, days-long extracellular recordings of gastric activity from preparations of the STNS in which neuromodulatory inputs to the stomatogastric ganglion were left intact and also from preparations in which these modulatory inputs were cut (decentralization). This allowed us to provide quantitative descriptions of variability and phase conservation within preparations across time. For intact preparations, gastric activity was more variable than pyloric activity but remained relatively stable across 4–6 days, and many significant correlations were found between phase and frequency within animals. Decentralized preparations displayed fewer episodes of gastric activity, with altered phase relationships, lower frequencies, and reduced coordination both among gastric units and between the gastric and pyloric rhythms. Together, these results provide insight into the role of neuromodulation in episodic pattern generation and the extent of animal-to-animal variability in features of spontaneously occurring gastric rhythms. PMID:26156388

  18. Effects of Varying Cloud Cover on Springtime Runoff in California's Sierra Nevada

    NASA Astrophysics Data System (ADS)

    Sumargo, E.; Cayan, D. R.

    2017-12-01

    This study investigates how cloud cover modifies snowmelt-runoff processes in Sierra Nevada watersheds during dry and wet periods. We use two of the California Department of Water Resources' (DWR's) quasi-operational models of the Tuolumne and Merced River basins, developed from the USGS Precipitation-Runoff Modeling System (PRMS). Model simulations are conducted after a validated optimization of model performance in simulating recent (1996-2014) historical variability in the Tuolumne and Merced basins, using solar radiation (Qsi) derived from Geostationary Operational Environmental Satellite (GOES) remote sensing. Specifically, the questions we address are: 1) how sensitive are snowmelt and runoff in the Tuolumne and Merced River basins to Qsi variability associated with cloud cover variations, and 2) does this sensitivity change in dry vs. wet years? To address these questions, we conduct two experiments, in which: E1) theoretical clear-sky Qsi is used as an input to PRMS, and E2) the annual harmonic cycle of Qsi is used as an input to PRMS. The resulting hydrographs from these experiments exhibit changes in peak streamflow timing by several days to a few weeks and smaller streamflow variability when compared to the actual flows and the original simulations. For E1, despite some variations, this pattern persists when the result is evaluated for dry-year and wet-year subsets, reflecting the consistently higher Qsi input available. For E2, the hydrograph shows a later spring-summer streamflow peak in the dry-year subset when compared to the original simulations, indicating the relative importance of the modulating effect of cloud cover on snowmelt-runoff in drier years.

  19. Quantification of 11C-Laniquidar Kinetics in the Brain.

    PubMed

    Froklage, Femke E; Boellaard, Ronald; Bakker, Esther; Hendrikse, N Harry; Reijneveld, Jaap C; Schuit, Robert C; Windhorst, Albert D; Schober, Patrick; van Berckel, Bart N M; Lammertsma, Adriaan A; Postnov, Andrey

    2015-11-01

    Overexpression of the multidrug efflux transporter P-glycoprotein may play an important role in pharmacoresistance. (11)C-laniquidar is a newly developed tracer of P-glycoprotein expression. The aim of this study was to develop a pharmacokinetic model for quantification of (11)C-laniquidar uptake and to assess its test-retest variability. Two (test-retest) dynamic (11)C-laniquidar PET scans were obtained in 8 healthy subjects. Plasma input functions were obtained using online arterial blood sampling with metabolite corrections derived from manual samples. Coregistered T1 MR images were used for region-of-interest definition. Time-activity curves were analyzed using various plasma input compartmental models. (11)C-laniquidar was metabolized rapidly, with a parent plasma fraction of 50% at 10 min after tracer injection. In addition, the first-pass extraction of (11)C-laniquidar was low. (11)C-laniquidar time-activity curves were best fitted by an irreversible single-tissue compartment (1T1K) model among conventional models. Nevertheless, significantly better fits were obtained using two parallel single-tissue compartments, one for the parent tracer and the other for labeled metabolites (dual-input model). Robust K1 results were also obtained by fitting the first 5 min of PET data to the 1T1K model, at least when 60-min plasma input data were used. For both models, the test-retest variability of the (11)C-laniquidar rate constant for transfer from arterial plasma to tissue (K1) was approximately 19%. The accurate quantification of (11)C-laniquidar kinetics in the brain is hampered by its fast metabolism and the likelihood that labeled metabolites enter the brain. Best fits for the entire 60 min of data were obtained using a dual-input model accounting for uptake of (11)C-laniquidar and its labeled metabolites. Alternatively, K1 could be obtained from a 5-min scan using a standard 1T1K model. In both cases, the test-retest variability of K1 was approximately 19%. © 2015 by the Society of Nuclear Medicine and Molecular Imaging, Inc.
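
    As a sketch of why the 1T1K model is convenient, note that irreversible uptake gives C_tissue(t) = K1·∫Cp dt, so K1 follows from a one-parameter regression on the integrated plasma input; the Python below uses synthetic curves, not the study's data.

      # Hedged sketch: estimate K1 for an irreversible one-tissue model by
      # regressing tissue activity on the integrated plasma input. Synthetic curves.
      import numpy as np

      t = np.linspace(0, 5, 301)                       # minutes
      cp = 100 * t * np.exp(-2.0 * t)                  # synthetic plasma input
      int_cp = np.concatenate([[0], np.cumsum(0.5 * (cp[1:] + cp[:-1]) * np.diff(t))])

      K1_true = 0.12
      ct = K1_true * int_cp + np.random.default_rng(4).normal(0, 0.2, t.size)

      K1_hat = np.sum(ct * int_cp) / np.sum(int_cp ** 2)   # least squares through origin
      print(f"K1 estimate: {K1_hat:.3f} /min (true {K1_true})")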

  20. Machine learning for toxicity characterization of organic chemical emissions using USEtox database: Learning the structure of the input space.

    PubMed

    Marvuglia, Antonino; Kanevski, Mikhail; Benetto, Enrico

    2015-10-01

    Toxicity characterization of chemical emissions in Life Cycle Assessment (LCA) is a complex task which usually proceeds via multimedia (fate, exposure and effect) models attached to models of dose-response relationships to assess the effects on target. Different models and approaches do exist, but all require a vast amount of data on the properties of the chemical compounds being assessed, which are hard to collect or rarely publicly available (especially for thousands of less common or newly developed chemicals), therefore hampering the assessment in LCA practice. An example is USEtox, a consensual model for the characterization of human toxicity and freshwater ecotoxicity. This paper places itself in a line of research aiming to provide a methodology to reduce the number of input parameters necessary to run multimedia fate models, focusing in particular on the application of the USEtox toxicity model. By focusing on USEtox, this paper pursues two main goals: 1) performing an extensive exploratory analysis (using dimensionality reduction techniques) of the input space constituted by the substance-specific properties, with the aim of detecting particular patterns in the data manifold and estimating the dimension of the subspace in which the data manifold actually lies; and 2) exploring the application of a set of linear models, based on partial least squares (PLS) regression, as well as a nonlinear model (general regression neural network--GRNN), in search of an automatic selection strategy for the most informative variables according to the modelled output (USEtox factor). After extensive analysis, the intrinsic dimension of the input manifold was identified as between three and four. The variables selected as most informative may vary according to the output modelled and the model used, but for the toxicity factors modelled in this paper they are coherent with prior expectations based on scientific knowledge of toxicity factor modelling. The outcomes of the analysis are thus promising for the future application of the approach to other portions of the model affected by important data gaps, e.g., the calculation of human health effect factors. Copyright © 2015. Published by Elsevier Ltd.
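
    A rough sketch of the PLS-based variable-ranking step (our illustration, not the authors' code; the substance properties and data are synthetic placeholders):

    ```python
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression

    rng = np.random.default_rng(1)
    X = rng.normal(size=(200, 8))              # 8 substance-specific properties
    y = 2.0 * X[:, 0] - 1.5 * X[:, 3] + rng.normal(0, 0.1, 200)  # USEtox-style output

    pls = PLSRegression(n_components=3).fit(X, y)
    importance = np.abs(pls.coef_).ravel()     # crude informativeness score
    print("most informative inputs:", np.argsort(importance)[::-1][:3])
    ```

    A VIP score would be the more common ranking criterion in PLS practice; absolute coefficients are used here only for brevity.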

  1. Ventricular repolarization variability for hypoglycemia detection.

    PubMed

    Ling, Steve; Nguyen, H T

    2011-01-01

    Hypoglycemia is the most acute and common complication of Type 1 diabetes and is a limiting factor in the glycemic management of diabetes. In this paper, two main contributions are presented: firstly, ventricular repolarization variabilities are introduced for hypoglycemia detection; secondly, a swarm-based support vector machine (SVM) algorithm with the repolarization variabilities as inputs is developed to detect hypoglycemia. By using the algorithm and including several repolarization variabilities as inputs, the best hypoglycemia detection performance is found, with sensitivity and specificity of 82.14% and 60.19%, respectively.

  2. Throughfall under a teak plantation in Thailand: a multifactorial analysis on the effects of canopy phenology and meteorological conditions

    NASA Astrophysics Data System (ADS)

    Tanaka, N.; Levia, D. F., Jr.; Igarashi, Y.; Nanko, K.; Yoshifuji, N.; Tanaka, K.; Chatchai, T.; Suzuki, M.; Kumagai, T.

    2014-12-01

    Teak (Tectona grandis Linn. f.) plantations cover vast areas throughout Southeast Asia and are of great economic importance. This study has sought to increase our understanding of throughfall inputs under teak by analyzing the abiotic and biotic factors governing throughfall amounts and throughfall ratios in relation to three canopy phenophases (leafless, leafing, and leafed). There is no rain during the brief leaf senescence phenophase. Daily data were available for throughfall volumes and depths, as well as for leaf area index. Detailed meteorological data were available in situ every ten minutes. Leveraging this high-resolution field data, we employed boosted regression trees (BRT) analysis to identify the primary controls on throughfall amount and ratio during each of the three canopy phenophases. Whereas throughfall amounts were always dominated by the magnitude of rainfall (as expected), throughfall ratios were governed by a suite of predictor variables during each phenophase. The BRT analysis demonstrated that throughfall ratio in the leafless phase was most influenced (in descending order of importance) by air temperature, rainfall amount, maximum wind speed, and rainfall intensity. Throughfall ratio in the leafed phenophase was dominated by rainfall amount, which exerted 54.0% of the relative influence. The leafing phenophase was an intermediate case where rainfall amount, air temperature, and vapor pressure deficit were most important. Our results highlight the fact that throughfall ratios are differentially influenced by a suite of meteorological variables during leafless, leafing, and leafed phenophases. Abiotic variables (rainfall amount, air temperature, vapor pressure deficit, and maximum wind speed) trumped leaf area index and stand density in their effect on throughfall ratio. The leafing phenophase, while transitional in nature and short in duration, has a detectable and unique impact on water inputs to teak plantations. Further work is clearly needed to better gauge the importance of the leaf emergence period to the stemflow hydrology and forest biogeochemistry of teak plantations.
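
    A minimal analogue of the BRT step (an assumed setup, not the study's code): gradient-boosted trees yield a relative-influence score for each predictor of throughfall ratio.

    ```python
    import numpy as np
    from sklearn.ensemble import GradientBoostingRegressor

    rng = np.random.default_rng(2)
    names = ["rainfall_amount", "air_temp", "max_wind", "rain_intensity"]
    X = rng.uniform(size=(300, len(names)))
    y = 0.6 * X[:, 0] + 0.3 * X[:, 1] + rng.normal(0, 0.05, 300)  # toy ratio

    brt = GradientBoostingRegressor(n_estimators=300, learning_rate=0.05).fit(X, y)
    for name, imp in sorted(zip(names, brt.feature_importances_), key=lambda p: -p[1]):
        print(f"{name}: {100 * imp:.1f}% relative influence")
    ```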

  3. The Role of Learner and Input Variables in Learning Inflectional Morphology

    ERIC Educational Resources Information Center

    Brooks, Patricia J.; Kempe, Vera; Sionov, Ariel

    2006-01-01

    To examine effects of input and learner characteristics on morphology acquisition, 60 adult English speakers learned to inflect masculine and feminine Russian nouns in nominative, dative, and genitive cases. By varying training vocabulary size (i.e., type variability), holding constant the number of learning trials, we tested whether learners…

  4. Wideband low-noise variable-gain BiCMOS transimpedance amplifier

    NASA Astrophysics Data System (ADS)

    Meyer, Robert G.; Mack, William D.

    1994-06-01

    A new monolithic variable-gain transimpedance amplifier is described. The circuit is realized in BiCMOS technology and has a measured gain of 98 kΩ, a bandwidth of 128 MHz, an input noise current spectral density of 1.17 pA/√Hz, and an input signal-current handling capability of 3 mA.

  5. Integrated controls design optimization

    DOEpatents

    Lou, Xinsheng; Neuschaefer, Carl H.

    2015-09-01

    A control system (207) for optimizing a chemical looping process of a power plant includes an optimizer (420), an income algorithm (230), a cost algorithm (225), and chemical looping process models. The process models are used to predict the process outputs from process input variables. Some of the process input and output variables are related to the income of the plant; others are related to the cost of plant operations. The income algorithm (230) provides an income input to the optimizer (420) based on a plurality of input parameters (215) of the power plant. The cost algorithm (225) provides a cost input to the optimizer (420) based on a plurality of output parameters (220) of the power plant. The optimizer (420) determines an optimized operating parameter solution based on at least one of the income input and the cost input, and supplies the optimized operating parameter solution to the power plant.

  6. Application of kernel principal component analysis and computational machine learning to exploration of metabolites strongly associated with diet.

    PubMed

    Shiokawa, Yuka; Date, Yasuhiro; Kikuchi, Jun

    2018-02-21

    Computer-based technological innovation provides advancements in sophisticated and diverse analytical instruments, enabling massive amounts of data collection with relative ease. This is accompanied by a fast-growing demand for technological progress in data mining methods for analysis of big data derived from chemical and biological systems. From this perspective, use of a general "linear" multivariate analysis alone limits interpretations due to "non-linear" variations in metabolic data from living organisms. Here we describe a kernel principal component analysis (KPCA)-incorporated analytical approach for extracting useful information from metabolic profiling data. To overcome the limitation of important variable (metabolite) determinations, we incorporated a random forest conditional variable importance measure into our KPCA-based analytical approach to demonstrate the relative importance of metabolites. Using a market basket analysis, hippurate, the most important variable detected in the importance measure, was associated with high levels of some vitamins and minerals present in foods eaten the previous day, suggesting a relationship between increased hippurate and intake of a wide variety of vegetables and fruits. Therefore, the KPCA-incorporated analytical approach described herein enabled us to capture input-output responses, and should be useful not only for metabolic profiling but also for profiling in other areas of biological and environmental systems.
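
    The sketch below mirrors the described workflow in miniature: a kernel PCA ordination of synthetic metabolite data followed by a random-forest importance ranking. Permutation importance stands in for the conditional variable importance measure used in the paper, and all data and dimensions are invented.

    ```python
    import numpy as np
    from sklearn.decomposition import KernelPCA
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.inspection import permutation_importance

    rng = np.random.default_rng(3)
    X = rng.normal(size=(150, 10))             # metabolite intensities
    y = np.sin(X[:, 0]) + 0.5 * X[:, 2] + rng.normal(0, 0.1, 150)  # diet proxy

    # 'scores' holds the 2-D nonlinear ordination of the samples
    scores = KernelPCA(n_components=2, kernel="rbf").fit_transform(X)
    rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
    imp = permutation_importance(rf, X, y, n_repeats=10, random_state=0)
    print("top metabolites:", np.argsort(imp.importances_mean)[::-1][:3])
    ```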

  7. ProMC: Input-output data format for HEP applications using varint encoding

    NASA Astrophysics Data System (ADS)

    Chekanov, S. V.; May, E.; Strand, K.; Van Gemmeren, P.

    2014-10-01

    A new data format for Monte Carlo (MC) events, or any structural data, including experimental data, is discussed. The format is designed to store data in a compact binary form using variable-size integer encoding as implemented in Google's Protocol Buffers package. This approach is implemented in the ProMC library, which produces smaller file sizes for MC records compared to the existing input-output libraries used in high-energy physics (HEP). Other important features of the proposed format are a separation of abstract data layouts from concrete programming implementations, self-description and random access. Data stored in ProMC files can be written, read and manipulated in a number of programming languages, such as C++, Java, Fortran and Python.
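
    The core idea is base-128 varint encoding, as in Protocol Buffers: each byte carries 7 payload bits, and the high bit flags continuation, so small integers (the common case in MC records) take a single byte. A sketch of the encoding itself, not of the ProMC implementation:

    ```python
    def encode_varint(n: int) -> bytes:
        """Base-128 varint encoding of a non-negative integer."""
        out = bytearray()
        while True:
            byte = n & 0x7F
            n >>= 7
            if n:
                out.append(byte | 0x80)   # high bit set: more bytes follow
            else:
                out.append(byte)
                return bytes(out)

    def decode_varint(data: bytes) -> int:
        n = shift = 0
        for b in data:
            n |= (b & 0x7F) << shift
            if not b & 0x80:
                break
            shift += 7
        return n

    assert encode_varint(300) == b"\xac\x02"   # two bytes instead of four
    assert decode_varint(b"\xac\x02") == 300
    ```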

  8. Selection of relevant input variables in storm water quality modeling by multiobjective evolutionary polynomial regression paradigm

    NASA Astrophysics Data System (ADS)

    Creaco, E.; Berardi, L.; Sun, Siao; Giustolisi, O.; Savic, D.

    2016-04-01

    The growing availability of field data, from information and communication technologies (ICTs) in "smart" urban infrastructures, allows data modeling to understand complex phenomena and to support management decisions. Among the analyzed phenomena, those related to storm water quality modeling have recently been gaining interest in the scientific literature. Nonetheless, the large amount of available data poses the problem of selecting relevant variables to describe a phenomenon and enable robust data modeling. This paper presents a procedure for the selection of relevant input variables using the multiobjective evolutionary polynomial regression (EPR-MOGA) paradigm. The procedure is based on scrutinizing the explanatory variables that appear inside the set of EPR-MOGA symbolic model expressions of increasing complexity and goodness of fit to target output. The strategy also enables the selection to be validated by engineering judgement. In such context, the multiple case study extension of EPR-MOGA, called MCS-EPR-MOGA, is adopted. The application of the proposed procedure to modeling storm water quality parameters in two French catchments shows that it was able to significantly reduce the number of explanatory variables for successive analyses. Finally, the EPR-MOGA models obtained after the input selection are compared with those obtained by using the same technique without benefitting from input selection and with those obtained in previous works where other data-modeling techniques were used on the same data. The comparison highlights the effectiveness of both EPR-MOGA and the input selection procedure.

  9. Influence of the UV Environment on the Synthesis of Prebiotic Molecules.

    PubMed

    Ranjan, Sukrit; Sasselov, Dimitar D

    2016-01-01

    Ultraviolet radiation is common to most planetary environments and could play a key role in the chemistry of molecules relevant to abiogenesis (prebiotic chemistry). In this work, we explore the impact of UV light on prebiotic chemistry that might occur in liquid water on the surface of a planet with an atmosphere. We consider effects including atmospheric absorption, attenuation by water, and stellar variability to constrain the UV input as a function of wavelength. We conclude that the UV environment would be characterized by broadband input, and wavelengths below 204 nm and 168 nm would be shielded out by atmospheric CO2 and water, respectively. We compare this broadband prebiotic UV input to the narrowband UV sources (e.g., mercury lamps) often used in laboratory studies of prebiotic chemistry and explore the implications for the conclusions drawn from these experiments. We consider as case studies the ribonucleotide synthesis pathway of Powner et al. (2009) and the sugar synthesis pathway of Ritson and Sutherland (2012). Irradiation by narrowband UV light from a mercury lamp formed an integral component of these studies; we quantitatively explore the impact of more realistic UV input on the conclusions that can be drawn from these experiments. Finally, we explore the constraints solar UV input places on the buildup of prebiotically important feedstock gases like CH4 and HCN. Our results demonstrate the importance of characterizing the wavelength dependence (action spectra) of prebiotic synthesis pathways to determine how pathways derived under laboratory irradiation conditions will function under planetary prebiotic conditions.

  10. Patterns in the Physical, Chemical, and Biological Composition of Icelandic Lakes and the Dominant Factors Controlling Variability Across Watersheds

    NASA Astrophysics Data System (ADS)

    Greco, A.; Strock, K.; Edwards, B. R.

    2017-12-01

    Fourteen lakes were sampled in the southern and western area of Iceland in June of 2017. The southern systems, within the Eastern Volcanic Zone, have minimal soil development and active volcanoes that produce ash input to lakes. Lakes in the Western Volcanic Zone were more diverse and located in older bedrock with more extensively weathered soil. Physical variables (temperature, oxygen concentration, and water clarity), chemical variables (pH, conductivity, dissolved and total nitrogen and phosphorus concentrations, and dissolved organic carbon concentration), and biological variables (algal biomass) were compared across the lakes sampled in these geographic regions. There was a large range in lake characteristics, including five to eighteen times higher algal biomass in the southern systems that experience active ash input to lakes. The lakes located in the Eastern Volcanic Zone also had higher conductivity and lower pH, especially in systems receiving substantial geothermal input. These results were analyzed in the context of more extensive lake sampling efforts across Iceland (46 lakes) to determine defining characteristics of lakes in each region and to identify variables that drive heterogeneous patterns in physical, chemical, and biological lake features within each region. Coastal systems, characterized by high conductivity, and glacially-fed systems, characterized by high iron concentrations, were unique from lakes in all other regions. Clustering and principal component analyses revealed that lake type (plateau, valley, spring-fed, and direct-runoff) was not the primary factor explaining variability in lake chemistry outside of the coastal and glacial lake types. Instead, lakes differentiated along a gradient of iron concentration and total nitrogen concentration. The physical and chemical properties of subarctic lakes are especially susceptible to both natural and human-induced environmental impacts. However, relatively little is known about the contemporary physical and chemical properties of Icelandic lakes, despite their abundance and importance as freshwater resources. Here we report an analysis of the physical, chemical, and biological characteristics of a set of subarctic lakes and use spatial information to infer controls on lake heterogeneity within and across regions.

  11. Stochastic Modeling of Radioactive Material Releases

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Andrus, Jason; Pope, Chad

    2015-09-01

    Nonreactor nuclear facilities operated under the approval authority of the U.S. Department of Energy use unmitigated hazard evaluations to determine if potential radiological doses associated with design basis events challenge or exceed dose evaluation guidelines. Unmitigated design basis events that sufficiently challenge dose evaluation guidelines or exceed the guidelines for members of the public or workers merit selection of safety structures, systems, or components or other controls to prevent or mitigate the hazard. Idaho State University, in collaboration with Idaho National Laboratory, has developed a portable and simple-to-use software application called SODA (Stochastic Objective Decision-Aide) that stochastically calculates the radiation dose associated with hypothetical radiological material release scenarios. Rather than producing a point estimate of the dose, SODA produces a dose distribution result to allow a deeper understanding of the dose potential. SODA allows users to select the distribution type and parameter values for all of the input variables used to perform the dose calculation. SODA then randomly samples each distribution input variable and calculates the overall resulting dose distribution. In cases where an input variable distribution is unknown, a traditional single point value can be used. SODA was developed using the MATLAB coding framework. The software application has a graphical user input. SODA can be installed on both Windows and Mac computers and does not require MATLAB to function. SODA provides improved risk understanding leading to better informed decision making associated with establishing nuclear facility material-at-risk limits and safety structure, system, or component selection. It is important to note that SODA does not replace or compete with codes such as MACCS or RSAC; rather, it is viewed as an easy-to-use supplemental tool to help improve risk understanding and support better informed decisions. The work was funded through a grant from the DOE Nuclear Safety Research and Development Program.
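
    In outline, this sampling loop looks like the hedged sketch below: each uncertain input is drawn from its assigned distribution and propagated to a dose distribution rather than a point estimate. The dose expression and parameter values here are hypothetical, not SODA's.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    N = 100_000
    source_term = rng.lognormal(mean=np.log(1e3), sigma=0.5, size=N)  # Bq released
    dispersion = rng.triangular(1e-6, 5e-6, 2e-5, size=N)             # s/m^3
    dose_factor = 1.2e-8      # Sv/Bq; a fixed point value, as SODA permits

    dose = source_term * dispersion * dose_factor                     # Sv
    print(f"median {np.median(dose):.2e} Sv, "
          f"95th percentile {np.percentile(dose, 95):.2e} Sv")
    ```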

  12. Model parameter uncertainty analysis for an annual field-scale P loss model

    NASA Astrophysics Data System (ADS)

    Bolster, Carl H.; Vadas, Peter A.; Boykin, Debbie

    2016-08-01

    Phosphorus (P) fate and transport models are important tools for developing and evaluating conservation practices aimed at reducing P losses from agricultural fields. Because all models are simplifications of complex systems, there will exist an inherent amount of uncertainty associated with their predictions. It is therefore important that efforts be directed at identifying, quantifying, and communicating the different sources of model uncertainties. In this study, we conducted an uncertainty analysis with the Annual P Loss Estimator (APLE) model. Our analysis included calculating parameter uncertainties and confidence and prediction intervals for five internal regression equations in APLE. We also estimated uncertainties of the model input variables based on values reported in the literature. We then predicted P loss for a suite of fields under different management and climatic conditions while accounting for uncertainties in the model parameters and inputs and compared the relative contributions of these two sources of uncertainty to the overall uncertainty associated with predictions of P loss. Both the overall magnitude of the prediction uncertainties and the relative contributions of the two sources of uncertainty varied depending on management practices and field characteristics. This was due to differences in the number of model input variables and the uncertainties in the regression equations associated with each P loss pathway. Inspection of the uncertainties in the five regression equations brought attention to a previously unrecognized limitation with the equation used to partition surface-applied fertilizer P between leaching and runoff losses. As a result, an alternate equation was identified that provided similar predictions with much less uncertainty. Our results demonstrate how a thorough uncertainty and model residual analysis can be used to identify limitations with a model. Such insight can then be used to guide future data collection and model development and evaluation efforts.

  13. Partial Granger causality--eliminating exogenous inputs and latent variables.

    PubMed

    Guo, Shuixia; Seth, Anil K; Kendrick, Keith M; Zhou, Cong; Feng, Jianfeng

    2008-07-15

    Attempts to identify causal interactions in multivariable biological time series (e.g., gene data, protein data, physiological data) can be undermined by the confounding influence of environmental (exogenous) inputs. Compounding this problem, we are commonly only able to record a subset of all related variables in a system. These recorded variables are likely to be influenced by unrecorded (latent) variables. To address this problem, we introduce a novel variant of a widely used statistical measure of causality--Granger causality--that is inspired by the definition of partial correlation. Our 'partial Granger causality' measure is extensively tested with toy models, both linear and nonlinear, and is applied to experimental data: in vivo multielectrode array (MEA) local field potentials (LFPs) recorded from the inferotemporal cortex of sheep. Our results demonstrate that partial Granger causality can reveal the underlying interactions among elements in a network in the presence of exogenous inputs and latent variables in many cases where the existing conditional Granger causality fails.

  14. Variability, drivers, and effects of atmospheric nitrogen inputs across an urban area: Emerging patterns among human activities, the atmosphere, and soils.

    PubMed

    Decina, Stephen M; Templer, Pamela H; Hutyra, Lucy R; Gately, Conor K; Rao, Preeti

    2017-12-31

    Atmospheric deposition of nitrogen (N) is a major input of N to the biosphere and is elevated beyond preindustrial levels throughout many ecosystems. Deposition monitoring networks in the United States generally avoid urban areas in order to capture regional patterns of N deposition, and studies measuring N deposition in cities usually include only one or two urban sites in an urban-rural comparison or as an anchor along an urban-to-rural gradient. Describing patterns and drivers of atmospheric N inputs is crucial for understanding the effects of N deposition; however, little is known about the variability and drivers of atmospheric N inputs or their effects on soil biogeochemistry within urban ecosystems. We measured rates of canopy throughfall N as a measure of atmospheric N inputs, as well as soil net N mineralization and nitrification, soil solution N, and soil respiration at 15 sites across the greater Boston, Massachusetts area. Rates of throughfall N are 8.70 ± 0.68 kg N ha⁻¹ yr⁻¹, vary 3.5-fold across sites, and are positively correlated with rates of local vehicle N emissions. Ammonium (NH₄⁺) composes 69.9 ± 2.2% of inorganic throughfall N inputs and is highest in late spring, suggesting a contribution from local fertilizer inputs. Soil solution NO₃⁻ is positively correlated with throughfall NO₃⁻ inputs. In contrast, soil solution NH₄⁺, net N mineralization, nitrification, and soil respiration are not correlated with rates of throughfall N inputs. Rather, these processes are correlated with soil properties such as soil organic matter. Our results demonstrate high variability in rates of urban throughfall N inputs, correlation of throughfall N inputs with local vehicle N emissions, and a decoupling of urban soil biogeochemistry and throughfall N inputs. Copyright © 2017 Elsevier B.V. All rights reserved.

  15. Composition of riparian litter input regulates organic matter decomposition: Implications for headwater stream functioning in a managed forest landscape.

    PubMed

    Lidman, Johan; Jonsson, Micael; Burrows, Ryan M; Bundschuh, Mirco; Sponseller, Ryan A

    2017-02-01

    Although the importance of stream condition for leaf litter decomposition has been extensively studied, little is known about how processing rates change in response to altered riparian vegetation community composition. We investigated patterns of plant litter input and decomposition across 20 boreal headwater streams that varied in proportions of riparian deciduous and coniferous trees. We measured a suite of in-stream physical and chemical characteristics, as well as the amount and type of litter inputs from riparian vegetation, and related these to decomposition rates of native (alder, birch, and spruce) and introduced (lodgepole pine) litter species incubated in coarse- and fine-mesh bags. Total litter inputs ranged more than fivefold among sites and increased with the proportion of deciduous vegetation in the riparian zone. In line with differences in initial litter quality, mean decomposition rate was highest for alder, followed by birch, spruce, and lodgepole pine (12, 55, and 68% lower rates, respectively). Further, these rates were greater in coarse-mesh bags that allow colonization by macroinvertebrates. Variance in decomposition rate among sites for different species was best explained by different sets of environmental conditions, but litter-input composition (i.e., quality) was overall highly important. On average, native litter decomposed faster in sites with higher-quality litter input and (with the exception of spruce) higher concentrations of dissolved nutrients and open canopies. By contrast, lodgepole pine decomposed more rapidly in sites receiving lower-quality litter inputs. Birch litter decomposition rate in coarse-mesh bags was best predicted by the same environmental variables as in fine-mesh bags, with additional positive influences of macroinvertebrate species richness. Hence, to facilitate energy turnover in boreal headwaters, forest management with focus on conifer production should aim at increasing the presence of native deciduous trees along streams, as they promote conditions that favor higher decomposition rates of terrestrial plant litter.

  16. Hybrid robust model based on an improved functional link neural network integrating with partial least square (IFLNN-PLS) and its application to predicting key process variables.

    PubMed

    He, Yan-Lin; Xu, Yuan; Geng, Zhi-Qiang; Zhu, Qun-Xiong

    2016-03-01

    In this paper, a hybrid robust model based on an improved functional link neural network integrating with partial least square (IFLNN-PLS) is proposed. Firstly, an improved functional link neural network with small norm of expanded weights and high input-output correlation (SNEWHIOC-FLNN) was proposed for enhancing the generalization performance of FLNN. Unlike the traditional FLNN, the expanded variables of the original inputs are not directly used as the inputs in the proposed SNEWHIOC-FLNN model. The original inputs are attached to some small norm of expanded weights. As a result, the correlation coefficient between some of the expanded variables and the outputs is enhanced. The larger the correlation coefficient is, the more relevant the expanded variables tend to be. In the end, the expanded variables with larger correlation coefficients are selected as the inputs to improve the performance of the traditional FLNN. In order to test the proposed SNEWHIOC-FLNN model, three UCI (University of California, Irvine) regression datasets named Housing, Concrete Compressive Strength (CCS), and Yacht Hydro Dynamics (YHD) are selected. Then a hybrid model based on the improved FLNN integrating with partial least square (IFLNN-PLS) was built. In the IFLNN-PLS model, the connection weights are calculated using the partial least square method rather than the error back-propagation algorithm. Lastly, IFLNN-PLS was developed as an intelligent measurement model for accurately predicting the key variables in the Purified Terephthalic Acid (PTA) process and the High Density Polyethylene (HDPE) process. Simulation results illustrated that the IFLNN-PLS could significantly improve the prediction performance. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.
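
    A toy version of the functional-link idea described above (our sketch; the expansions, threshold, and data are illustrative, and ordinary least squares stands in for the PLS weight calculation of IFLNN-PLS):

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    X = rng.uniform(-1, 1, size=(500, 3))
    y = np.sin(np.pi * X[:, 0]) + 0.3 * X[:, 1] + rng.normal(0, 0.05, 500)

    # functional-link expansion of the original inputs
    expanded = np.column_stack([X, np.sin(np.pi * X), np.cos(np.pi * X)])
    corr = np.array([abs(np.corrcoef(expanded[:, j], y)[0, 1])
                     for j in range(expanded.shape[1])])
    selected = expanded[:, corr > 0.2]   # keep high input-output correlation only
    weights, *_ = np.linalg.lstsq(selected, y, rcond=None)
    print(selected.shape[1], "expanded variables kept")
    ```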

  17. Dynamic modal estimation using instrumental variables

    NASA Technical Reports Server (NTRS)

    Salzwedel, H.

    1980-01-01

    A method to determine the modes of dynamical systems is described. The inputs and outputs of a system are Fourier transformed and averaged to reduce the error level. An instrumental variable method that estimates modal parameters from multiple correlations between responses of single input, multiple output systems is applied to estimate aircraft, spacecraft, and off-shore platform modal parameters.

  18. Urban vs. Rural CLIL: An Analysis of Input-Related Variables, Motivation and Language Attainment

    ERIC Educational Resources Information Center

    Alejo, Rafael; Piquer-Píriz, Ana

    2016-01-01

    The present article carries out an in-depth analysis of the differences in motivation, input-related variables and linguistic attainment of the students at two content and language integrated learning (CLIL) schools operating within the same institutional and educational context, the Spanish region of Extremadura, and differing only in terms of…

  19. Variable Input and the Acquisition of Plural Morphology

    ERIC Educational Resources Information Center

    Miller, Karen L.; Schmitt, Cristina

    2012-01-01

    The present article examines the effect of variable input on the acquisition of plural morphology in two varieties of Spanish: Chilean Spanish, where the plural marker is sometimes omitted due to a phonological process of syllable final /s/ lenition, and Mexican Spanish (of Mexico City), with no such lenition process. The goal of the study is to…

  20. Precision digital pulse phase generator

    DOEpatents

    McEwan, T.E.

    1996-10-08

    A timing generator comprises a crystal oscillator connected to provide an output reference pulse. A resistor-capacitor combination is connected to provide a variable-delay output pulse from an input connected to the crystal oscillator. A phase monitor is connected to provide duty-cycle representations of the reference and variable-delay output pulse phase. An operational amplifier drives a control voltage to the resistor-capacitor combination according to currents integrated from the phase monitor and injected into summing junctions. A digital-to-analog converter injects a control current into the summing junctions according to an input digital control code. A servo equilibrium results that provides a phase delay of the variable-delay output pulse to the output reference pulse that linearly depends on the input digital control code. 2 figs.

  1. Precision digital pulse phase generator

    DOEpatents

    McEwan, Thomas E.

    1996-01-01

    A timing generator comprises a crystal oscillator connected to provide an output reference pulse. A resistor-capacitor combination is connected to provide a variable-delay output pulse from an input connected to the crystal oscillator. A phase monitor is connected to provide duty-cycle representations of the reference and variable-delay output pulse phase. An operational amplifier drives a control voltage to the resistor-capacitor combination according to currents integrated from the phase monitor and injected into summing junctions. A digital-to-analog converter injects a control current into the summing junctions according to an input digital control code. A servo equilibrium results that provides a phase delay of the variable-delay output pulse to the output reference pulse that linearly depends on the input digital control code.

  2. The magnitude of variability produced by methods used to estimate annual stormwater contaminant loads for highly urbanised catchments.

    PubMed

    Beck, H J; Birch, G F

    2013-06-01

    Stormwater contaminant loading estimates using event mean concentration (EMC), rainfall/runoff relationship calculations and computer modelling (Model of Urban Stormwater Infrastructure Conceptualisation--MUSIC) demonstrated high variability in common methods of water quality assessment. Predictions of metal, nutrient and total suspended solid loadings for three highly urbanised catchments in Sydney estuary, Australia, varied greatly within and amongst methods tested. EMC and rainfall/runoff relationship calculations produced similar estimates (within 1 SD) in a statistically significant number of trials; however, considerable variability within estimates (∼50% and ∼25% relative standard deviation, respectively) questions the reliability of these methods. Likewise, upper and lower default inputs in a commonly used loading model (MUSIC) produced an extensive range of loading estimates (3.8-8.3 times above and 2.6-4.1 times below typical default inputs, respectively). Default and calibrated MUSIC simulations produced loading estimates that agreed with EMC and rainfall/runoff calculations in some trials (4-10 from 18); however, they were not frequent enough to statistically infer that these methods produced the same results. Great variance within and amongst mean annual loads estimated by common methods of water quality assessment has important ramifications for water quality managers requiring accurate estimates of the quantities and nature of contaminants requiring treatment.
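
    For orientation, the EMC method referred to above reduces, in its simplest form, to annual load = EMC × annual runoff volume. A worked toy example (all values invented):

    ```python
    emc_mg_per_L = 0.25      # event mean concentration, e.g. total Cu (mg/L)
    rainfall_m = 1.2         # annual rainfall depth (m)
    area_m2 = 4.5e6          # catchment area (m^2)
    runoff_coeff = 0.7       # typical of a highly urbanised catchment

    runoff_L = rainfall_m * area_m2 * runoff_coeff * 1e3   # m^3 -> L
    load_kg = emc_mg_per_L * runoff_L * 1e-9               # mg -> kg
    print(f"annual load = {load_kg:.0f} kg")               # ~945 kg
    ```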

  3. Job satisfaction among mental healthcare professionals: The respective contributions of professional characteristics, team attributes, team processes, and team emergent states

    PubMed Central

    Fleury, Marie-Josée; Grenier, Guy; Bamvita, Jean-Marie

    2017-01-01

    Objectives: The aim of this study was to determine the respective contribution of professional characteristics, team attributes, team processes, and team emergent states on the job satisfaction of 315 mental health professionals from Quebec (Canada). Methods: Job satisfaction was measured with the Job Satisfaction Survey. Independent variables were organized into four categories according to a conceptual framework inspired from the Input-Mediator-Outcomes-Input Model. The contribution of each category of variables was assessed using hierarchical regression analysis. Results: Variations in job satisfaction were mostly explained by team processes, with minimal contribution from the other three categories. Among the six variables significantly associated with job satisfaction in the final model, four were team processes: stronger team support, less team conflict, deeper involvement in the decision-making process, and more team collaboration. Job satisfaction was also associated with nursing and, marginally, male gender (professional characteristics) as well as with a stronger affective commitment toward the team (team emergent states). Discussion and Conclusion: Results confirm the importance for health managers of offering adequate support to mental health professionals, and creating an environment favorable to collaboration and decision-sharing, and likely to reduce conflicts between team members. PMID:29276591

  4. Processing Pipeline of Sugarcane Spectral Response to Characterize the Fallen Plants Phenomenon

    NASA Astrophysics Data System (ADS)

    Solano, Agustín; Kemerer, Alejandra; Hadad, Alejandro

    2016-04-01

    Modern agronomic systems allow variable management of inputs to improve the efficiency of the agronomic industry and optimize the logistics of the harvesting process. For sugarcane cultivation, the use of remote sensing tools and computational methods was proposed to identify useful areas within the cultivated lands so that the crop could be managed variably. When fallen stalks are present at harvest, extraneous material (vegetable or mineral) is collected along with them. This material is not millable, and when it enters the sugar mill it causes substantial losses of efficiency in the sugar extraction process and affects sugar quality. Considering this issue, the spectral response of sugarcane plants in aerial multispectral images was studied. The spectral response was analyzed in different bands of the electromagnetic spectrum. The aerial images were then segmented to obtain homogeneous regions useful for producers making decisions about the use of inputs and resources according to the variability of the system (presence of fallen cane and standing cane). The segmentation results were satisfactory: regions with fallen cane and regions with standing cane could be identified with high precision.

  5. Regenerative braking device with rotationally mounted energy storage means

    DOEpatents

    Hoppie, Lyle O.

    1982-03-16

    A regenerative braking device for an automotive vehicle includes an energy storage assembly (12) having a plurality of rubber rollers (26, 28) mounted for rotation between an input shaft (30) and an output shaft (32), clutches (50, 56) and brakes (52, 58) associated with each shaft, and a continuously variable transmission (22) connectable to a vehicle drivetrain and to the input and output shafts by the respective clutches. In a second embodiment the clutches and brakes are dispensed with and the variable ratio transmission is connected directly across the input and output shafts. In both embodiments the rubber rollers are torsionally stressed to accumulate energy from the vehicle when the input shaft rotates faster or relative to the output shaft and are torsionally relaxed to deliver energy to the vehicle when the output shaft rotates faster or relative to the input shaft.

  6. Analysis of extreme values of the economic efficiency indicators of transport infrastructure projects

    NASA Astrophysics Data System (ADS)

    Korytárová, J.; Vaňková, L.

    2017-10-01

    This paper builds on the authors' previous research into the evaluation of the economic efficiency of transport infrastructure projects using the economic efficiency indicators NPV (net present value), IRR (internal rate of return) and BCR (benefit-cost ratio). The values of these indicators, and the subsequent outputs of the sensitivity analysis, show extremely favourable values in some cases. The authors analysed these indicators down to the level of the input variables and examined which inputs account for a larger share of these extreme values. The net cash flow (NCF) used to calculate the above-mentioned ratios is created by benefits that arise as the difference between the zero and investment options of the project (savings in travel and operating costs, savings in travel time costs, reduction in accident costs and savings in exogenous costs) as well as total agency costs. Savings in travel time costs, which contribute more than 70% of the overall utility of projects, appear to be the most important benefit over the long-term horizon, which is why this benefit is emphasized. The article shows how the particular basic input variables contribute to the overall robustness of the economic efficiency of these projects.
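
    For readers unfamiliar with the three ratios, a short worked example (invented cash flows; IRR found by bisection on the NPV function):

    ```python
    def npv(rate, flows):
        """Net present value of cash flows indexed by year."""
        return sum(cf / (1 + rate) ** t for t, cf in enumerate(flows))

    def irr(flows, lo=-0.5, hi=10.0, tol=1e-8):
        """Bisection for the rate at which NPV crosses zero."""
        while hi - lo > tol:
            mid = (lo + hi) / 2
            lo, hi = (mid, hi) if npv(mid, flows) > 0 else (lo, mid)
        return (lo + hi) / 2

    flows = [-100.0] + [12.0] * 30        # investment, then 30 years of benefits
    rate = 0.05
    print(f"NPV = {npv(rate, flows):.1f}")
    print(f"IRR = {irr(flows):.2%}")
    print(f"BCR = {npv(rate, [0.0] + flows[1:]) / 100:.2f}")  # PV(benefits)/PV(costs)
    ```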

  7. GEWEX SRB Shortwave Release 4

    NASA Astrophysics Data System (ADS)

    Cox, S. J.; Stackhouse, P. W., Jr.; Mikovitz, J. C.; Zhang, T.

    2017-12-01

    The NASA/GEWEX Surface Radiation Budget (SRB) project produces shortwave and longwave surface and top-of-atmosphere radiative fluxes for the 1983 to near-present time period. Spatial resolution is 1 degree. The new Release 4 uses the newly processed ISCCP HXS product as its primary input for cloud and radiance data. The ninefold increase in pixel number compared to the previous ISCCP DX allows finer gradations in cloud fraction in each grid box. It will also allow higher spatial resolutions (0.5 degree) in future releases. In addition to the input data improvements, several important algorithm improvements have been made since Release 3. These include recalculated atmospheric transmissivities and reflectivities yielding a less transmissive atmosphere. The calculations also include variable aerosol composition, allowing for the use of a detailed aerosol history from the Max Planck Institut Aerosol Climatology (MAC). Ocean albedo and snow/ice albedo are also improved from Release 3. Total solar irradiance is now variable, averaging 1361 W m⁻². Water vapor is taken from ISCCP's nnHIRS product. Results from GSW Release 4 are presented and analyzed. Early comparisons to surface measurements show improved agreement.

  8. The SYSGEN user package

    NASA Technical Reports Server (NTRS)

    Carlson, C. R.

    1981-01-01

    The user documentation of the SYSGEN model and its links with other simulations is described. SYSGEN is a production costing and reliability model of electric utility systems. Hydroelectric, storage, and time-dependent generating units are modeled in addition to conventional generating plants. Input variables, modeling options, output variables, and report formats are explained. SYSGEN can also be run interactively by using a program called FEPS (Front End Program for SYSGEN). A format for SYSGEN input variables which is designed for use with FEPS is presented.

  9. The effect of information technology on hospital performance.

    PubMed

    Williams, Cynthia; Asi, Yara; Raffenaud, Amanda; Bagwell, Matt; Zeini, Ibrahim

    2016-12-01

    While healthcare entities have integrated various forms of health information technology (HIT) into their systems due to claims of increased quality and decreased costs, as well as various incentives, there is little available information about which applications of HIT are actually the most beneficial and efficient. In this study, we aim to assist administrators in understanding the characteristics of top-performing hospitals. We utilized data from the Health Information and Management Systems Society and the Center for Medicare and Medicaid to assess 1039 hospitals. Inputs considered were full-time equivalents, hospital size, and technology inputs. Technology inputs included personal health records (PHR), electronic medical records (EMRs), computerized physician order entry systems (CPOEs), and electronic access to diagnostic results. Output variables were measures of quality, hospital readmission rate, and mortality rate. The analysis was conducted as a two-stage methodology: Data Envelopment Analysis (DEA) followed by Automatic Interaction Detector (AID) decision tree regression (DTreg). Overall, we found that electronic access to diagnostic results was the most influential technological characteristic; however, organizational characteristics were more important than technological inputs. Hospitals with the highest levels of quality indicated no excess in the use of technology inputs, averaging one use of a technology component. This study indicates that prudent consideration of organizational characteristics and technology is needed before investing in innovative programs.

  10. Tritium Records to Trace Stratospheric Moisture Inputs in Antarctica

    NASA Astrophysics Data System (ADS)

    Fourré, E.; Landais, A.; Cauquoin, A.; Jean-Baptiste, P.; Lipenkov, V.; Petit, J.-R.

    2018-03-01

    Better assessment of the dynamics of stratosphere-troposphere exchange is key to improving our understanding of climate dynamics on the East Antarctic Plateau, a region where stratospheric inputs are expected to be important. Although tritium (3H or T), a nuclide naturally produced mainly in the stratosphere and rapidly entering the water cycle as HTO, seems a first-rate tracer for studying these processes, tritium data are very sparse in this region. We present the first high-resolution measurements of tritium concentration over the last 50 years in three snow pits drilled at Vostok station. Natural variability of the tritium records reveals two prominent frequencies, one at about 10 years (related to the solar Schwabe cycle) and one at a shorter periodicity: despite dating uncertainty at this short scale, a good correlation is observed between 3H and Na+ and an anticorrelation between 3H and δ18O measured in an individual pit. The outputs from the LMDZ Atmospheric General Circulation Model, which includes stable water isotopes and tritium, show the same 3H-δ18O anticorrelation and allow further investigation of the associated mechanism. At the interannual scale, the modeled 3H variability matches the Southern Annular Mode index well. At the seasonal scale, we show that modeled stratospheric tritium inputs to the troposphere are favored in cold, dry winter conditions.

  11. 12 CFR Appendix A to Subpart B of... - Risk-Based Capital Test Methodology and Specifications

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    [Garbled table-of-contents extract; recoverable headings: Mortgage Amortization Schedule Inputs; Loan Group Inputs for Mortgage Amortization; Multifamily Default and Prepayment Explanatory Variables; Multifamily Default and Prepayment Inputs; Loan Group Inputs for Gross Loss Severity; Interest Rates Outputs; Mortgage Amortization.]

  12. 12 CFR Appendix A to Subpart B of... - Risk-Based Capital Test Methodology and Specifications

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    [Garbled table-of-contents extract; recoverable headings: Mortgage Amortization Schedule Inputs; Loan Group Inputs for Mortgage Amortization; Multifamily Default and Prepayment Explanatory Variables; Multifamily Default and Prepayment Inputs; Loan Group Inputs for Gross Loss Severity; Interest Rates Outputs; Mortgage Amortization.]

  13. 12 CFR Appendix A to Subpart B of... - Risk-Based Capital Test Methodology and Specifications

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    [Garbled table-of-contents extract; recoverable headings: Mortgage Amortization Schedule Inputs; Loan Group Inputs for Mortgage Amortization; Multifamily Default and Prepayment Explanatory Variables; Multifamily Default and Prepayment Inputs; Loan Group Inputs for Gross Loss Severity; Interest Rates Outputs; Mortgage Amortization.]

  14. 12 CFR Appendix A to Subpart B of... - Risk-Based Capital Test Methodology and Specifications

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    [Garbled table-of-contents extract; recoverable headings: Mortgage Amortization Schedule Inputs; Loan Group Inputs for Mortgage Amortization; Multifamily Default and Prepayment Explanatory Variables; Multifamily Default and Prepayment Inputs; Loan Group Inputs for Gross Loss Severity; Interest Rates Outputs; Mortgage Amortization.]

  15. Application of Monte Carlo Methods to Perform Uncertainty and Sensitivity Analysis on Inverse Water-Rock Reactions with NETPATH

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McGraw, David; Hershey, Ronald L.

    Methods were developed to quantify uncertainty and sensitivity for NETPATH inverse water-rock reaction models and to calculate dissolved inorganic carbon, carbon-14 groundwater travel times. The NETPATH models calculate upgradient groundwater mixing fractions that produce the downgradient target water chemistry along with amounts of mineral phases that are either precipitated or dissolved. Carbon-14 groundwater travel times are calculated based on the upgradient source-water fractions, carbonate mineral phase changes, and isotopic fractionation. Custom scripts and statistical code were developed for this study to facilitate modifying input parameters, running the NETPATH simulations, extracting relevant output, postprocessing the results, and producing graphs and summaries. The scripts read user-specified values for each constituent's coefficient of variation, distribution, sensitivity parameter, maximum dissolution or precipitation amounts, and number of Monte Carlo simulations. Monte Carlo methods for analysis of parametric uncertainty assign a distribution to each uncertain variable, sample from those distributions, and evaluate the ensemble output. The uncertainty in input affected the variability of outputs, namely source-water mixing, phase dissolution and precipitation amounts, and carbon-14 travel time. Although NETPATH may provide models that satisfy the constraints, it is up to the geochemist to determine whether the results are geochemically reasonable. Two example water-rock reaction models from previous geochemical reports were considered in this study. Sensitivity analysis was also conducted to evaluate the change in output caused by a small change in input, one constituent at a time. Results were standardized to allow for sensitivity comparisons across all inputs, which results in a representative value for each scenario. The approach yielded insight into the uncertainty in water-rock reactions and travel times. For example, there was little variation in source-water fraction between the deterministic and Monte Carlo approaches, and therefore, little variation in travel times between approaches. Sensitivity analysis proved very useful for identifying the most important input constraints (dissolved-ion concentrations), which can reveal the variables that have the most influence on source-water fractions and carbon-14 travel times. Once these variables are determined, more focused effort can be applied to determining the proper distribution for each constraint. Second, Monte Carlo results for water-rock reaction modeling showed discrete and nonunique results. The NETPATH models provide the solutions that satisfy the constraints of upgradient and downgradient water chemistry. There can exist multiple, discrete solutions for any scenario, and these discrete solutions cause grouping of results. As a result, the variability in output may not easily be represented by a single distribution or a mean and variance, and care should be taken in the interpretation and reporting of results.
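
    The one-at-a-time sensitivity step described above can be sketched as follows; run_model is a hypothetical stand-in for a NETPATH inverse run, and the coefficients are invented.

    ```python
    import numpy as np

    def run_model(x):
        """Hypothetical response: carbon-14 travel time (years)."""
        return 1000 + 50 * x[0] - 20 * x[1] + 5 * x[2]

    x0 = np.array([2.0, 1.5, 0.8])   # baseline dissolved-ion concentrations
    base = run_model(x0)
    for i in range(x0.size):
        dx = 0.01 * x0[i]            # perturb one constraint by 1%
        xp = x0.copy()
        xp[i] += dx
        # standardized (dimensionless) sensitivity, comparable across inputs
        sens = (run_model(xp) - base) / base / (dx / x0[i])
        print(f"input {i}: standardized sensitivity {sens:+.3f}")
    ```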

  16. Tracing anthropogenic inputs to production in the Seto Inland Sea, Japan--a stable isotope approach.

    PubMed

    Miller, Todd W; Omori, Koji; Hamaoka, Hideki; Shibata, Jun-ya; Hidejiro, Onishi

    2010-10-01

    The Seto Inland Sea (SIS) receives waste runoff from ∼24% of Japan's total population, yet it is also important in regional fisheries, recreation and commerce. During August 2006 we measured carbon and nitrogen stable isotopes of particulate organic matter (POM) and zooplankton across urban population gradients of the SIS. Results showed a consistent trend of increasing δ(15)N in POM and zooplankton from the western to eastern subsystems of the SIS, corresponding to increasing population load. Principal components analysis of environmental variables indicated high positive loadings of δ(15)N and δ(13)C with high chlorophyll-a and surface water temperatures, and negative loadings of low salinities related to inputs from large rivers and high urban development in the eastern SIS. Anthropogenic nitrogen was therefore readily integrated into the SIS food web from primary production to copepods, which are a critical food source for many commercially important fishes. Copyright © 2010 Elsevier Ltd. All rights reserved.

  17. The Effect of Visual Variability on the Learning of Academic Concepts.

    PubMed

    Bourgoyne, Ashley; Alt, Mary

    2017-06-10

    The purpose of this study was to identify effects of variability of visual input on development of conceptual representations of academic concepts for college-age students with normal language (NL) and those with language-learning disabilities (LLD). Students with NL (n = 11) and LLD (n = 11) participated in a computer-based training for introductory biology course concepts. Participants were trained on half the concepts under a low-variability condition and half under a high-variability condition. Participants completed a posttest in which they were asked to identify and rate the accuracy of novel and trained visual representations of the concepts. We performed separate repeated measures analyses of variance to examine the accuracy of identification and ratings. Participants were equally accurate on trained and novel items in the high-variability condition, but were less accurate on novel items only in the low-variability condition. The LLD group showed the same pattern as the NL group; they were just less accurate. Results indicated that high-variability visual input may facilitate the acquisition of academic concepts in college students with NL and LLD. High-variability visual input may be especially beneficial for generalization to novel representations of concepts. Implicit learning methods may be harnessed by college courses to provide students with basic conceptual knowledge when they are entering courses or beginning new units.

  18. Revealing unobserved factors underlying cortical activity with a rectified latent variable model applied to neural population recordings.

    PubMed

    Whiteway, Matthew R; Butts, Daniel A

    2017-03-01

    The activity of sensory cortical neurons is not only driven by external stimuli but also shaped by other sources of input to the cortex. Unlike external stimuli, these other sources of input are challenging to experimentally control, or even observe, and as a result contribute to variability of neural responses to sensory stimuli. However, such sources of input are likely not "noise" and may play an integral role in sensory cortex function. Here we introduce the rectified latent variable model (RLVM) in order to identify these sources of input using simultaneously recorded cortical neuron populations. The RLVM is novel in that it employs nonnegative (rectified) latent variables and is much less restrictive in the mathematical constraints on solutions because of the use of an autoencoder neural network to initialize model parameters. We show that the RLVM outperforms principal component analysis, factor analysis, and independent component analysis, using simulated data across a range of conditions. We then apply this model to two-photon imaging of hundreds of simultaneously recorded neurons in mouse primary somatosensory cortex during a tactile discrimination task. Across many experiments, the RLVM identifies latent variables related to both the tactile stimulation as well as nonstimulus aspects of the behavioral task, with a majority of activity explained by the latter. These results suggest that properly identifying such latent variables is necessary for a full understanding of sensory cortical function and demonstrate novel methods for leveraging large population recordings to this end. NEW & NOTEWORTHY The rapid development of neural recording technologies presents new opportunities for understanding patterns of activity across neural populations. Here we show how a latent variable model with appropriate nonlinear form can be used to identify sources of input to a neural population and infer their time courses. Furthermore, we demonstrate how these sources are related to behavioral contexts outside of direct experimental control. Copyright © 2017 the American Physiological Society.
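
    In miniature, the RLVM's key ingredients are nonnegative latent time courses and an autoencoder-style fit. The sketch below trains a one-hidden-layer ReLU autoencoder on synthetic population activity by plain gradient descent; the dimensions, learning rate, and loss are our assumptions, not the published model.

    ```python
    import numpy as np

    rng = np.random.default_rng(6)
    T, N, K = 500, 40, 3                       # time points, neurons, latents
    Z_true = np.maximum(rng.normal(size=(T, K)), 0)
    Y = Z_true @ rng.uniform(0, 1, size=(K, N)) + 0.05 * rng.normal(size=(T, N))

    W_enc = rng.normal(0, 0.1, size=(N, K))    # encoder weights
    W_dec = rng.normal(0, 0.1, size=(K, N))    # decoder weights
    lr = 1e-3
    for _ in range(2000):
        Z = np.maximum(Y @ W_enc, 0)           # rectified latent variables
        err = Z @ W_dec - Y                    # reconstruction error
        grad_dec = (Z.T @ err) / T
        grad_enc = (Y.T @ ((err @ W_dec.T) * (Z > 0))) / T
        W_dec -= lr * grad_dec
        W_enc -= lr * grad_enc
    mse = float(np.mean((np.maximum(Y @ W_enc, 0) @ W_dec - Y) ** 2))
    print("reconstruction MSE:", mse)
    ```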

  19. Complex, Dynamic Combination of Physical, Chemical and Nutritional Variables Controls Spatio-Temporal Variation of Sandy Beach Community Structure

    PubMed Central

    Ortega Cisneros, Kelly; Smit, Albertus J.; Laudien, Jürgen; Schoeman, David S.

    2011-01-01

    Sandy beach ecological theory states that physical features of the beach control macrobenthic community structure on all but the most dissipative beaches. However, few studies have simultaneously evaluated the relative importance of physical, chemical and biological factors as potential explanatory variables for meso-scale spatio-temporal patterns of intertidal community structure in these systems. Here, we investigate macroinfaunal community structure of a micro-tidal sandy beach that is located on an oligotrophic subtropical coast and is influenced by seasonal estuarine input. We repeatedly sampled biological and environmental variables at a series of beach transects arranged at increasing distances from the estuary mouth. Sampling took place over a period of five months, corresponding with the transition between the dry and wet season. This allowed assessment of biological-physical relationships across chemical and nutritional gradients associated with a range of estuarine inputs. Physical, chemical, and biological response variables, as well as measures of community structure, showed significant spatio-temporal patterns. In general, bivariate relationships between biological and environmental variables were rare and weak. However, multivariate correlation approaches identified a variety of environmental variables (i.e., sampling session, the C∶N ratio of particulate organic matter, dissolved inorganic nutrient concentrations, various size fractions of photopigment concentrations, salinity and, to a lesser extent, beach width and sediment kurtosis) that either alone or combined provided significant explanatory power for spatio-temporal patterns of macroinfaunal community structure. Overall, these results showed that the macrobenthic community on Mtunzini Beach was not structured primarily by physical factors, but instead by a complex and dynamic blend of nutritional, chemical and physical drivers. This emphasises the need to recognise ocean-exposed sandy beaches as functional ecosystems in their own right. PMID:21858213

  1. Comparison of climate envelope models developed using expert-selected variables versus statistical selection

    USGS Publications Warehouse

    Brandt, Laura A.; Benscoter, Allison; Harvey, Rebecca G.; Speroterra, Carolina; Bucklin, David N.; Romañach, Stephanie; Watling, James I.; Mazzotti, Frank J.

    2017-01-01

    Climate envelope models are widely used to describe the potential future distribution of species under different climate change scenarios. It is broadly recognized that there are both strengths and limitations to using climate envelope models and that outcomes are sensitive to initial assumptions, inputs, and modeling methods. Selection of predictor variables, a central step in modeling, is one of the areas where different techniques can yield varying results. Selection of climate variables to use as predictors is often done using statistical approaches that develop correlations between occurrences and climate data. These approaches have received criticism in that they rely on the statistical properties of the data rather than directly incorporating biological information about species responses to temperature and precipitation. We evaluated and compared models and prediction maps for 15 threatened or endangered species in Florida based on two variable selection techniques: expert opinion and a statistical method. We compared model performance between these two approaches for contemporary predictions, and the spatial correlation, spatial overlap, and area predicted for contemporary and future climate predictions. In general, experts identified more variables as being important than the statistical method, and there was low overlap in the variable sets (<40%) between the two methods. Despite these differences in variable sets (expert versus statistical), models had high performance metrics (>0.9 for area under the curve (AUC) and >0.7 for true skill statistic (TSS)). Spatial overlap, which compares the spatial configuration between maps constructed using the different variable selection techniques, was only moderate overall (about 60%), with a great deal of variability across species. Difference in spatial overlap was even greater under future climate projections, indicating additional divergence of model outputs from different variable selection techniques. Our work is in agreement with other studies which have found that for broad-scale species distribution modeling, using statistical methods of variable selection is a useful first step, especially when there is a need to model a large number of species or when expert knowledge of the species is limited. Expert input can then be used to refine models that seem unrealistic or for species that experts believe are particularly sensitive to change. It also emphasizes the importance of using multiple models to reduce uncertainty and improve map outputs for conservation planning. Where outputs overlap or show the same direction of change, there is greater certainty in the predictions. Areas of disagreement can be used for learning by asking why the models do not agree, and may highlight areas where additional on-the-ground data collection could improve the models.
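
    As a hedged sketch of how such a comparison can be run, the snippet below fits the same classifier to hypothetical "expert" and "statistical" variable subsets and reports AUC and TSS (sensitivity + specificity - 1). The data, the subsets, and the random-forest choice are illustrative assumptions, not the study's workflow.

    ```python
    # Compare two candidate variable sets with AUC and TSS on synthetic data.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import roc_auc_score, confusion_matrix
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 10))                 # 10 candidate climate variables
    y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(size=500) > 0).astype(int)

    expert_vars, stat_vars = [0, 1, 2, 3], [0, 3]  # hypothetical selections

    for name, cols in [("expert", expert_vars), ("statistical", stat_vars)]:
        Xtr, Xte, ytr, yte = train_test_split(X[:, cols], y, random_state=1)
        clf = RandomForestClassifier(random_state=1).fit(Xtr, ytr)
        p = clf.predict_proba(Xte)[:, 1]
        tn, fp, fn, tp = confusion_matrix(yte, p > 0.5).ravel()
        tss = tp / (tp + fn) + tn / (tn + fp) - 1  # sensitivity + specificity - 1
        print(name, "AUC:", round(roc_auc_score(yte, p), 2), "TSS:", round(tss, 2))
    ```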

  2. Not All Children Agree: Acquisition of Agreement when the Input Is Variable

    ERIC Educational Resources Information Center

    Miller, Karen

    2012-01-01

    In this paper we investigate the effect of variable input on the acquisition of grammar. More specifically, we examine the acquisition of the third person singular marker -s on the auxiliary "do" in comprehension and production in two groups of children who are exposed to similar varieties of English but that differ with respect to adult…

  3. Using global sensitivity analysis of demographic models for ecological impact assessment.

    PubMed

    Aiello-Lammens, Matthew E; Akçakaya, H Resit

    2017-02-01

    Population viability analysis (PVA) is widely used to assess population-level impacts of environmental changes on species. When combined with sensitivity analysis, PVA yields insights into the effects of parameter and model structure uncertainty. This helps researchers prioritize efforts for further data collection so that model improvements are efficient and helps managers prioritize conservation and management actions. Usually, sensitivity is analyzed by varying one input parameter at a time and observing the influence that variation has over model outcomes. This approach does not account for interactions among parameters. Global sensitivity analysis (GSA) overcomes this limitation by varying several model inputs simultaneously. Then, regression techniques allow measuring the importance of input-parameter uncertainties. In many conservation applications, the goal of demographic modeling is to assess how different scenarios of impact or management cause changes in a population. This is challenging because the uncertainty of input-parameter values can be confounded with the effect of impacts and management actions. We developed a GSA method that separates model outcome uncertainty resulting from parameter uncertainty from that resulting from projected ecological impacts or simulated management actions, effectively separating the 2 main questions that sensitivity analysis asks. We applied this method to assess the effects of predicted sea-level rise on Snowy Plover (Charadrius nivosus). A relatively small number of replicate models (approximately 100) resulted in consistent measures of variable importance when not trying to separate the effects of ecological impacts from parameter uncertainty. However, many more replicate models (approximately 500) were required to separate these effects. These differences are important to consider when using demographic models to estimate ecological impacts of management actions. © 2016 Society for Conservation Biology.
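
    The regression flavor of GSA described above can be sketched as follows: all inputs are varied simultaneously, the model is run on each sample, and standardized regression coefficients (SRCs) summarize each input's importance, with a binary "impact" regressor standing in for the ecological-impact scenario. The toy growth-rate model and parameter ranges are invented for illustration and have no relation to the Snowy Plover analysis.

    ```python
    # Regression-based global sensitivity analysis on a toy demographic model.
    import numpy as np

    rng = np.random.default_rng(42)
    n = 500
    survival = rng.uniform(0.6, 0.9, n)    # uncertain input parameters
    fecundity = rng.uniform(0.5, 2.0, n)
    impact = rng.choice([0.0, 0.1], n)     # scenario switch, e.g., habitat loss

    growth = survival + 0.3 * fecundity - impact + rng.normal(0, 0.02, n)

    # Standardized regression coefficients via least squares on z-scored data
    X = np.column_stack([survival, fecundity, impact])
    Xz = (X - X.mean(0)) / X.std(0)
    yz = (growth - growth.mean()) / growth.std()
    src, *_ = np.linalg.lstsq(Xz, yz, rcond=None)
    for name, coef in zip(["survival", "fecundity", "impact"], src):
        print(f"{name}: SRC = {coef:+.2f}")
    ```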

  4. Nutrient delivery to Lake Winnipeg from the Red-Assiniboine River Basin – A binational application of the SPARROW model

    USGS Publications Warehouse

    Benoy, Glenn A.; Jenkinson, R. Wayne; Robertson, Dale M.; Saad, David A.

    2016-01-01

    Excessive total phosphorus (TP) and total nitrogen (TN) inputs from the Red–Assiniboine River Basin (RARB) have been linked to eutrophication of Lake Winnipeg; therefore, it is important for the management of water resources to understand where and from what sources these nutrients originate. The RARB straddles the Canada–United States border and includes portions of two provinces and three states. This study represents the first binationally focused application of SPAtially Referenced Regressions on Watershed attributes (SPARROW) models to estimate loads and sources of TP and TN by jurisdiction and basin at multiple spatial scales. Major hurdles overcome to develop these models included: (1) harmonization of geospatial data sets, particularly construction of a contiguous stream network; and (2) use of novel calibration steps to accommodate limitations in spatial variability across the model extent and in the number of calibration sites. Using nutrient inputs for a 2002 base year, a RARB TP SPARROW model was calibrated that included inputs from agriculture, forests and wetlands, wastewater treatment plants (WWTPs) and stream channels, and a TN model was calibrated that included inputs from agriculture, WWTPs and atmospheric deposition. At the RARB outlet, downstream from Winnipeg, Manitoba, the majority of the delivered TP and TN came from the Red River Basin (90%), followed by the Upper Assiniboine River and Souris River basins. Agriculture was the single most important TP and TN source for each major basin, province and state. In general, stream channels (historically deposited nutrients and bank erosion) were the second most important source of TP. Performance metrics for the RARB SPARROW model are similarly robust compared to other, larger US SPARROW models, making it a potentially useful tool to address questions of where nutrients originate and their relative contributions to loads delivered to Lake Winnipeg.

  5. Influence of variable selection on partial least squares discriminant analysis models for explosive residue classification

    NASA Astrophysics Data System (ADS)

    De Lucia, Frank C., Jr.; Gottfried, Jennifer L.

    2011-02-01

    Using a series of thirteen organic materials that includes novel high-nitrogen energetic materials, conventional organic military explosives, and benign organic materials, we have demonstrated the importance of variable selection for maximizing residue discrimination with partial least squares discriminant analysis (PLS-DA). We built several PLS-DA models using different variable sets based on laser induced breakdown spectroscopy (LIBS) spectra of the organic residues on an aluminum substrate under an argon atmosphere. The model classification results for each sample are presented and the influence of the variables on these results is discussed. We found that using the whole spectra as the data input for the PLS-DA model gave the best results. However, variables due to the surrounding atmosphere and the substrate contribute to discrimination when the whole spectra are used, indicating this may not be the most robust model. Further iterative testing with additional validation data sets is necessary to determine the most robust model.
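
    One common way to run PLS-DA in Python is to regress one-hot class labels with PLS, as sketched below for a whole-spectrum input versus a hypothetical restricted variable set. The synthetic "spectra", class structure, and channel subset are assumptions for illustration, not the authors' LIBS data or model.

    ```python
    # PLS-DA via PLS regression on one-hot labels; compare two variable sets.
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression

    rng = np.random.default_rng(0)
    n_samples, n_channels, n_classes = 120, 300, 3
    y = rng.integers(0, n_classes, n_samples)
    X = rng.normal(size=(n_samples, n_channels)) + 0.2 * y[:, None]

    Y = np.eye(n_classes)[y]              # one-hot responses for PLS-DA

    for name, cols in [("whole spectrum", slice(None)),
                       ("selected lines", slice(0, 50))]:  # hypothetical subset
        pls = PLSRegression(n_components=5).fit(X[:, cols], Y)
        pred = pls.predict(X[:, cols]).argmax(axis=1)      # class = max score
        print(name, "training accuracy:", round(float((pred == y).mean()), 2))
    ```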

  6. Impact damage resistance of composite fuselage structure, part 1

    NASA Technical Reports Server (NTRS)

    Dost, E. F.; Avery, W. B.; Ilcewicz, L. B.; Grande, D. H.; Coxon, B. R.

    1992-01-01

    The impact damage resistance of laminated composite transport aircraft fuselage structures was studied experimentally. A statistically based designed experiment was used to examine numerous material, laminate, structural, and extrinsic (e.g., impactor type) variables. The relative importance and quantitative measure of the effect of each variable and variable interactions on responses including impactor dynamic response, visibility, and internal damage state were determined. The study utilized 32 three-stiffener panels, each with a unique combination of material type, material forms, and structural geometry. Two manufacturing techniques, tow placement and tape lamination, were used to build panels representative of potential fuselage crown, keel, and lower side-panel designs. Various combinations of impactor variables representing various foreign-object-impact threats to the aircraft were examined. Impacts performed at different structural locations within each panel (e.g., skin midbay, stiffener attaching flange, etc.) were considered separate parallel experiments. The relationship between input variables, measured damage states, and structural response to this damage are presented including recommendations for materials and impact test methods for fuselage structure.

  7. Wind-driven Circulation and Freshwater Fluxes off Sri Lanka: 4D-Sampling with Autonomous Gliders

    DTIC Science & Technology

    2015-09-30

    riverine freshwater input, precipitation and atmospheric forcing act to govern Bay of Bengal upper ocean variability, water mass formation and... fraction of the water moving through the section is going south, carrying freshwater out of the Bay of Bengal. Currents near the coast have the same... transport of freshwater from the Northern Bay of Bengal, as well as the import of salty Arabian Sea Water, are being investigated using all the...

  8. Using LiDAR and quickbird data to model plant production and quantify uncertainties associated with wetland detection and land cover generalizations

    USGS Publications Warehouse

    Cook, B.D.; Bolstad, P.V.; Naesset, E.; Anderson, R. Scott; Garrigues, S.; Morisette, J.T.; Nickeson, J.; Davis, K.J.

    2009-01-01

    Spatiotemporal data from satellite remote sensing and surface meteorology networks have made it possible to continuously monitor global plant production, and to identify global trends associated with land cover/use and climate change. Gross primary production (GPP) and net primary production (NPP) are routinely derived from the Moderate Resolution Imaging Spectroradiometer (MODIS) onboard satellites Terra and Aqua, and estimates generally agree with independent measurements at validation sites across the globe. However, the accuracy of GPP and NPP estimates in some regions may be limited by the quality of model input variables and heterogeneity at fine spatial scales. We developed new methods for deriving model inputs (i.e., land cover, leaf area, and photosynthetically active radiation absorbed by plant canopies) from airborne laser altimetry (LiDAR) and Quickbird multispectral data at resolutions ranging from about 30 m to 1 km. In addition, LiDAR-derived biomass was used as a means for computing carbon-use efficiency. Spatial variables were used with temporal data from ground-based monitoring stations to compute a six-year GPP and NPP time series for a 3600 ha study site in the Great Lakes region of North America. Model results compared favorably with independent observations from a 400 m flux tower and a process-based ecosystem model (BIOME-BGC), but only after removing vapor pressure deficit as a constraint on photosynthesis from the MODIS global algorithm. Fine-resolution inputs captured more of the spatial variability, but estimates were similar to coarse-resolution data when integrated across the entire landscape. Failure to account for wetlands had little impact on landscape-scale estimates, because vegetation structure, composition, and conversion efficiencies were similar to upland plant communities. Plant productivity estimates were noticeably improved using LiDAR-derived variables, while uncertainties associated with land cover generalizations and wetlands in this largely forested landscape were considered less important.

  10. Research on the Complexity of Dual-Channel Supply Chain Model in Competitive Retailing Service Market

    NASA Astrophysics Data System (ADS)

    Ma, Junhai; Li, Ting; Ren, Wenbo

    2017-06-01

    This paper examines the optimal decisions of a dual-channel game model that considers the inputs of retailing service. We analyze how the adjustment speed of service inputs affects system complexity and market performance, and explore the stability of the equilibrium points with parameter basin diagrams. Chaos control is realized by the variable feedback method. The numerical simulation shows that complex behavior, such as period-doubling bifurcation and chaos, can cause the system to become unstable. We measure the performance of the model in different periods by analyzing the variation of an average profit index. The theoretical results show that the percentage share of demand and the cross-service coefficients have an important influence on the stability of the system and its feasible basin of attraction.

  11. Investigating Uncertainty in Predicting Carbon Dynamics in North American Biomes: Putting Support-Effect Bias in Perspective

    NASA Technical Reports Server (NTRS)

    Dungan, Jennifer L.; Brass, Jim (Technical Monitor)

    2001-01-01

    A fundamental strategy in NASA's Earth Observing System's (EOS) monitoring of vegetation and its contribution to the global carbon cycle is to rely on deterministic, process-based ecosystem models to make predictions of carbon flux over large regions. These models are parameterized (that is, the input variables are derived) using remotely sensed images such as those from the Moderate Resolution Imaging Spectroradiometer (MODIS), ground measurements and interpolated maps. Since early applications of these models, investigators have noted that results depend partly on the spatial support of the input variables. In general, the larger the support of the input data, the greater the chance that the effects of important components of the ecosystem will be averaged out. A review of previous work shows that using large supports can cause either positive or negative bias in carbon flux predictions. To put the magnitude and direction of these biases in perspective, we must quantify the range of uncertainty on our best measurements of carbon-related variables made on equivalent areas. In other words, support-effect bias should be placed in the context of prediction uncertainty from other sources. If the range of uncertainty at the smallest support is less than the support-effect bias, more research emphasis should probably be placed on support sizes that are intermediate between those of field measurements and MODIS. If the uncertainty range at the smallest support is larger than the support-effect bias, the accuracy of MODIS-based predictions will be difficult to quantify and more emphasis should be placed on field-scale characterization and sampling. This talk will describe methods to address these issues using a field measurement campaign in North America and "upscaling" using geostatistical estimation and simulation.

  12. Variation in active and passive resource inputs to experimental pools: mechanisms and possible consequences for food webs

    USGS Publications Warehouse

    Kraus, Johanna M.; Pletcher, Leanna T.; Vonesh, James R.

    2010-01-01

    1. Cross-ecosystem movements of resources, including detritus, nutrients and living prey, can strongly influence food web dynamics in recipient habitats. Variation in resource inputs is thought to be driven by factors external to the recipient habitat (e.g. donor habitat productivity and boundary conditions). However, inputs of or by ‘active’ living resources may be strongly influenced by recipient habitat quality when organisms exhibit behavioural habitat selection when crossing ecosystem boundaries. 2. To examine whether behavioural responses to recipient habitat quality alter the relative inputs of ‘active’ living and ‘passive’ detrital resources to recipient food webs, we manipulated the presence of caged predatory fish and measured biomass, energy and organic content of inputs to outdoor experimental pools of adult aquatic insects, frog eggs, terrestrial plant matter and terrestrial arthropods. 3. Caged fish reduced the biomass, energy and organic matter donated to pools by tree frog eggs by ∼70%, but did not alter insect colonisation or passive allochthonous inputs of terrestrial arthropods and plant material. Terrestrial plant matter and adult aquatic insects provided the most energy and organic matter inputs to the pools (40–50%), while terrestrial arthropods provided the least (7%). Inputs of frog eggs were relatively small but varied considerably among pools and over time (3%, range = 0–20%). Absolute and proportional amounts varied by input type. 4. These results indicate that aquatic predators can strongly affect the magnitude of active, but not passive, inputs and that the effect of recipient habitat quality on active inputs is variable. Furthermore, some active inputs (i.e. aquatic insect colonists) can provide similar amounts of energy and organic matter as passive inputs of terrestrial plant matter, which are well known to be important. Because inputs differ in quality and in the trophic level they subsidise, proportional changes in input type could have strong effects on recipient food webs. 5. Cross-ecosystem resource inputs have previously been characterised as donor-controlled. However, control by the recipient food web could lead to greater feedback between resource flow and consumer dynamics than has been appreciated so far.

  13. Factors that contribute to physician variability in decisions to limit life support in the ICU: a qualitative study.

    PubMed

    Wilson, Michael E; Rhudy, Lori M; Ballinger, Beth A; Tescher, Ann N; Pickering, Brian W; Gajic, Ognjen

    2013-06-01

    Our aim was to explore reasons for physician variability in decisions to limit life support in the intensive care unit (ICU) utilizing qualitative methodology. Single center study consisting of semi-structured interviews with experienced physicians and nurses. Seventeen intensivists from medical (n = 7), surgical (n = 5), and anesthesia (n = 5) critical care backgrounds, and ten nurses from medical (n = 5) and surgical (n = 5) ICU backgrounds were interviewed. Principles of grounded theory were used to analyze the interview transcripts. Eleven factors within four categories were identified that influenced physician variability in decisions to limit life support: (1) physician work environment-workload and competing priorities, shift changes and handoffs, and incorporation of nursing input; (2) physician experiences-of unexpected patient survival, and of limiting life support in physician's family; (3) physician attitudes-investment in a good surgical outcome, specialty perspective, values and beliefs; and (4) physician relationship with patient and family-hearing the patient's wishes firsthand, engagement in family communication, and family negotiation. We identified several factors which physicians and nurses perceived were important sources of physician variability in decisions to limit life support. Ways to raise awareness and ameliorate the potentially adverse effects of factors such as workload, competing priorities, shift changes, and handoffs should be explored. Exposing intensivists to long term patient outcomes, formalizing nursing input, providing additional training, and emphasizing firsthand knowledge of patient wishes may improve decision making.

  14. MULTIPLIER CIRCUIT

    DOEpatents

    Thomas, R.E.

    1959-01-20

    An electronic circuit is presented for automatically computing the product of two selected variables by multiplying the voltage pulses proportional to the variables. The multiplier circuit has a plurality of parallel resistors of predetermined values connected through separate gate circuits between a first input and the output terminal. One voltage pulse is applied to the first input while the second voltage pulse is applied to control circuitry for the respective gate circuits. The magnitude of the second voltage pulse selects the resistors upon which the first voltage pulse is impressed, whereby the resultant output voltage is proportional to the product of the input voltage pulses.

  15. Partitioning the impacts of spatial and climatological rainfall variability in urban drainage modeling

    NASA Astrophysics Data System (ADS)

    Peleg, Nadav; Blumensaat, Frank; Molnar, Peter; Fatichi, Simone; Burlando, Paolo

    2017-03-01

    The performance of urban drainage systems is typically examined using hydrological and hydrodynamic models where rainfall input is uniformly distributed, i.e., derived from a single or very few rain gauges. When models are fed with a single uniformly distributed rainfall realization, the response of the urban drainage system to the rainfall variability remains unexplored. The goal of this study was to understand how climate variability and spatial rainfall variability, jointly or individually considered, affect the response of a calibrated hydrodynamic urban drainage model. A stochastic spatially distributed rainfall generator (STREAP - Space-Time Realizations of Areal Precipitation) was used to simulate many realizations of rainfall for a 30-year period, accounting for both climate variability and spatial rainfall variability. The generated rainfall ensemble was used as input into a calibrated hydrodynamic model (EPA SWMM - the US EPA's Storm Water Management Model) to simulate surface runoff and channel flow in a small urban catchment in the city of Lucerne, Switzerland. The variability of peak flows in response to rainfall of different return periods was evaluated at three different locations in the urban drainage network and partitioned among its sources. The main contribution to the total flow variability was found to originate from the natural climate variability (on average over 74 %). In addition, the relative contribution of the spatial rainfall variability to the total flow variability was found to increase with longer return periods. This suggests that while the use of spatially distributed rainfall data can supply valuable information for sewer network design (typically based on rainfall with return periods from 5 to 15 years), there is a more pronounced relevance when conducting flood risk assessments for larger return periods. The results show the importance of using multiple distributed rainfall realizations in urban hydrology studies to capture the total flow variability in the response of the urban drainage systems to heavy rainfall events.

  16. Input Subject Diversity Accelerates the Growth of Tense and Agreement: Indirect Benefits From a Parent-Implemented Intervention

    PubMed Central

    Rispoli, Matthew; Holt, Janet K.

    2017-01-01

    Purpose This follow-up study examined whether a parent intervention that increased the diversity of lexical noun phrase subjects in parent input and accelerated children's sentence diversity (Hadley et al., 2017) had indirect benefits on tense/agreement (T/A) morphemes in parent input and children's spontaneous speech. Method Differences in input variables related to T/A marking were compared for parents who received toy talk instruction and a quasi-control group: input informativeness and full is declaratives. Language growth on tense agreement productivity (TAP) was modeled for 38 children from language samples obtained at 21, 24, 27, and 30 months. Parent input properties following instruction and children's growth in lexical diversity and sentence diversity were examined as predictors of TAP growth. Results Instruction increased parent use of full is declaratives (ηp² ≥ .25) but not input informativeness. Children's sentence diversity was also a significant time-varying predictor of TAP growth. Two input variables, lexical noun phrase subject diversity and full is declaratives, were also significant predictors, even after controlling for children's sentence diversity. Conclusions These findings establish a link between children's sentence diversity and the development of T/A morphemes and provide evidence about characteristics of input that facilitate growth in this grammatical system. PMID:28892819

  17. Uncertainty analysis of the simulations of effects of discharging treated wastewater to the Red River of the North at Fargo, North Dakota, and Moorhead, Minnesota

    USGS Publications Warehouse

    Wesolowski, Edwin A.

    1996-01-01

    Two separate studies to simulate the effects of discharging treated wastewater to the Red River of the North at Fargo, North Dakota, and Moorhead, Minnesota, have been completed. In the first study, the Red River at Fargo Water-Quality Model was calibrated and verified for ice-free conditions. In the second study, the Red River at Fargo Ice-Cover Water-Quality Model was verified for ice-cover conditions. To better understand and apply the Red River at Fargo Water-Quality Model and the Red River at Fargo Ice-Cover Water-Quality Model, the uncertainty associated with simulated constituent concentrations and property values was analyzed and quantified using the Enhanced Stream Water Quality Model-Uncertainty Analysis. The Monte Carlo simulation and first-order error analysis methods were used to analyze the uncertainty in simulated values for six constituents and properties at sites 5, 10, and 14 (upstream to downstream order). The constituents and properties analyzed for uncertainty are specific conductance, total organic nitrogen (reported as nitrogen), total ammonia (reported as nitrogen), total nitrite plus nitrate (reported as nitrogen), 5-day carbonaceous biochemical oxygen demand for ice-cover conditions and ultimate carbonaceous biochemical oxygen demand for ice-free conditions, and dissolved oxygen. Results are given in detail for both the ice-cover and ice-free conditions for specific conductance, total ammonia, and dissolved oxygen. The sensitivity and uncertainty of the simulated constituent concentrations and property values to input variables differ substantially between ice-cover and ice-free conditions. During ice-cover conditions, simulated specific-conductance values are most sensitive to the headwater-source specific-conductance values upstream of site 10 and the point-source specific-conductance values downstream of site 10. These headwater-source and point-source specific-conductance values also are the key sources of uncertainty. Simulated total ammonia concentrations are most sensitive to the point-source total ammonia concentrations at all three sites. Other input variables that contribute substantially to the variability of simulated total ammonia concentrations are the headwater-source total ammonia and the instream reaction coefficient for biological decay of total ammonia to total nitrite. Simulated dissolved-oxygen concentrations at all three sites are most sensitive to the headwater-source dissolved-oxygen concentration. This input variable is the key source of variability for simulated dissolved-oxygen concentrations at sites 5 and 10. Headwater-source and point-source dissolved-oxygen concentrations are the key sources of variability for simulated dissolved-oxygen concentrations at site 14. During ice-free conditions, simulated specific-conductance values at all three sites are most sensitive to the headwater-source specific-conductance values. Headwater-source specific-conductance values also are the key source of uncertainty. The input variables to which total ammonia and dissolved oxygen are most sensitive vary from site to site and may or may not correspond to the input variables that contribute the most to the variability. The input variables that contribute the most to the variability of simulated total ammonia concentrations are point-source total ammonia, the instream reaction coefficient for biological decay of total ammonia to total nitrite, and Manning's roughness coefficient. The input variables that contribute the most to the variability of simulated dissolved-oxygen concentrations are reaeration rate, sediment oxygen demand rate, and headwater-source algae as chlorophyll a.
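
    As a minimal illustration of the first-order error analysis method used above, the sketch below propagates input variances through a toy model with finite-difference sensitivities, using Var(F) ≈ Σ (∂F/∂x_i)² σ_i². The "dissolved-oxygen model" and all numbers are invented stand-ins, not the stream water-quality model equations.

    ```python
    # First-order (Taylor-series) error propagation on a toy DO model.
    import numpy as np

    def do_model(headwater_do, reaeration, sod):
        # Hypothetical linearized dissolved-oxygen response.
        return headwater_do + 2.0 * reaeration - 1.5 * sod

    x0 = np.array([8.0, 0.5, 0.3])      # nominal input values
    sd = np.array([0.5, 0.10, 0.05])    # input standard deviations
    eps = 1e-6

    # Finite-difference sensitivities dF/dx_i at the nominal point
    grads = np.array([
        (do_model(*(x0 + eps * np.eye(3)[i])) - do_model(*x0)) / eps
        for i in range(3)
    ])
    contrib = (grads * sd) ** 2         # per-input variance contributions
    print("output sd:", np.sqrt(contrib.sum()).round(3))
    print("variance shares:", (contrib / contrib.sum()).round(2))
    ```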

  18. Kernel-PCA data integration with enhanced interpretability

    PubMed Central

    2014-01-01

    Background Nowadays, combining different sources of information to improve the available biological knowledge is a challenge in bioinformatics. Among the most powerful methods for integrating heterogeneous data types are kernel-based methods. Kernel-based data integration approaches consist of two basic steps: first, the right kernel is chosen for each data set; second, the kernels from the different data sources are combined to give a complete representation of the available data for a given statistical task. Results We analyze the integration of data from several sources of information using kernel PCA, from the point of view of reducing dimensionality. Moreover, we improve the interpretability of kernel PCA by adding to the plot the representation of the input variables that belong to any dataset. In particular, for each input variable or linear combination of input variables, we can represent the direction of maximum growth locally, which allows us to identify those samples with higher/lower values of the variables analyzed. Conclusions The integration of different datasets and the simultaneous representation of samples and variables together give us a better understanding of biological knowledge. PMID:25032747
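
    A minimal sketch of the two-step integration, assuming scikit-learn: compute one kernel per data source, combine the kernels (here an unweighted sum), and run kernel PCA on the precomputed combination. The data sets and kernel choices are illustrative; the paper's variable-representation enhancement is not reproduced here.

    ```python
    # Kernel PCA on a sum of per-source kernels (synthetic data).
    import numpy as np
    from sklearn.decomposition import KernelPCA
    from sklearn.metrics.pairwise import rbf_kernel, linear_kernel

    rng = np.random.default_rng(0)
    expr = rng.normal(size=(100, 50))   # e.g., an expression data set
    clin = rng.normal(size=(100, 5))    # e.g., clinical covariates

    K = rbf_kernel(expr) + linear_kernel(clin)   # one kernel per source, summed
    scores = KernelPCA(n_components=2, kernel="precomputed").fit_transform(K)
    print(scores.shape)                          # (100, 2) sample projections
    ```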

  19. Exploring objective climate classification for the Himalayan arc and adjacent regions using gridded data sources

    NASA Astrophysics Data System (ADS)

    Forsythe, N.; Blenkinsop, S.; Fowler, H. J.

    2015-05-01

    A three-step climate classification was applied to a spatial domain covering the Himalayan arc and adjacent plains regions using input data from four global meteorological reanalyses. Input variables were selected based on an understanding of the climatic drivers of regional water resource variability and crop yields. Principal component analysis (PCA) of those variables and k-means clustering on the PCA outputs revealed a reanalysis ensemble consensus for eight macro-climate zones. Spatial statistics of the input variables for each zone revealed consistent, distinct climatologies. This climate classification approach has potential for enhancing assessment of climatic influences on water resources and food security, as well as for characterising the skill and bias of gridded data sets, both meteorological reanalyses and climate models, in reproducing subregional climatologies. Through their spatial descriptors (area, geographic centroid, elevation mean and range), climate classifications also provide metrics, beyond simple changes in individual variables, with which to assess the magnitude of projected climate change. Such sophisticated metrics are of particular interest for regions, including mountainous areas, where natural and anthropogenic systems are expected to be sensitive to incremental climate shifts.
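
    The three-step pipeline (standardize the input variables, reduce with PCA, cluster the PCA scores with k-means) can be sketched as below, assuming scikit-learn; the synthetic grid-cell matrix stands in for the reanalysis-derived variables, and k = 8 matches the consensus zone count reported above.

    ```python
    # Standardize -> PCA -> k-means, a sketch of the classification pipeline.
    import numpy as np
    from sklearn.preprocessing import StandardScaler
    from sklearn.decomposition import PCA
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(0)
    cells = rng.normal(size=(5000, 12))   # grid cells x climate variables

    Z = StandardScaler().fit_transform(cells)
    pcs = PCA(n_components=4).fit_transform(Z)
    zones = KMeans(n_clusters=8, n_init=10, random_state=0).fit_predict(pcs)
    print(np.bincount(zones))             # cells per macro-climate zone
    ```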

  20. Locomotor sensory organization test: a novel paradigm for the assessment of sensory contributions in gait.

    PubMed

    Chien, Jung Hung; Eikema, Diderik-Jan Anthony; Mukherjee, Mukul; Stergiou, Nicholas

    2014-12-01

    Feedback based balance control requires the integration of visual, proprioceptive and vestibular input to detect the body's movement within the environment. When the accuracy of sensory signals is compromised, the system reorganizes the relative contributions through a process of sensory recalibration, for upright postural stability to be maintained. Whereas this process has been studied extensively in standing using the Sensory Organization Test (SOT), less is known about these processes in more dynamic tasks such as locomotion. In the present study, ten healthy young adults performed the six conditions of the traditional SOT to quantify standing postural control when exposed to sensory conflict. The same subjects performed these six conditions using a novel experimental paradigm, the Locomotor SOT (LSOT), to study dynamic postural control during walking under similar types of sensory conflict. To quantify postural control during walking, the net Center of Pressure sway variability was used. This corresponds to the Performance Index of the center of pressure trajectory, which is used to quantify postural control during standing. Our results indicate that dynamic balance control during locomotion in healthy individuals is affected by the systematic manipulation of multisensory inputs. The sway variability patterns observed during locomotion reflect similar balance performance with standing posture, indicating that similar feedback processes may be involved. However, the contribution of visual input is significantly increased during locomotion, compared to standing in similar sensory conflict conditions. The increased visual gain in the LSOT conditions reflects the importance of visual input for the control of locomotion. Since balance perturbations tend to occur in dynamic tasks and in response to environmental constraints not present during the SOT, the LSOT may provide additional information for clinical evaluation on healthy and deficient sensory processing.

  1. Calculating Freshwater Input from Iceberg Melt in Greenlandic Fjords by Combining In Situ Observations of Iceberg Movement with High Resolution Satellite Imagery

    NASA Astrophysics Data System (ADS)

    Sulak, D. J.; Sutherland, D.; Stearns, L. A.; Hamilton, G. S.

    2015-12-01

    Understanding fjord circulation in Greenland's outlet glacial fjords is crucial to explaining recent temporal and spatial variability in glacier dynamics, as well as freshwater transport on the continental shelf. The fjords are commonly assumed to exhibit a plume driven circulation that draws in warmer and saltier Atlantic-origin water toward the glacier at depth. Freshwater input at glacier termini directly drives this circulation and significantly influences water column stratification, which indirectly feeds back on the plume driven circulation. Previous work has focused on freshwater inputs from surface runoff and submarine melting, but the contribution from iceberg melt, a potentially important freshwater source, has not been quantified. Here, we develop a new technique combining in situ observations of movement from iceberg-mounted GPS units with multispectral satellite imagery from Landsat 8. The combination of datasets allows us to examine the details of iceberg movement and quantify mean residence times in a given fjord. We then use common melt rate parameterizations to estimate freshwater input for a given iceberg, utilizing novel satellite-derived iceberg distributions to scale up to a fjord-wide freshwater contribution. We apply this technique to Rink Isbræ and Kangerlussuup Sermia in west Greenland, and Helheim Glacier in southeast Greenland. The analysis can be rapidly expanded to look at other systems as well as seasonal and interannual changes in how icebergs affect the circulation and stratification of Greenland's outlet glacial fjords. Ultimately, this work will lead to a more complete understanding of the wide range of factors that control the observed regional variability in Greenland's glaciers.

  2. Blade loss transient dynamics analysis. Volume 3: User's manual for TETRA program

    NASA Technical Reports Server (NTRS)

    Black, G. R.; Gallardo, V. C.; Storace, A. S.; Sagendorph, F.

    1981-01-01

    The user's manual for TETRA contains program logic, flow charts, error messages, input sheets, modeling instructions, option descriptions, input variable descriptions, and demonstration problems. The process of obtaining a NASTRAN 17.5-generated modal input file for TETRA is also described with a worked sample.

  3. CHARACTERISTIC LENGTH SCALE OF INPUT DATA IN DISTRIBUTED MODELS: IMPLICATIONS FOR MODELING GRID SIZE. (R824784)

    EPA Science Inventory

    The appropriate spatial scale for a distributed energy balance model was investigated by: (a) determining the scale of variability associated with the remotely sensed and GIS-generated model input data; and (b) examining the effects of input data spatial aggregation on model resp...

  4. Multiplexer and time duration measuring circuit

    DOEpatents

    Gray, Jr., James

    1980-01-01

    A multiplexer device is provided for multiplexing data in the form of randomly developed, variable width pulses from a plurality of pulse sources to a master storage. The device includes a first multiplexer unit which includes a plurality of input circuits each coupled to one of the pulse sources, with all input circuits being disabled when one input circuit receives an input pulse so that only one input pulse is multiplexed by the multiplexer unit at any one time.

  5. Bayesian Network Structure Learning for Urban Land Use Classification from Landsat ETM+ and Ancillary Data

    NASA Astrophysics Data System (ADS)

    Park, M.; Stenstrom, M. K.

    2004-12-01

    Recognizing urban information from satellite imagery is problematic due to the diverse features and dynamic changes of urban land use. The use of Landsat imagery for urban land use classification involves inherent uncertainty due to its spatial resolution and the low separability among land uses. To address the uncertainty problem, we investigated the performance of Bayesian networks for classifying urban land use, since Bayesian networks provide a quantitative way of handling uncertainty and have been successfully used in many areas. In this study, we developed optimized networks for urban land use classification from Landsat ETM+ images of the Marina del Rey area based on the USGS land cover/use classification level III. The networks started from a tree structure based on mutual information between variables and added links to improve accuracy. This methodology offers several advantages: (1) The network structure shows the dependency relationships between variables. The class node value can be predicted even with particular band information missing due to sensor system error; the missing information can be inferred from other dependent bands. (2) The network structure provides information on which variables are important for the classification, which is not available from conventional classification methods such as neural networks and maximum likelihood classification. In our case, for example, bands 1, 5 and 6 are the most important inputs in determining the land use of each pixel. (3) The networks can be reduced to those input variables important for classification. This minimizes the problem without considering all possible variables. We also examined the effect of incorporating ancillary data: geospatial information such as X and Y coordinate values of each pixel and DEM data, and vegetation indices such as NDVI and the Tasseled Cap transformation. The results showed that the locational information improved overall accuracy (81%) and kappa coefficient (76%), and lowered the omission and commission errors compared with using only spectral data (accuracy 71%, kappa coefficient 62%). Incorporating DEM data did not significantly improve overall accuracy (74%) or kappa coefficient (66%) but lowered the omission and commission errors. Incorporating NDVI did not much improve the overall accuracy (72%) or kappa coefficient (65%). Including the Tasseled Cap transformation reduced the accuracy (accuracy 70%, kappa 61%). Therefore, the additional information from the DEM and vegetation indices was not as useful as the locational ancillary data.
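
    The tree-initialization step can be sketched in the Chow-Liu style: weight every variable pair by its mutual information and keep a maximum-spanning tree. The snippet below assumes scikit-learn and networkx with discretized band values; it illustrates only the starting skeleton, not the subsequent link additions or the classification itself.

    ```python
    # Mutual-information tree skeleton over discretized band variables.
    import numpy as np
    import networkx as nx
    from sklearn.metrics import mutual_info_score

    rng = np.random.default_rng(0)
    bands = rng.integers(0, 8, size=(2000, 6))   # 6 discretized band variables

    G = nx.Graph()
    for i in range(6):
        for j in range(i + 1, 6):
            mi = mutual_info_score(bands[:, i], bands[:, j])
            G.add_edge(f"band{i+1}", f"band{j+1}", weight=mi)

    tree = nx.maximum_spanning_tree(G)           # tree skeleton of the network
    print(sorted(tree.edges()))
    ```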

  6. Prediction of problematic wine fermentations using artificial neural networks.

    PubMed

    Román, R César; Hernández, O Gonzalo; Urtubia, U Alejandra

    2011-11-01

    Artificial neural networks (ANNs) have been used for the recognition of non-linear patterns, a characteristic of bioprocesses like wine production. In this work, ANNs were tested to predict problems of wine fermentation. A database of about 20,000 data points from industrial fermentations of Cabernet Sauvignon and 33 variables was used. Two different ways of inputting data into the model were studied: by points and by fermentation. Additionally, different sub-cases were studied by varying the predictor variables (total sugar, alcohol, glycerol, density, organic acids and nitrogen compounds) and the time of fermentation (72, 96 and 256 h). The input of data by fermentations gave better results than the input of data by points. In fact, it was possible to predict 100% of normal and problematic fermentations using three predictor variables: sugars, density and alcohol at 72 h (3 days). Overall, ANNs achieved 80% prediction accuracy using only one predictor variable at 72 h; however, it is recommended to add more fermentations to confirm this promising result.

  7. Applying an intelligent model and sensitivity analysis to inspect mass transfer kinetics, shrinkage and crust color changes of deep-fat fried ostrich meat cubes.

    PubMed

    Amiryousefi, Mohammad Reza; Mohebbi, Mohebbat; Khodaiyan, Faramarz

    2014-01-01

    The objectives of this study were to use image analysis and an artificial neural network (ANN) to predict mass transfer kinetics as well as color changes and shrinkage of deep-fat fried ostrich meat cubes. Two generalized feedforward networks were separately developed using the operating conditions as inputs. Results, based on the high correlation coefficients between experimental and predicted values, showed a proper fit. Sensitivity analysis of the selected ANNs showed that, among the input variables, moisture content (MC) and fat content (FC) were most sensitive to frying temperature. Similarly, for the second ANN architecture, microwave power density was the most influential variable, having the maximum influence on both shrinkage percentage and color changes. Copyright © 2013 Elsevier Ltd. All rights reserved.

  8. Measurement of Low Carbon Economy Efficiency with a Three-Stage Data Envelopment Analysis: A Comparison of the Largest Twenty CO2 Emitting Countries

    PubMed Central

    Liu, Xiang; Liu, Jia

    2016-01-01

    This paper employs a three-stage approach to estimate low carbon economy efficiency in the largest twenty CO2 emitting countries from 2000 to 2012. The approach includes the following three stages: (1) use of a data envelopment analysis (DEA) model with undesirable output to estimate the low carbon economy efficiency and calculate the input and output slacks; (2) use of a stochastic frontier approach to eliminate the impacts of external environment variables on these slacks; (3) re-estimation of the efficiency with adjusted inputs and outputs to reflect the capacity of the government to develop a low carbon economy. The results indicate that low carbon economy efficiency performance in these countries worsened during the studied period. The efficiency scores in the third stage are larger than those in the first stage. Moreover, in general, low carbon economy efficiency in Annex I countries of the United Nations Framework Convention on Climate Change (UNFCCC) is better than that in Non-Annex I countries. However, the gap in the average efficiency score between Annex I and Non-Annex I countries in the first stage is smaller than that in the third stage. This implies that the external environment variables exert greater influence on Non-Annex I countries than on Annex I countries. These external environment variables should be taken into account in transnational negotiations over the responsibility for promoting CO2 reductions. Most importantly, the developed countries (mostly in Annex I) should help the developing countries (mostly in Non-Annex I) to reduce carbon emissions by opening or expanding trade, for example by encouraging the import and export of energy-saving technology and by sharing emission reduction technology. PMID:27834890
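
    To make stage 1 concrete, the sketch below solves a standard input-oriented CCR DEA efficiency score for one decision-making unit as a linear program with SciPy. The three-DMU data are synthetic, and the undesirable-output treatment and the stage 2-3 stochastic-frontier slack adjustment are omitted.

    ```python
    # Input-oriented CCR DEA for one DMU: min theta s.t. X@lam <= theta*x_k,
    # Y@lam >= y_k, lam >= 0 (synthetic inputs/outputs).
    import numpy as np
    from scipy.optimize import linprog

    X = np.array([[2.0, 3.0, 4.0],      # input rows x DMU columns
                  [1.0, 2.0, 3.0]])
    Y = np.array([[1.0, 2.0, 2.5]])     # output rows x DMU columns
    k = 0                               # DMU under evaluation

    n = X.shape[1]
    c = np.r_[1.0, np.zeros(n)]         # decision vars: [theta, lam_1..lam_n]
    A_in = np.c_[-X[:, k], X]           # X@lam - theta*x_k <= 0
    A_out = np.c_[np.zeros(Y.shape[0]), -Y]   # -Y@lam <= -y_k
    res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.r_[np.zeros(X.shape[0]), -Y[:, k]],
                  bounds=[(None, None)] + [(0, None)] * n)
    print("efficiency theta =", round(res.x[0], 3))
    ```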

  10. Meta-modeling of the pesticide fate model MACRO for groundwater exposure assessments using artificial neural networks

    NASA Astrophysics Data System (ADS)

    Stenemo, Fredrik; Lindahl, Anna M. L.; Gärdenäs, Annemieke; Jarvis, Nicholas

    2007-08-01

    Several simple index methods that use easily accessible data have been developed and included in decision-support systems to estimate pesticide leaching across larger areas. However, these methods often lack important process descriptions (e.g. macropore flow), which brings into question their reliability. Descriptions of macropore flow have been included in simulation models, but these are too complex and demanding for spatial applications. To resolve this dilemma, a neural network simulation meta-model of the dual-permeability macropore flow model MACRO was created for pesticide groundwater exposure assessment. The model was parameterized using pedotransfer functions that require as input the clay and sand content of the topsoil and subsoil, and the topsoil organic carbon content. The meta-model also requires the topsoil pesticide half-life and the soil organic carbon sorption coefficient as input. A fully connected feed-forward multilayer perceptron classification network with two hidden layers, linked to fully connected feed-forward multilayer perceptron neural networks with one hidden layer, trained on sub-sets of the target variable, was shown to be a suitable meta-model for the intended purpose. A Fourier amplitude sensitivity test showed that the model output (the 80th percentile average yearly pesticide concentration at 1 m depth for a 20 year simulation period) was sensitive to all input parameters. The two input parameters related to pesticide characteristics (i.e. soil organic carbon sorption coefficient and topsoil pesticide half-life) were the most influential, but texture in the topsoil was also quite important since it was assumed to control the mass exchange coefficient that regulates the strength of macropore flow. This is in contrast to models based on the advection-dispersion equation where soil texture is relatively unimportant. The use of the meta-model is exemplified with a case-study where the spatial variability of pesticide leaching is mapped for a small field. It was shown that the area of the field that contributes most to leaching depends on the properties of the compound in question. It is concluded that the simulation meta-model of MACRO should prove useful for mapping relative pesticide leaching risks at large scales.

  11. Two-Stage Variable Sample-Rate Conversion System

    NASA Technical Reports Server (NTRS)

    Tkacenko, Andre

    2009-01-01

    A two-stage variable sample-rate conversion (SRC) system has been proposed as part of a digital signal-processing system in a digital communication radio receiver that utilizes a variety of data rates. The proposed system would be used as an interface between (1) an analog-to-digital converter used in the front end of the receiver to sample an intermediate-frequency signal at a fixed input rate and (2) digitally implemented tracking loops in subsequent stages that operate at various sample rates that are generally lower than the input sample rate. This Two-Stage System would be capable of converting from an input sample rate to a desired lower output sample rate that could be variable and not necessarily a rational fraction of the input rate.
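
    A hedged sketch of the two-stage idea, assuming SciPy: a fixed polyphase stage performs the bulk rational conversion, and a second stage handles the residual, possibly non-rational, ratio by interpolation. The rates, the 1/3 first-stage ratio, and the linear-interpolation second stage are illustrative choices, not the proposed system's filter design.

    ```python
    # Two-stage sample-rate conversion: fixed polyphase stage + variable stage.
    import numpy as np
    from scipy.signal import resample_poly

    fs_in, fs_out = 100_000.0, 31_416.0   # output need not divide the input rate
    t = np.arange(int(fs_in)) / fs_in
    x = np.sin(2 * np.pi * 1000 * t)      # 1 kHz test tone at the fixed input rate

    # Stage 1: fixed rational conversion close to the target (here down by 3)
    y1 = resample_poly(x, up=1, down=3)
    fs1 = fs_in / 3

    # Stage 2: small variable ratio via interpolation onto the output grid
    n_out = int(len(y1) * fs_out / fs1)
    y2 = np.interp(np.arange(n_out) * fs1 / fs_out, np.arange(len(y1)), y1)
    print(len(x), "->", len(y1), "->", len(y2))
    ```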

  12. Integrate-and-fire vs Poisson models of LGN input to V1 cortex: noisier inputs reduce orientation selectivity

    PubMed Central

    Lin, I-Chun; Xing, Dajun; Shapley, Robert

    2014-01-01

    One of the reasons the visual cortex has attracted the interest of computational neuroscience is that it has well-defined inputs. The lateral geniculate nucleus (LGN) of the thalamus is the source of visual signals to the primary visual cortex (V1). Most large-scale cortical network models approximate the spike trains of LGN neurons as simple Poisson point processes. However, many studies have shown that neurons in the early visual pathway are capable of spiking with high temporal precision and their discharges are not Poisson-like. To gain an understanding of how response variability in the LGN influences the behavior of V1, we study response properties of model V1 neurons that receive purely feedforward inputs from LGN cells modeled either as noisy leaky integrate-and-fire (NLIF) neurons or as inhomogeneous Poisson processes. We first demonstrate that the NLIF model is capable of reproducing many experimentally observed statistical properties of LGN neurons. Then we show that a V1 model in which the LGN input to a V1 neuron is modeled as a group of NLIF neurons produces higher orientation selectivity than the one with Poisson LGN input. The second result implies that statistical characteristics of LGN spike trains are important for V1's function. We conclude that physiologically motivated models of V1 need to include more realistic LGN spike trains that are less noisy than inhomogeneous Poisson processes. PMID:22684587
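
    For concreteness, a noisy leaky integrate-and-fire (NLIF) neuron of the general kind described above can be simulated as below; all parameter values are illustrative and are not fit to the LGN recordings in the study.

    ```python
    # Noisy leaky integrate-and-fire neuron, Euler-Maruyama integration.
    import numpy as np

    rng = np.random.default_rng(1)
    dt, T = 1e-4, 1.0                         # 0.1 ms steps, 1 s of simulation
    tau, v_rest = 0.02, -70e-3                # membrane time constant, rest (V)
    v_thresh, v_reset = -50e-3, -65e-3        # spike threshold and reset (V)
    drive, sigma = 25e-3, 3e-3                # constant input and noise (V)

    v, spikes = v_rest, []
    for step in range(int(T / dt)):
        noise = sigma * np.sqrt(dt) * rng.normal()
        v += dt * (-(v - v_rest) + drive) / tau + noise
        if v >= v_thresh:                     # threshold crossing -> spike
            spikes.append(step * dt)
            v = v_reset
    print("firing rate:", len(spikes), "Hz")
    ```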

  13. Integrate-and-fire vs Poisson models of LGN input to V1 cortex: noisier inputs reduce orientation selectivity.

    PubMed

    Lin, I-Chun; Xing, Dajun; Shapley, Robert

    2012-12-01

    One of the reasons the visual cortex has attracted the interest of computational neuroscience is that it has well-defined inputs. The lateral geniculate nucleus (LGN) of the thalamus is the source of visual signals to the primary visual cortex (V1). Most large-scale cortical network models approximate the spike trains of LGN neurons as simple Poisson point processes. However, many studies have shown that neurons in the early visual pathway are capable of spiking with high temporal precision and their discharges are not Poisson-like. To gain an understanding of how response variability in the LGN influences the behavior of V1, we study response properties of model V1 neurons that receive purely feedforward inputs from LGN cells modeled either as noisy leaky integrate-and-fire (NLIF) neurons or as inhomogeneous Poisson processes. We first demonstrate that the NLIF model is capable of reproducing many experimentally observed statistical properties of LGN neurons. Then we show that a V1 model in which the LGN input to a V1 neuron is modeled as a group of NLIF neurons produces higher orientation selectivity than the one with Poisson LGN input. The second result implies that statistical characteristics of LGN spike trains are important for V1's function. We conclude that physiologically motivated models of V1 need to include more realistic LGN spike trains that are less noisy than inhomogeneous Poisson processes.

  14. Context effects on second-language learning of tonal contrasts.

    PubMed

    Chang, Charles B; Bowles, Anita R

    2015-12-01

    Studies of lexical tone learning generally focus on monosyllabic contexts, while reports of phonetic learning benefits associated with input variability are based largely on experienced learners. This study trained inexperienced learners on Mandarin tonal contrasts to test two hypotheses regarding the influence of context and variability on tone learning. The first hypothesis was that increased phonetic variability of tones in disyllabic contexts makes initial tone learning more challenging in disyllabic than monosyllabic words. The second hypothesis was that the learnability of a given tone varies across contexts due to differences in tonal variability. Results of a word learning experiment supported both hypotheses: tones were acquired less successfully in disyllables than in monosyllables, and the relative difficulty of disyllables was closely related to contextual tonal variability. These results indicate limited relevance of monosyllable-based data on Mandarin learning for the disyllabic majority of the Mandarin lexicon. Furthermore, in the short term, variability can diminish learning; its effects are not necessarily beneficial but dependent on acquisition stage and other learner characteristics. These findings thus highlight the importance of considering contextual variability and the interaction between variability and type of learner in the design, interpretation, and application of research on phonetic learning.

  15. Improving permafrost distribution modelling using feature selection algorithms

    NASA Astrophysics Data System (ADS)

    Deluigi, Nicola; Lambiel, Christophe; Kanevski, Mikhail

    2016-04-01

    The availability of an increasing number of spatial data on the occurrence of mountain permafrost allows the employment of machine learning (ML) classification algorithms for modelling the distribution of the phenomenon. One of the major problems when dealing with high-dimensional datasets is the number of input features (variables) involved. Applying ML classification algorithms to this large number of variables leads to the risk of overfitting, with the consequence of poor generalization/prediction. For this reason, applying feature selection (FS) techniques helps simplify the number of factors required and improves knowledge of the adopted features and their relation with the studied phenomenon. Moreover, removing irrelevant or redundant variables from the dataset effectively improves the quality of the ML prediction. This research deals with a comparative analysis of permafrost distribution models supported by FS variable importance assessment. The input dataset (dimension = 20-25, 10 m spatial resolution) was constructed using landcover maps, climate data and DEM-derived variables (altitude, aspect, slope, terrain curvature, solar radiation, etc.). It was completed with permafrost evidence (geophysical and thermal data and rock glacier inventories) that serves as permafrost training data. The FS algorithms used indicated which variables appeared less statistically important for permafrost presence/absence. Three different algorithms were compared: Information Gain (IG), Correlation-based Feature Selection (CFS) and Random Forest (RF). IG is a filter technique that evaluates the worth of a predictor by measuring the information gain with respect to permafrost presence/absence. CFS, in contrast, evaluates the worth of a subset of predictors by considering the individual predictive ability of each variable along with the degree of redundancy between them. Finally, RF is a ML algorithm that performs FS as part of its overall operation. It operates by constructing a large collection of decorrelated classification trees, and then predicts permafrost occurrence through a majority vote. With the so-called out-of-bag (OOB) error estimate, the classification of permafrost data can be validated and the contribution of each predictor assessed. The performances of the compared permafrost distribution models (computed on independent testing sets) increased when FS algorithms were applied to the original dataset and irrelevant or redundant variables were removed. As a consequence, the process provided faster and more cost-effective predictors and a better understanding of the underlying structures residing in permafrost data. Our work demonstrates the usefulness of a feature selection step prior to applying a machine learning algorithm. In fact, permafrost predictors could be ranked not only on their heuristic and subjective importance (expert knowledge), but also on their statistical relevance in relation to the permafrost distribution.
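
    In the spirit of the RF-based ranking described above, the following sketch ranks synthetic predictors by random-forest importance and reports the OOB score; the data are a stand-in for the real permafrost dataset.

      # Random-forest feature ranking with OOB validation, in the spirit of
      # the comparison described (synthetic stand-in for the permafrost data).
      import numpy as np
      from sklearn.ensemble import RandomForestClassifier

      rng = np.random.default_rng(2)
      n = 1000
      X = rng.normal(size=(n, 8))        # e.g. altitude, aspect, slope, ...
      # Presence/absence depends mainly on the first two predictors.
      y = (X[:, 0] + 0.5*X[:, 1] + 0.3*rng.normal(size=n)) > 0

      rf = RandomForestClassifier(n_estimators=500, oob_score=True,
                                  random_state=0).fit(X, y)
      print("OOB accuracy:", round(rf.oob_score_, 3))
      ranking = np.argsort(rf.feature_importances_)[::-1]
      print("predictors ranked by importance:", ranking)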

  16. Innovations in Basic Flight Training for the Indonesian Air Force

    DTIC Science & Technology

    1990-12-01

    microeconomic theory that could approximate the optimum mix of training hours between an aircraft and simulator, and therefore improve cost effectiveness...The microeconomic theory being used is normally employed when showing production with two variable inputs. An example of variable inputs would be labor...NAS Corpus Christi, Texas, Aerodynamics of the T-34C, 1989. 26. Naval Air Training Command, NAS Corpus Christi, Texas, Meteorological Theory Workbook

  17. A Bayesian approach to model structural error and input variability in groundwater modeling

    NASA Astrophysics Data System (ADS)

    Xu, T.; Valocchi, A. J.; Lin, Y. F. F.; Liang, F.

    2015-12-01

    Effective water resource management typically relies on numerical models to analyze groundwater flow and solute transport processes. Model structural error (due to simplification and/or misrepresentation of the "true" environmental system) and input forcing variability (which commonly arises since some inputs are uncontrolled or estimated with high uncertainty) are ubiquitous in groundwater models. Calibration that overlooks errors in model structure and input data can lead to biased parameter estimates and compromised predictions. We present a fully Bayesian approach for a complete assessment of uncertainty for spatially distributed groundwater models. The approach explicitly recognizes stochastic input and uses data-driven error models based on nonparametric kernel methods to account for model structural error. We employ exploratory data analysis to assist in specifying informative prior for error models to improve identifiability. The inference is facilitated by an efficient sampling algorithm based on DREAM-ZS and a parameter subspace multiple-try strategy to reduce the required number of forward simulations of the groundwater model. We demonstrate the Bayesian approach through a synthetic case study of surface-ground water interaction under changing pumping conditions. It is found that explicit treatment of errors in model structure and input data (groundwater pumping rate) has substantial impact on the posterior distribution of groundwater model parameters. Using error models reduces predictive bias caused by parameter compensation. In addition, input variability increases parametric and predictive uncertainty. The Bayesian approach allows for a comparison among the contributions from various error sources, which could inform future model improvement and data collection efforts on how to best direct resources towards reducing predictive uncertainty.
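
    The study uses DREAM-ZS with a parameter-subspace multiple-try strategy; as a conceptual stand-in, the toy sketch below jointly infers a model parameter and an additive structural-bias term with a plain Metropolis sampler on synthetic data.

      # Toy Metropolis sampler illustrating joint inference of a model
      # parameter and a structural-bias term (not the DREAM-ZS algorithm).
      import numpy as np

      rng = np.random.default_rng(3)
      q_true, bias_true = 2.0, 0.5
      x = np.linspace(0, 1, 20)                  # forcing (known here)
      obs = q_true*x + bias_true + rng.normal(0, 0.1, x.size)

      def log_post(theta):
          q, b = theta
          resid = obs - (q*x + b)
          # Gaussian likelihood plus weakly informative priors on q and b.
          return -0.5*np.sum((resid/0.1)**2) - 0.5*(q/10)**2 - 0.5*(b/10)**2

      theta = np.array([1.0, 0.0])
      lp, chain = log_post(theta), []
      for _ in range(5000):
          prop = theta + rng.normal(0, 0.05, 2)  # random-walk proposal
          lp_prop = log_post(prop)
          if np.log(rng.uniform()) < lp_prop - lp:
              theta, lp = prop, lp_prop
          chain.append(theta.copy())
      print(np.mean(chain[1000:], axis=0))       # posterior means of (q, b)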

  18. Hourly predictive Levenberg-Marquardt ANN and multi linear regression models for predicting of dew point temperature

    NASA Astrophysics Data System (ADS)

    Zounemat-Kermani, Mohammad

    2012-08-01

    In this study, the ability of two models, multi linear regression (MLR) and a Levenberg-Marquardt (LM) feed-forward neural network, to estimate the hourly dew point temperature was examined. Dew point temperature is the temperature at which water vapor in the air condenses into liquid. This temperature can be useful in estimating meteorological variables such as fog, rain, snow, dew, and evapotranspiration and in investigating agronomical issues such as stomatal closure in plants. The availability of hourly records of climatic data (air temperature, relative humidity and pressure) which could be used to predict dew point temperature motivated the modelling exercise. Additionally, the wind vector (wind speed magnitude and direction) and a conceptual input of weather condition were employed as further input variables. Three quantitative standard statistical performance evaluation measures, i.e. the root mean squared error, the mean absolute error, and the absolute logarithmic Nash-Sutcliffe efficiency coefficient (|Log(NS)|), were employed to evaluate the performances of the developed models. The results showed that applying the wind vector and weather condition as input vectors along with the meteorological variables could slightly increase the ANN and MLR predictive accuracy. The results also revealed that LM-NN was superior to the MLR model and that the best performance was obtained by considering all potential input variables, in terms of the different evaluation criteria.
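
    For reference, a common closed-form baseline for dew point is the Magnus-type approximation from air temperature and relative humidity; the constants below are the standard empirical values, not taken from this study.

      # Magnus-type approximation for dew point from air temperature (degC)
      # and relative humidity (%); a common baseline against which
      # data-driven models such as those in this record can be compared.
      import numpy as np

      def dew_point_c(t_air_c, rh_pct, a=17.27, b=237.7):
          """Magnus approximation; a, b are standard empirical constants."""
          gamma = np.log(rh_pct/100.0) + a*t_air_c/(b + t_air_c)
          return b*gamma/(a - gamma)

      print(round(dew_point_c(25.0, 60.0), 1))   # roughly 16-17 degC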

  19. Rotorcraft Optimization Tools: Incorporating Rotorcraft Design Codes into Multi-Disciplinary Design, Analysis, and Optimization

    NASA Technical Reports Server (NTRS)

    Meyn, Larry A.

    2018-01-01

    One of the goals of NASA's Revolutionary Vertical Lift Technology Project (RVLT) is to provide validated tools for multidisciplinary design, analysis and optimization (MDAO) of vertical lift vehicles. As part of this effort, the software package RotorCraft Optimization Tools (RCOTOOLS) is being developed to facilitate incorporating key rotorcraft conceptual design codes into optimizations using the OpenMDAO multidisciplinary optimization framework written in Python. RCOTOOLS, also written in Python, currently supports the incorporation of the NASA Design and Analysis of RotorCraft (NDARC) vehicle sizing tool and the Comprehensive Analytical Model of Rotorcraft Aerodynamics and Dynamics II (CAMRAD II) analysis tool into OpenMDAO-driven optimizations. Both of these tools use detailed, file-based inputs and outputs, so RCOTOOLS provides software wrappers to update input files with new design variable values, execute these codes and then extract specific response variable values from the file outputs. These wrappers are designed to be flexible and easy to use. RCOTOOLS also provides several utilities to aid in optimization model development, including Graphical User Interface (GUI) tools for browsing input and output files in order to locate the text strings that identify specific variables as optimization input and response variables. This paper provides an overview of RCOTOOLS and its use.
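
    The wrapper pattern described (update a text input file, run the code, parse the output) can be sketched as follows; the file names, executable, and output regex are hypothetical illustrations, not the RCOTOOLS API.

      # Sketch of a file-based wrapper: fill a text input template, run the
      # analysis code, and pull one response value back out (the file names,
      # executable name, and regex are hypothetical).
      import re
      import subprocess

      def run_case(gross_weight):
          template = open("ndarc_input.template").read()
          with open("case.input", "w") as f:
              f.write(template.replace("{GROSS_WEIGHT}", str(gross_weight)))
          subprocess.run(["ndarc", "case.input"], check=True)
          out = open("case.output").read()
          # Extract e.g. "Mission fuel = 1234.5 lb" from the output listing.
          return float(re.search(r"Mission fuel\s*=\s*([\d.]+)", out).group(1))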

  20. Soft Mixer Assignment in a Hierarchical Generative Model of Natural Scene Statistics

    PubMed Central

    Schwartz, Odelia; Sejnowski, Terrence J.; Dayan, Peter

    2010-01-01

    Gaussian scale mixture models offer a top-down description of signal generation that captures key bottom-up statistical characteristics of filter responses to images. However, the pattern of dependence among the filters for this class of models is prespecified. We propose a novel extension to the gaussian scale mixture model that learns the pattern of dependence from observed inputs and thereby induces a hierarchical representation of these inputs. Specifically, we propose that inputs are generated by gaussian variables (modeling local filter structure), multiplied by a mixer variable that is assigned probabilistically to each input from a set of possible mixers. We demonstrate inference of both components of the generative model, for synthesized data and for different classes of natural images, such as a generic ensemble and faces. For natural images, the mixer variable assignments show invariances resembling those of complex cells in visual cortex; the statistics of the gaussian components of the model are in accord with the outputs of divisive normalization models. We also show how our model helps interrelate a wide range of models of image statistics and cortical processing. PMID:16999575
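
    A generative sketch of the mixer-assignment idea follows, with illustrative sizes and distributions; the gamma mixer and the hard per-filter assignment are simplifying assumptions, not the paper's inference model.

      # Generative sketch of mixer assignment: each input is a gaussian
      # variable multiplied by one of several shared mixer variables.
      import numpy as np

      rng = np.random.default_rng(4)
      n_inputs, n_samples, n_mixers = 16, 1000, 3

      g = rng.normal(size=(n_samples, n_inputs))  # local gaussian structure
      mixers = rng.gamma(2.0, 1.0, size=(n_samples, n_mixers))
      # One mixer index drawn per input (hard assignment for simplicity).
      assign = rng.integers(0, n_mixers, size=n_inputs)
      x = g * mixers[:, assign]                   # observed filter responses

      # The shared mixer induces the heavy tails and energy dependence seen
      # in filter responses to natural images.
      print(np.corrcoef(x[:, 0]**2, x[:, 1]**2)[0, 1])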

  1. Variable input observer for state estimation of high-rate dynamics

    NASA Astrophysics Data System (ADS)

    Hong, Jonathan; Cao, Liang; Laflamme, Simon; Dodson, Jacob

    2017-04-01

    High-rate systems operating in the 10 μs to 10 ms timescale are likely to experience damaging effects due to rapid environmental changes (e.g., turbulence, ballistic impact). Some of these systems could benefit from real-time state estimation to enable their full potential. Examples of such systems include blast mitigation strategies, automotive airbag technologies, and hypersonic vehicles. Particular challenges in high-rate state estimation include: (1) complex time-varying nonlinearities of the system (e.g., noise, uncertainty, and disturbance); (2) rapid environmental changes; and (3) the requirement of a high convergence rate. Here, we propose using a Variable Input Observer (VIO) concept to vary the input space as the event unfolds, because rapid changes occur in a system experiencing high-rate dynamics. To investigate the VIO's potential, a VIO-based neuro-observer is constructed and studied using experimental data collected from a laboratory impact test. Results demonstrate that the input space is unique to different impact conditions, and that adjusting the input space throughout the dynamic event produces better estimations than a traditional fixed input space strategy.

  2. Predicting the Fine Particle Fraction of Dry Powder Inhalers Using Artificial Neural Networks.

    PubMed

    Muddle, Joanna; Kirton, Stewart B; Parisini, Irene; Muddle, Andrew; Murnane, Darragh; Ali, Jogoth; Brown, Marc; Page, Clive; Forbes, Ben

    2017-01-01

    Dry powder inhalers are increasingly popular for delivering drugs to the lungs for the treatment of respiratory diseases, but are complex products with multivariate performance determinants. Heuristic product development guided by in vitro aerosol performance testing is a costly and time-consuming process. This study investigated the feasibility of using artificial neural networks (ANNs) to predict fine particle fraction (FPF) based on formulation and device variables. Thirty-one ANN architectures were evaluated for their ability to predict experimentally determined FPF for a self-consistent dataset containing salmeterol xinafoate and salbutamol sulfate dry powder inhalers (237 experimental observations). Principal component analysis was used to identify inputs that significantly affected FPF. Orthogonal arrays (OAs) were used to design ANN architectures, optimized using the Taguchi method. The primary OA ANN r² values ranged between 0.46 and 0.90 and the secondary OA increased the r² values (0.53-0.93). The optimum ANN (9-4-1 architecture, average r² = 0.92 ± 0.02) included active pharmaceutical ingredient, formulation, and device inputs identified by principal component analysis, which reflected the recognized importance and interdependency of these factors for orally inhaled product performance. The Taguchi method was effective at identifying a successful architecture with the potential for development as a useful generic inhaler ANN model, although this would require much larger datasets and more variable inputs. Copyright © 2016 American Pharmacists Association®. Published by Elsevier Inc. All rights reserved.
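
    A minimal sketch of the PCA-based input screening mentioned above, on synthetic stand-in data (the dimensions are placeholders for the formulation/device dataset):

      # PCA screening of candidate inputs: the leading components and their
      # largest loadings point to the influential variables.
      import numpy as np
      from sklearn.decomposition import PCA
      from sklearn.preprocessing import StandardScaler

      rng = np.random.default_rng(10)
      X = rng.normal(size=(237, 9))       # 237 observations, 9 candidates
      pca = PCA().fit(StandardScaler().fit_transform(X))
      print(pca.explained_variance_ratio_.round(2))
      print(np.abs(pca.components_[:2]).argmax(axis=1))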

  3. Aeroacoustic Codes for Rotor Harmonic and BVI Noise. CAMRAD.Mod1/HIRES: Methodology and Users' Manual

    NASA Technical Reports Server (NTRS)

    Boyd, D. Douglas, Jr.; Brooks, Thomas F.; Burley, Casey L.; Jolly, J. Ralph, Jr.

    1998-01-01

    This document details the methodology and use of the CAMRAD.Mod1/HIRES codes, which were developed at NASA Langley Research Center for the prediction of helicopter harmonic and Blade-Vortex Interaction (BVI) noise. CAMRAD.Mod1 is a substantially modified version of the performance/trim/wake code CAMRAD. High resolution blade loading is determined in post-processing by HIRES and an associated indicial aerodynamics code. Extensive capabilities of importance to noise prediction accuracy are documented, including a new multi-core tip vortex roll-up wake model, higher harmonic and individual blade control, tunnel and fuselage correction input, diagnostic blade motion input, and interfaces for acoustic and CFD aerodynamics codes. Modifications and new code capabilities are documented with examples. A users' job preparation guide and listings of variables and namelists are given.

  4. A unified classification model for modeling of seismic liquefaction potential of soil based on CPT

    PubMed Central

    Samui, Pijush; Hariharan, R.

    2014-01-01

    The evaluation of liquefaction potential of soil due to an earthquake is an important step in geosciences. This article examines the capability of Minimax Probability Machine (MPM) for the prediction of seismic liquefaction potential of soil based on the Cone Penetration Test (CPT) data. The dataset has been taken from Chi–Chi earthquake. MPM is developed based on the use of hyperplanes. It has been adopted as a classification tool. This article uses two models (MODEL I and MODEL II). MODEL I employs Cone Resistance (qc) and Cyclic Stress Ratio (CSR) as input variables. qc and Peak Ground Acceleration (PGA) have been taken as inputs for MODEL II. The developed MPM gives 100% accuracy. The results show that the developed MPM can predict liquefaction potential of soil based on qc and PGA. PMID:26199749

  5. A unified classification model for modeling of seismic liquefaction potential of soil based on CPT.

    PubMed

    Samui, Pijush; Hariharan, R

    2015-07-01

    The evaluation of liquefaction potential of soil due to an earthquake is an important step in geosciences. This article examines the capability of Minimax Probability Machine (MPM) for the prediction of seismic liquefaction potential of soil based on the Cone Penetration Test (CPT) data. The dataset has been taken from Chi-Chi earthquake. MPM is developed based on the use of hyperplanes. It has been adopted as a classification tool. This article uses two models (MODEL I and MODEL II). MODEL I employs Cone Resistance (qc) and Cyclic Stress Ratio (CSR) as input variables. qc and Peak Ground Acceleration (PGA) have been taken as inputs for MODEL II. The developed MPM gives 100% accuracy. The results show that the developed MPM can predict liquefaction potential of soil based on qc and PGA.

  6. A novel approach for prediction of tacrolimus blood concentration in liver transplantation patients in the intensive care unit through support vector regression.

    PubMed

    Van Looy, Stijn; Verplancke, Thierry; Benoit, Dominique; Hoste, Eric; Van Maele, Georges; De Turck, Filip; Decruyenaere, Johan

    2007-01-01

    Tacrolimus is an important immunosuppressive drug for organ transplantation patients. It has a narrow therapeutic range, toxic side effects, and a blood concentration with wide intra- and interindividual variability. Hence, it is of the utmost importance to monitor tacrolimus blood concentration, thereby ensuring clinical effect and avoiding toxic side effects. Prediction models for tacrolimus blood concentration can improve clinical care by optimizing monitoring of these concentrations, especially in the initial phase after transplantation during intensive care unit (ICU) stay. This is the first study in the ICU in which support vector machines, as a new data modeling technique, are investigated and tested for their capability to predict tacrolimus blood concentration. Linear support vector regression (SVR) and nonlinear radial basis function (RBF) SVR are compared with multiple linear regression (MLR). Tacrolimus blood concentrations, together with 35 other relevant variables from 50 liver transplantation patients, were extracted from our ICU database. This resulted in a dataset of 457 blood samples, on average between 9 and 10 samples per patient, finally resulting in a database of more than 16,000 data values. Nonlinear RBF SVR, linear SVR, and MLR were performed after selection of clinically relevant input variables and model parameters. Differences between observed and predicted tacrolimus blood concentrations were calculated. Prediction accuracy of the three methods was compared after fivefold cross-validation (Friedman test and Wilcoxon signed rank analysis). Linear SVR and nonlinear RBF SVR had mean absolute differences between observed and predicted tacrolimus blood concentrations of 2.31 ng/ml (standard deviation [SD] 2.47) and 2.38 ng/ml (SD 2.49), respectively. MLR had a mean absolute difference of 2.73 ng/ml (SD 3.79). The difference between linear SVR and MLR was statistically significant (p < 0.001). RBF SVR had the advantage of requiring only 2 input variables to perform this prediction in comparison to 15 and 16 variables needed by linear SVR and MLR, respectively. This is an indication of the superior prediction capability of nonlinear SVR. Prediction of tacrolimus blood concentration with linear and nonlinear SVR was excellent, and accuracy was superior in comparison with an MLR model.
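
    This kind of comparison can be reproduced compactly on synthetic stand-in data; the features and target below are placeholders for the ICU variables, not the study's dataset.

      # Comparison of linear SVR, RBF SVR, and multiple linear regression
      # under 5-fold cross-validation (synthetic stand-in for the ICU data).
      import numpy as np
      from sklearn.svm import SVR
      from sklearn.linear_model import LinearRegression
      from sklearn.model_selection import cross_val_score
      from sklearn.preprocessing import StandardScaler
      from sklearn.pipeline import make_pipeline

      rng = np.random.default_rng(5)
      X = rng.normal(size=(457, 15))             # 457 samples, 15 inputs
      y = 3 + X[:, 0] - 0.5*X[:, 1]**2 + rng.normal(0, 0.5, 457)

      for name, est in [("linear SVR", SVR(kernel="linear")),
                        ("RBF SVR", SVR(kernel="rbf")),
                        ("MLR", LinearRegression())]:
          model = make_pipeline(StandardScaler(), est)
          mae = -cross_val_score(model, X, y, cv=5,
                                 scoring="neg_mean_absolute_error")
          print(name, round(mae.mean(), 2))      # mean absolute error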

  7. Spatio-temporal Variability of Stemflow Volume in a Beech-Yellow Poplar Forest in Relation to Tree Species and Size

    NASA Astrophysics Data System (ADS)

    Levia, D. F.; van Stan, J. T.; Mage, S.; Hauske, P. W.

    2009-05-01

    Stemflow is a localized point input at the base of trees that can account for more than 10% of the incident gross precipitation in deciduous forests. Despite the fact that stemflow has been documented to be of hydropedological importance, affecting soil moisture patterns, soil erosion, soil chemistry, and the distribution of understory vegetation, our current understanding of the temporal variability of stemflow yield is poor. The aim of the present study, conducted in a beech-yellow poplar forest in northeastern Maryland (39°42'N, 75°50'W), was to better understand the temporal variability of stemflow production from Fagus grandifolia Ehrh. (American beech) and Liriodendron tulipifera L. (yellow poplar) in relation to meteorological conditions and season, in order to better assess its importance to canopy-soil interactions. The experimental plot had a stand density of 225 trees/ha, a stand basal area of 36.8 sq. m/ha, a mean dbh of 40.8 cm, and a mean tree height of 27.8 m. The stand leaf area index (LAI) is 5.3. Yellow poplar and beech constitute three-quarters of the stand basal area. Using a high resolution (5 min) sequential stemflow sampling network, consisting of tipping-bucket gauges interfaced with a Campbell CR1000 datalogger, the temporal variability of stemflow yield was examined. Beech produced significantly larger stemflow amounts than yellow poplar. The amount of stemflow produced by individual beech trees in 5 minute intervals reached three liters. Stemflow yield and funneling ratios decreased with increasing rain intensity. The temporal variability of stemflow inputs was affected by the nature of incident gross rainfall, season, tree species, tree size, and bark water storage capacity. Stemflow was greater during the leafless period than the full leaf period. Stemflow yield was greater for larger beech trees and smaller yellow poplar trees, owing to differences in bark water storage capacity. The findings of this study indicate that stemflow has a detectable effect on soil moisture patterning and the hydraulic conductivity of forest soils.

  8. Robust integral variable structure controller and pulse-width pulse-frequency modulated input shaper design for flexible spacecraft with mismatched uncertainty/disturbance.

    PubMed

    Hu, Qinglei

    2007-10-01

    This paper presents a dual-stage control system design method for flexible spacecraft attitude maneuvering control by use of on-off thrusters and active vibration control by input shaping. In this approach, the attitude control system and the vibration suppression were designed separately using a lower-order model. As a stepping stone, an integral variable structure controller has been designed, under the assumption that the upper bounds of the mismatched lumped perturbation are known, which ensures exponential convergence of attitude angle and angular velocity in the presence of bounded uncertainty/disturbances. To reconstruct estimates of the system states for use in a full-information variable structure control law, an asymptotic variable structure observer is also employed. In addition, the thruster output is pulse-width pulse-frequency modulated so that the output profile is similar to continuous control histories. For actively suppressing the induced vibration, the input shaping technique is used to modify the existing command so that less vibration is caused by the command itself; this requires information only about the vibration frequency and damping of the closed-loop system. The rationale behind this hybrid control scheme is that the integral variable structure controller can achieve precision pointing even in the presence of uncertainties/disturbances, whereas the shaped input attenuator actively suppresses the undesirable vibrations excited by rapid maneuvers. Simulation results for the spacecraft model show precise attitude control and vibration suppression.

  9. A waste characterisation procedure for ADM1 implementation based on degradation kinetics.

    PubMed

    Girault, R; Bridoux, G; Nauleau, F; Poullain, C; Buffet, J; Steyer, J-P; Sadowski, A G; Béline, F

    2012-09-01

    In this study, a procedure accounting for degradation kinetics was developed to split the total COD of a substrate into each input state variable required for Anaerobic Digestion Model n°1. The procedure is based on the combination of batch experimental degradation tests ("anaerobic respirometry") and numerical interpretation of the results obtained (optimisation of the ADM1 input state variable set). The effects of the main operating parameters, such as the substrate to inoculum ratio in batch experiments and the origin of the inoculum, were investigated. Combined with biochemical fractionation of the total COD of substrates, this method enabled determination of an ADM1-consistent input state variable set for each substrate with affordable identifiability. The substrate to inoculum ratio in the batch experiments and the origin of the inoculum influenced input state variables. However, based on results modelled for a CSTR fed with the substrate concerned, these effects were not significant. Indeed, if the optimal ranges of these operational parameters are respected, uncertainty in COD fractionation is mainly limited to temporal variability of the properties of the substrates. As the method is based on kinetics and is easy to implement for a wide range of substrates, it is a very promising way to numerically predict the effect of design parameters on the efficiency of an anaerobic CSTR. This method thus promotes the use of modelling for the design and optimisation of anaerobic processes. Copyright © 2012 Elsevier Ltd. All rights reserved.

  10. A Froude-scaled model of a bedrock-alluvial channel reach: 2. Sediment cover

    NASA Astrophysics Data System (ADS)

    Hodge, Rebecca A.; Hoey, Trevor B.

    2016-09-01

    Previous research into sediment cover in bedrock-alluvial channels has focussed on total sediment cover, rather than the spatial distribution of cover within the channel. The latter is important because it determines the bedrock areas that are protected from erosion and the start and end of sediment transport pathways. We use a 1:10 Froude-scaled model of an 18 by 9 m reach of a bedrock-alluvial channel to study the production and erosion of sediment patches and hence the spatial relationships between flow, bed topography, and sediment dynamics. The hydraulic data from this bed are presented in the companion paper. In these experiments specified volumes of sediment were supplied at the upstream edge of the model reach as single inputs, at each of a range of discharges. This sediment formed patches, and once these stabilized, flow was steadily increased to erode the patches. In summary: (1) patches tend to initiate in the lowest areas of the bed, but areas of topographically induced high flow velocity can inhibit patch development; (2) at low sediment inputs the extent of sediment patches is determined by the bed topography and can be insensitive to the exact volume of sediment supplied; and (3) at higher sediment inputs more extensive patches are produced, stabilized by grain-grain and grain-flow interactions and less influenced by the bed topography. Bedrock topography can therefore be an important constraint on sediment patch dynamics, and topographic metrics are required that incorporate its within-reach variability. The magnitude and timing of sediment input events controls reach-scale sediment cover.

  11. Quantifying the importance of spatial resolution and other factors through global sensitivity analysis of a flood inundation model

    NASA Astrophysics Data System (ADS)

    Thomas Steven Savage, James; Pianosi, Francesca; Bates, Paul; Freer, Jim; Wagener, Thorsten

    2016-11-01

    Where high-resolution topographic data are available, modelers are faced with the decision of whether it is better to spend computational resource on resolving topography at finer resolutions or on running more simulations to account for various uncertain input factors (e.g., model parameters). In this paper we apply global sensitivity analysis to explore how influential the choice of spatial resolution is when compared to uncertainties in the Manning's friction coefficient parameters, the inflow hydrograph, and those stemming from the coarsening of topographic data used to produce Digital Elevation Models (DEMs). We apply the hydraulic model LISFLOOD-FP to produce several temporally and spatially variable model outputs that represent different aspects of flood inundation processes, including flood extent, water depth, and time of inundation. We find that the most influential input factor for flood extent predictions changes during the flood event, starting with the inflow hydrograph during the rising limb before switching to the channel friction parameter during peak flood inundation, and finally to the floodplain friction parameter during the drying phase of the flood event. Spatial resolution and uncertainty introduced by resampling topographic data to coarser resolutions are much more important for water depth predictions, which are also sensitive to different input factors spatially and temporally. Our findings indicate that the sensitivity of LISFLOOD-FP predictions is more complex than previously thought. Consequently, the input factors that modelers should prioritize will differ depending on the model output assessed, and the location and time of when and where this output is most relevant.

  12. Probabilistic dose-response modeling: case study using dichloromethane PBPK model results.

    PubMed

    Marino, Dale J; Starr, Thomas B

    2007-12-01

    A revised assessment of dichloromethane (DCM) has recently been reported that examines the influence of human genetic polymorphisms on cancer risks using deterministic PBPK and dose-response modeling in the mouse combined with probabilistic PBPK modeling in humans. This assessment utilized Bayesian techniques to optimize kinetic variables in mice and humans with mean values from posterior distributions used in the deterministic modeling in the mouse. To supplement this research, a case study was undertaken to examine the potential impact of probabilistic rather than deterministic PBPK and dose-response modeling in mice on subsequent unit risk factor (URF) determinations. Four separate PBPK cases were examined based on the exposure regimen of the NTP DCM bioassay. These were (a) Same Mouse (single draw of all PBPK inputs for both treatment groups); (b) Correlated BW-Same Inputs (single draw of all PBPK inputs for both treatment groups except for bodyweights (BWs), which were entered as correlated variables); (c) Correlated BW-Different Inputs (separate draws of all PBPK inputs for both treatment groups except that BWs were entered as correlated variables); and (d) Different Mouse (separate draws of all PBPK inputs for both treatment groups). Monte Carlo PBPK inputs reflect posterior distributions from Bayesian calibration in the mouse that had been previously reported. A minimum of 12,500 PBPK iterations were undertaken, in which dose metrics, i.e., mg DCM metabolized by the GST pathway/L tissue/day for lung and liver, were determined. For dose-response modeling, these metrics were combined with NTP tumor incidence data that were randomly selected from binomial distributions. Resultant potency factors (0.1/ED(10)) were coupled with probabilistic PBPK modeling in humans that incorporated genetic polymorphisms to derive URFs. Results show that there was relatively little difference, i.e., <10% in central tendency and upper percentile URFs, regardless of the case evaluated. Independent draws of PBPK inputs resulted in slightly higher URFs. Results were also comparable to corresponding values from the previously reported deterministic mouse PBPK and dose-response modeling approach that used LED(10)s to derive potency factors. This finding indicated that the adjustment from ED(10) to LED(10) in the deterministic approach for DCM compensated for variability resulting from probabilistic PBPK and dose-response modeling in the mouse. Finally, results show a similar degree of variability in DCM risk estimates from a number of different sources, including the current effort, even though these estimates were developed using very different techniques. Given the variety of different approaches involved, 95th percentile-to-mean risk estimate ratios of 2.1-4.1 represent reasonable bounds on variability estimates regarding probabilistic assessments of DCM.

  13. The role of updraft velocity in temporal variability of cloud hydrometeor number

    NASA Astrophysics Data System (ADS)

    Sullivan, Sylvia; Nenes, Athanasios; Lee, Dong Min; Oreopoulos, Lazaros

    2016-04-01

    Significant effort has been dedicated to incorporating direct aerosol-cloud links, through parameterization of liquid droplet activation and ice crystal nucleation, within climate models. This accomplishment has generated the need to understand which of the parameters affecting hydrometeor formation drive its variability in coupled climate simulations, as this provides the basis for optimal parameter estimation as well as robust comparison with data and other models. Sensitivity analysis alone does not address this issue, given that the importance of each parameter for hydrometeor formation depends on both its variance and its sensitivity. To address the above issue, we develop and use a series of attribution metrics defined with adjoint sensitivities to attribute the temporal variability in droplet and crystal number to important aerosol and dynamical parameters. This attribution analysis is done both for the NASA Global Modeling and Assimilation Office Goddard Earth Observing System Model, Version 5 and the National Center for Atmospheric Research Community Atmosphere Model Version 5.1. Within the GEOS simulation, up to 48% of temporal variability in output ice crystal number and 61% in droplet number can be attributed to input updraft velocity fluctuations, while for the CAM simulation, updraft fluctuations explain as much as 89% of the ice crystal number variability. These results suggest that vertical velocity is a very important, and at times dominant, driver of hydrometeor variability in both model frameworks. Yet observations of vertical velocity are seldom available (or used) to evaluate the vertical velocities in simulations; this contrasts strikingly with the amount and quality of data available for aerosol-related parameters. Consequently, there is a strong need for retrievals or measurements of vertical velocity to address this important knowledge gap, which requires a significant investment and effort by the atmospheric community. The attribution metrics, as a tool for understanding hydrometeor variability, can be instrumental in identifying the source of differences between models used for aerosol-cloud-climate interaction studies.
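
    A first-order version of such an attribution metric weighs each squared sensitivity by the input's variance; the numbers below are purely illustrative, with assumed point values standing in for the adjoint sensitivities.

      # First-order attribution of output variance:
      # share_i ~ (dN/dx_i)^2 * Var(x_i), normalized over all inputs.
      import numpy as np

      names = ["updraft velocity", "aerosol number", "aerosol hygroscopicity"]
      sens = np.array([120.0, 0.4, 50.0])   # dN/dx_i at the mean state
      var = np.array([0.25, 1e4, 0.01])     # Var(x_i) of each input

      contrib = sens**2 * var
      share = contrib / contrib.sum()
      for n, s in zip(names, share):
          print(f"{n}: {100*s:.0f}% of droplet-number variance")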

  14. Variable Delay Element For Jitter Control In High Speed Data Links

    DOEpatents

    Livolsi, Robert R.

    2002-06-11

    A circuit and method for decreasing the amount of jitter present at the receiver input of high-speed data links. The method uses a driver circuit for input from a high-speed data link comprising a logic circuit with a first section (1) which provides data latches; a second section (2) which provides a circuit that generates a pre-distorted output to compensate for level-dependent jitter, having an OR function element and a NOR function element, each of which is coupled to two inputs and to a variable delay element that provides a bi-modal delay for pulse-width pre-distortion; a third section (3) which provides a muxing circuit; and a fourth section (4) for clock distribution in the driver circuit. A fifth section is used for logic testing of the driver circuit.

  15. Comparison of modelling accuracy with and without exploiting automated optical monitoring information in predicting the treated wastewater quality.

    PubMed

    Tomperi, Jani; Leiviskä, Kauko

    2018-06-01

    Traditionally, modelling of the activated sludge process has been based solely on process measurements, but as interest in optically monitoring wastewater samples to characterize floc morphology has grown, the results of image analyses have in recent years been utilized more frequently to predict the characteristics of wastewater. This study shows that neither the traditional process measurements nor the automated optical monitoring variables alone yield the best predictive models for treated wastewater quality in a full-scale wastewater treatment plant; the optimal models, which capture the level of and changes in treated wastewater quality, are achieved by using these variables together. With this early warning, process operation can be optimized to avoid environmental damage and economic losses. The study also shows that specific optical monitoring variables are important for modelling a given quality parameter, regardless of the other input variables available.

  16. Using the Quantile Mapping to improve a weather generator

    NASA Astrophysics Data System (ADS)

    Chen, Y.; Themessl, M.; Gobiet, A.

    2012-04-01

    We developed a weather generator (WG) using statistical and stochastic methods, among them quantile mapping (QM), Monte Carlo sampling, auto-regression, and empirical orthogonal functions (EOFs). One of the important steps in the WG is the use of QM, through which all variables, whatever their original distributions, are transformed into normally distributed variables. The WG can therefore operate on normally distributed variables, which greatly facilitates the treatment of random numbers in the WG. Monte Carlo sampling and auto-regression are used to generate the realizations; EOFs are employed to preserve spatial relationships and the relationships between different meteorological variables. We have established a complete model named WGQM (weather generator and quantile mapping), which can be applied flexibly to generate daily or hourly time series. For example, with 30-year daily (hourly) data and 100-year monthly (daily) data as input, 100-year daily (hourly) data can be produced reasonably well. Evaluation experiments with WGQM have been carried out in the area of Austria and the evaluation results will be presented.
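
    The normal-score transform at the heart of such a generator can be sketched with an empirical CDF and the inverse normal CDF; the gamma-distributed sample below stands in for a real precipitation series.

      # Quantile-mapping a skewed variable to a standard normal via its
      # empirical CDF (the normal-score transform used inside WGQM-like
      # generators), plus the inverse map back to the original scale.
      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(6)
      precip = rng.gamma(0.8, 5.0, size=3000)    # skewed daily precipitation

      ranks = stats.rankdata(precip) / (precip.size + 1)  # empirical CDF
      z = stats.norm.ppf(ranks)                           # normal scores
      print(stats.skew(precip).round(2), stats.skew(z).round(2))

      # The inverse map sends simulated normal values back to precipitation:
      z_sim = rng.normal(size=10)
      p_sim = np.quantile(precip, stats.norm.cdf(z_sim))
      print(p_sim.round(1))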

  17. Regional sensitivities of seasonal snowpack to elevation, aspect, and vegetation cover in western North America

    NASA Astrophysics Data System (ADS)

    Tennant, Christopher J.; Harpold, Adrian A.; Lohse, Kathleen Ann; Godsey, Sarah E.; Crosby, Benjamin T.; Larsen, Laurel G.; Brooks, Paul D.; Van Kirk, Robert W.; Glenn, Nancy F.

    2017-08-01

    In mountains with seasonal snow cover, the effects of climate change on snowpack will be constrained by landscape-vegetation interactions with the atmosphere. Airborne lidar surveys used to estimate snow depth, topography, and vegetation were coupled with reanalysis climate products to quantify these interactions and to highlight potential snowpack sensitivities to climate and vegetation change across the western U.S. at Rocky Mountain (RM), Northern Basin and Range (NBR), and Sierra Nevada (SNV) sites. In forest and shrub areas, elevation captured the greatest amount of variability in snow depth (16-79%) but aspect explained more variability (11-40%) in alpine areas. Aspect was most important at RM sites where incoming shortwave to incoming net radiation (SW:NetR↓) was highest (˜0.5), capturing 17-37% of snow depth variability in forests and 32-37% in shrub areas. Forest vegetation height exhibited negative relationships with snow depth and explained 3-6% of its variability at sites with greater longwave inputs (NBR and SNV). Variability in the importance of physiography suggests differential sensitivities of snowpack to climate and vegetation change. The high SW:NetR↓ and importance of aspect suggests RM sites may be more responsive to decreases in SW:NetR↓ driven by warming or increases in humidity or cloud cover. Reduced canopy-cover could increase snow depths at SNV sites, and NBR and SNV sites are currently more sensitive to shifts from snow to rain. The consistent importance of aspect and elevation indicates that changes in SW:NetR↓ and the elevation of the rain/snow transition zone could have widespread and varied effects on western U.S. snowpacks.

  18. Uncertainty propagation of p-boxes using sparse polynomial chaos expansions

    NASA Astrophysics Data System (ADS)

    Schöbi, Roland; Sudret, Bruno

    2017-06-01

    In modern engineering, physical processes are modelled and analysed using advanced computer simulations, such as finite element models. Furthermore, concepts of reliability analysis and robust design are becoming popular, hence, making efficient quantification and propagation of uncertainties an important aspect. In this context, a typical workflow includes the characterization of the uncertainty in the input variables. In this paper, input variables are modelled by probability-boxes (p-boxes), accounting for both aleatory and epistemic uncertainty. The propagation of p-boxes leads to p-boxes of the output of the computational model. A two-level meta-modelling approach is proposed using non-intrusive sparse polynomial chaos expansions to surrogate the exact computational model and, hence, to facilitate the uncertainty quantification analysis. The capabilities of the proposed approach are illustrated through applications using a benchmark analytical function and two realistic engineering problem settings. They show that the proposed two-level approach allows for an accurate estimation of the statistics of the response quantity of interest using a small number of evaluations of the exact computational model. This is crucial in cases where the computational costs are dominated by the runs of high-fidelity computational models.
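
    Stripped of the PCE surrogate, the propagation itself can be illustrated with a double-loop Monte Carlo sketch, with the outer loop sampling the epistemic parameter and the inner loop the aleatory variability; the model and p-box below are toy assumptions.

      # Double-loop Monte Carlo propagation of a p-box; in the paper a
      # sparse PCE surrogate would replace the exact model for speed.
      import numpy as np

      rng = np.random.default_rng(7)
      model = lambda x: x**2 + 2*x             # stand-in computational model

      grid = np.linspace(-5, 40, 200)
      lo, hi = np.ones_like(grid), np.zeros_like(grid)
      for mu in np.linspace(-1, 1, 25):        # epistemic: mean in [-1, 1]
          y = model(rng.normal(mu, 1.0, 5000)) # aleatory: unit-variance normal
          cdf = np.searchsorted(np.sort(y), grid) / y.size
          lo, hi = np.minimum(lo, cdf), np.maximum(hi, cdf)
      # [lo, hi] now bound the output CDF: the response p-box.
      print(lo[100], hi[100])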

  19. Uncertainty propagation of p-boxes using sparse polynomial chaos expansions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schöbi, Roland, E-mail: schoebi@ibk.baug.ethz.ch; Sudret, Bruno, E-mail: sudret@ibk.baug.ethz.ch

    2017-06-15

    In modern engineering, physical processes are modelled and analysed using advanced computer simulations, such as finite element models. Furthermore, concepts of reliability analysis and robust design are becoming popular, hence, making efficient quantification and propagation of uncertainties an important aspect. In this context, a typical workflow includes the characterization of the uncertainty in the input variables. In this paper, input variables are modelled by probability-boxes (p-boxes), accounting for both aleatory and epistemic uncertainty. The propagation of p-boxes leads to p-boxes of the output of the computational model. A two-level meta-modelling approach is proposed using non-intrusive sparse polynomial chaos expansions tomore » surrogate the exact computational model and, hence, to facilitate the uncertainty quantification analysis. The capabilities of the proposed approach are illustrated through applications using a benchmark analytical function and two realistic engineering problem settings. They show that the proposed two-level approach allows for an accurate estimation of the statistics of the response quantity of interest using a small number of evaluations of the exact computational model. This is crucial in cases where the computational costs are dominated by the runs of high-fidelity computational models.« less

  20. A liquid lens switching-based motionless variable fiber-optic delay line

    NASA Astrophysics Data System (ADS)

    Khwaja, Tariq Shamim; Reza, Syed Azer; Sheikh, Mumtaz

    2018-05-01

    We present a Variable Fiber-Optic Delay Line (VFODL) module capable of imparting long variable delays by switching an input optical/RF signal between Single Mode Fiber (SMF) patch cords of different lengths through a pair of Electronically Controlled Tunable Lenses (ECTLs) resulting in a polarization-independent operation. Depending on intended application, the lengths of the SMFs can be chosen accordingly to achieve the desired VFODL operation dynamic range. If so desired, the state of the input signal polarization can be preserved with the use of commercially available polarization-independent ECTLs along with polarization-maintaining SMFs (PM-SMFs), resulting in an output polarization that is identical to the input. An ECTL-based design also improves power consumption and repeatability. The delay switching mechanism is electronically-controlled, involves no bulk moving parts, and can be fully-automated. The VFODL module is compact due to the use of small optical components and SMFs that can be packaged compactly.

  1. Sensitivity analysis and nonlinearity assessment of steam cracking furnace process

    NASA Astrophysics Data System (ADS)

    Rosli, M. N.; Sudibyo, Aziz, N.

    2017-11-01

    In this paper, a sensitivity analysis and nonlinearity assessment of a steam cracking furnace process are presented. For the sensitivity analysis, the fractional factorial design method is employed to analyze the effect of the input parameters, which consist of four manipulated variables and two disturbance variables, on the output variables, and to identify the interactions between parameters. The result of the factorial design method is used for screening to reduce the number of parameters and, subsequently, the complexity of the model. It shows that four of the six input parameters are significant. After the screening is completed, a step test is performed on the significant input parameters to assess the degree of nonlinearity of the system. The result shows that the system is highly nonlinear with respect to changes in the air-to-fuel ratio (AFR) and feed composition.
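
    A two-level fractional factorial of the kind used here can be built and analyzed in a few lines; the generators, factor labels, and response below are assumptions for illustration, not the furnace model.

      # 2^(6-2) fractional factorial screening: 16 runs for six two-level
      # factors (generators E = ABC, F = BCD), with main effects estimated
      # from a synthetic response standing in for the furnace model.
      import itertools
      import numpy as np

      base = np.array(list(itertools.product([-1, 1], repeat=4)))  # A B C D
      E = base[:, 0]*base[:, 1]*base[:, 2]
      F = base[:, 1]*base[:, 2]*base[:, 3]
      design = np.column_stack([base, E, F])                       # 16 x 6

      rng = np.random.default_rng(8)
      # Assume the response depends strongly on factors A and F (e.g. AFR
      # and feed composition) and only weakly on the rest.
      y = 5*design[:, 0] + 3*design[:, 5] + rng.normal(0, 0.5, 16)

      effects = design.T @ y / 8     # main effect = mean(+1) - mean(-1)
      for name, e in zip("ABCDEF", effects):
          print(name, round(e, 2))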

  2. Nitrogen Flux in Watersheds: The Role of Soil Distributions and Climate in Nitrogen Flux to the Coastal Ecosystems

    NASA Astrophysics Data System (ADS)

    Showers, W. J.; Reyes, M. M.; Genna, B. J.

    2009-12-01

    Quantifying the flux of nitrate from different landscape sources in watersheds is important for understanding the increased flux of nitrogen to coastal ecosystems. Recent technological advances in chemical sensor networks have demonstrated that chemical variability in aquatic environments is chronically under-sampled, and that many nutrient monitoring programs with monthly or daily sampling rates are inadequate to characterize the dominant seasonal, daily or semi-diurnal fluxes in watersheds. The RiverNet program has measured the nitrate flux in the Neuse River Basin, NC on a 15 minute interval over the past eight years. Significant diurnal variation has been observed in nitrate concentrations during high and low flow periods associated with waste water treatment plants in urban watersheds that is not present in agricultural watersheds. Discharge and N flux in the basin also have significant inter-annual variations associated with El Nino oscillations modified by the North Atlantic oscillation. Positive JMA and NAO indexes are associated with increased groundwater levels, nutrient fluxes, and estuary fish kills. To understand how climate oscillations affect discharge and nutrient fluxes, we have monitored runoff/drainages and groundwater inputs adjacent to a large waste application field over the past 4 years, and used the nitrate inputs as a tracer. Surface water runoff is well correlated with precipitation patterns and is the largest nutrient flux into the river. Groundwater inputs are variable spatially and temporally, and are controlled by geology and groundwater levels. Hydric soil spatial distributions are an excellent predictor of nutrient transport across landscapes and are related to the distribution of biogeochemical “hotspots”. The isotopic composition of oxygen and nitrogen in dissolved nitrate indicates that sources change with discharge state, and that atmospherically deposited nitrogen is only important to river fluxes in forested and urban watersheds. These results also indicate that the contribution of wastewater treatment plants in urban watersheds has been greatly under-estimated in current models. Predicting future changes in discharge and nutrient flux by modeling climate oscillations has important implications for water resources policy and drought management for public policy and utility managers.

  3. Physical and geochemical drivers of CDOM variability near a natural seep site in the Gulf of Mexico

    NASA Astrophysics Data System (ADS)

    Edwards, C. R.; Powers, L.; Medeiros, P. M.

    2016-02-01

    Colored dissolved organic matter (CDOM) on the continental shelf and slope can serve as a marker for fresh water influence, indicate the presence of hydrocarbons, and provide important clues about nutrient content and organic matter cycling. Autonomous underwater vehicles such as gliders allow for subsurface measurement of CDOM fluorescence for weeks to months; these time series may be especially valuable in the northern Gulf of Mexico, where CDOM inputs of both terrestrial and oil and gas sources can be significant. Data from a recent glider deployment near a natural seep site (GC600) on the continental slope over 180 km from shore suggest simultaneous influence of Mississippi plume water and hydrocarbon inputs in the upper 200 m, with variability in fluorescence at a range of vertical and temporal scales. We will explore patterns in spatial and temporal variability of glider-measured hydrography, dissolved oxygen, and bio-optical data (CDOM, chlorophyll-a, backscatter fluorescence), and use their combination to infer a terrigenous and/or fossil fuel source(s). Taking advantage of a combination of satellite sea surface temperature, ocean color, wind, and data from moored and mobile platforms, we will examine physical controls on transport and vertical mixing of CDOM and the potential role of nonlinear mesoscale eddies, which can trap water in their interior and may transport river- or hydrocarbon-derived CDOM over long distances. The combined data set will be used to consider and potentially constrain the effect of photodegradation and other biogeochemical causes for CDOM fluorescence variability in the upper 200 m.

  4. Probabilistic prediction of barrier-island response to hurricanes

    USGS Publications Warehouse

    Plant, Nathaniel G.; Stockdon, Hilary F.

    2012-01-01

    Prediction of barrier-island response to hurricane attack is important for assessing the vulnerability of communities, infrastructure, habitat, and recreational assets to the impacts of storm surge, waves, and erosion. We have demonstrated that a conceptual model intended to make qualitative predictions of the type of beach response to storms (e.g., beach erosion, dune erosion, dune overwash, inundation) can be reformulated in a Bayesian network to make quantitative predictions of the morphologic response. In an application of this approach at Santa Rosa Island, FL, predicted dune-crest elevation changes in response to Hurricane Ivan explained about 20% to 30% of the observed variance. An extended Bayesian network based on the original conceptual model, which included dune elevations, storm surge, and swash, but with the addition of beach and dune widths as input variables, showed improved skill compared to the original model, explaining 70% of dune elevation change variance and about 60% of dune and shoreline position change variance. This probabilistic approach accurately represented prediction uncertainty (measured with the log likelihood ratio), and it outperformed the baseline prediction (i.e., the prior distribution based on the observations). Finally, sensitivity studies demonstrated that degrading the resolution of the Bayesian network or removing data from the calibration process reduced the skill of the predictions by 30% to 40%. The reduction in skill did not change conclusions regarding the relative importance of the input variables, and the extended model's skill always outperformed the original model.

  5. Method and system to estimate variables in an integrated gasification combined cycle (IGCC) plant

    DOEpatents

    Kumar, Aditya; Shi, Ruijie; Dokucu, Mustafa

    2013-09-17

    System and method to estimate variables in an integrated gasification combined cycle (IGCC) plant are provided. The system includes a sensor suite to measure respective plant input and output variables. An extended Kalman filter (EKF) receives sensed plant input variables and includes a dynamic model to generate a plurality of plant state estimates and a covariance matrix for the state estimates. A preemptive-constraining processor is configured to preemptively constrain the state estimates and covariance matrix to be free of constraint violations. A measurement-correction processor may be configured to correct constrained state estimates and a constrained covariance matrix based on processing of sensed plant output variables. The measurement-correction processor is coupled to update the dynamic model with corrected state estimates and a corrected covariance matrix. The updated dynamic model may be configured to estimate values for at least one plant variable not originally sensed by the sensor suite.
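
    A minimal sketch of the predict / constrain / correct cycle follows, using a linear Kalman filter with simple box constraints in place of the patented preemptive-constraining processor; all matrices and bounds are illustrative.

      # Linear Kalman filter whose state estimate is clipped to physical
      # bounds before the measurement update (illustrative 2-state plant).
      import numpy as np

      A = np.array([[0.95, 0.1], [0.0, 0.9]])   # state transition
      H = np.array([[1.0, 0.0]])                # only the first state is sensed
      Q, R = 0.01*np.eye(2), np.array([[0.1]])
      lo, hi = np.array([0.0, 0.0]), np.array([10.0, 5.0])  # physical bounds

      x, P = np.array([1.0, 1.0]), np.eye(2)
      rng = np.random.default_rng(9)
      for _ in range(50):
          # Predict.
          x, P = A @ x, A @ P @ A.T + Q
          # Preemptively constrain the estimate to the feasible region.
          x = np.clip(x, lo, hi)
          # Measurement correction.
          z = H @ x + rng.normal(0, 0.3, 1)     # stand-in sensor reading
          S = H @ P @ H.T + R
          K = P @ H.T @ np.linalg.inv(S)
          x = x + K @ (z - H @ x)
          P = (np.eye(2) - K @ H) @ P
      print(x)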

  6. Spatio-Temporal Variability in Accretion and Erosion of Coastal Foredunes in the Netherlands: Regional Climate and Local Topography

    PubMed Central

    Keijsers, Joep G. S.; Poortinga, Ate; Riksen, Michel J. P. M.; Maroulis, Jerry

    2014-01-01

    Depending on the amount of aeolian sediment input and dune erosion, dune size and morphology change over time. Since coastal foredunes play an important role in the Dutch coastal defence, it is important to have good insight into the main factors that control these changes. In this paper the temporal variations in foredune erosion and accretion were studied in relation to proxies for aeolian transport potential and storminess, using yearly elevation measurements from 1965 to 2012 for six sections of the Dutch coast. Longshore differences in the relative impacts of erosion and accretion were examined in relation to local beach width. The results show that temporal variability in foredune accretion and erosion is highest in narrow beach sections. Here, dune erosion alternates with accretion, with variability displaying strong correlations with yearly values of storminess (maximum sea levels). In wider beach sections, dune erosion is less frequent, with lower temporal variability and stronger correlations with time series of transport potential. In erosion-dominated years, eroded volumes decrease from narrow to wider beaches. When accretion dominates, dune-volume changes are relatively constant alongshore. Dune erosion is therefore suggested to control spatial variability in dune-volume changes. On a scale of decades, the volume of foredunes tends to increase more on wider beaches. However, where widths exceed 200 to 300 m, this trend is no longer observed. PMID:24603812

  7. Spatio-temporal variability in accretion and erosion of coastal foredunes in the Netherlands: regional climate and local topography.

    PubMed

    Keijsers, Joep G S; Poortinga, Ate; Riksen, Michel J P M; Maroulis, Jerry

    2014-01-01

    Depending on the amount of aeolian sediment input and dune erosion, dune size and morphology change over time. Since coastal foredunes play an important role in the Dutch coastal defence, it is important to have good insight into the main factors that control these changes. In this paper the temporal variations in foredune erosion and accretion were studied in relation to proxies for aeolian transport potential and storminess, using yearly elevation measurements from 1965 to 2012 for six sections of the Dutch coast. Longshore differences in the relative impacts of erosion and accretion were examined in relation to local beach width. The results show that temporal variability in foredune accretion and erosion is highest in narrow beach sections. Here, dune erosion alternates with accretion, with variability displaying strong correlations with yearly values of storminess (maximum sea levels). In wider beach sections, dune erosion is less frequent, with lower temporal variability and stronger correlations with time series of transport potential. In erosion-dominated years, eroded volumes decrease from narrow to wider beaches. When accretion dominates, dune-volume changes are relatively constant alongshore. Dune erosion is therefore suggested to control spatial variability in dune-volume changes. On a scale of decades, the volume of foredunes tends to increase more on wider beaches. However, where widths exceed 200 to 300 m, this trend is no longer observed.

  8. Water supply, demand, and quality indicators for assessing the spatial distribution of water resource vulnerability in the Columbia River Basin

    USGS Publications Warehouse

    Chang, Heejun; Jung, Il-Won; Strecker, Angela L.; Wise, Daniel; Lafrenz, Martin; Shandas, Vivek; ,; Yeakley, Alan; Pan, Yangdong; Johnson, Gunnar; Psaris, Mike

    2013-01-01

    We investigated water resource vulnerability in the US portion of the Columbia River basin (CRB) using multiple indicators representing water supply, water demand, and water quality. Spatial analysis was conducted at the US county scale using various biophysical and socio-economic indicators that control water vulnerability. Water supply vulnerability and water demand vulnerability exhibited a similar spatial clustering of hotspots in areas where agricultural lands and variability of precipitation were high but dam storage capacity was low. The hotspots of water quality vulnerability were clustered around the main stem of the Columbia River, where major population and agricultural centres are located. This multiple equal-weight indicator approach confirmed that different drivers were associated with different vulnerability maps in the sub-basins of the CRB. Water quality variables are more important than water supply and water demand variables in the Willamette River basin, whereas water supply and demand variables are more important than water quality variables in the Upper Snake and Upper Columbia River basins. This result suggests that current water resources management and practices drive much of the vulnerability within the study area. The analysis suggests the need for increased coordination of water management across multiple levels of water governance to reduce water resource vulnerability in the CRB, and for a potentially different weighting scheme that explicitly takes into account the input of various water stakeholders.

  9. Computer simulation models as tools for identifying research needs: A black duck population model

    USGS Publications Warehouse

    Ringelman, J.K.; Longcore, J.R.

    1980-01-01

    Existing data on the mortality and production rates of the black duck (Anas rubripes) were used to construct a WATFIV computer simulation model. The yearly cycle was divided into 8 phases: six mortality phases (hunting, wintering, reproductive, molt, post-molt, and juvenile dispersal) and two production phases (original nesting and renesting attempts). The program computes population changes for sex and age classes during each phase. After completion of a standard simulation run with all variable default values in effect, a sensitivity analysis was conducted by changing each of 50 input variables, 1 at a time, to assess the responsiveness of the model to changes in each variable. Thirteen variables resulted in a substantial change in population level. Adult mortality factors were important during the hunting and wintering phases. All production and mortality associated with original nesting attempts were sensitive, as was juvenile dispersal mortality. By identifying those factors which invoke the greatest population change, and providing an indication of the accuracy required in estimating these factors, the model helps to identify the variables which would be the most profitable topics for future research.
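
    The one-at-a-time screening described above is easy to reproduce in outline. The sketch below perturbs each input of a toy population model by +10% and ranks inputs by the resulting change in final population; the model, its phases, and all rates are hypothetical stand-ins for the 50-variable WATFIV model.

        # One-at-a-time (OAT) sensitivity screening: perturb each input from
        # its default, rerun the model, rank inputs by effect size.
        def population_model(params):
            n = 1000.0
            for _ in range(10):                        # ten simulated years
                n *= (1.0 + params["production"])      # production phase
                n *= (1.0 - params["hunting_mort"])    # hunting phase
                n *= (1.0 - params["winter_mort"])     # wintering phase
            return n

        defaults = {"production": 0.30, "hunting_mort": 0.15, "winter_mort": 0.10}
        baseline = population_model(defaults)

        effects = {}
        for name, value in defaults.items():
            perturbed = dict(defaults)
            perturbed[name] = value * 1.10             # +10%, one at a time
            effects[name] = population_model(perturbed) / baseline - 1.0

        for name, eff in sorted(effects.items(), key=lambda kv: -abs(kv[1])):
            print(f"{name}: {eff:+.1%} change in final population")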

  10. Biogeochemical typology and temporal variability of lagoon waters in a coral reef ecosystem subject to terrigeneous and anthropogenic inputs (New Caledonia).

    PubMed

    Fichez, R; Chifflet, S; Douillet, P; Gérard, P; Gutierrez, F; Jouon, A; Ouillon, S; Grenz, C

    2010-01-01

    Considering the growing concern about the impact of anthropogenic inputs on coral reefs and coral reef lagoons, surprisingly little attention has been given to the relationship between those inputs and the trophic status of lagoon waters. The present paper describes the distribution of biogeochemical parameters in the coral reef lagoon of New Caledonia, where environmental conditions allegedly range from pristine oligotrophic to anthropogenically influenced. The study objectives were to: (i) identify terrigeneous and anthropogenic inputs and propose a typology of lagoon waters, (ii) determine temporal variability of water biogeochemical parameters at time-scales ranging from hours to seasons. Combined principal component analysis (PCA) and cluster analyses revealed that over the 2000 km² lagoon area around the city of Nouméa, "natural" terrigeneous versus oceanic influences affecting all stations only accounted for less than 20% of the spatial variability, whereas 60% of that spatial variability could be attributed to significant eutrophication of a limited number of inshore stations. PCA made it possible to discriminate unambiguously between the natural trophic enrichment along the offshore-inshore gradient and anthropogenically induced eutrophication. High temporal variability in dissolved inorganic nutrient concentrations strongly hindered their use as indicators of environmental status. Due to its longer turnover time, particulate organic material, and more specifically chlorophyll a, appeared to be a more reliable nonconservative tracer of trophic status. Results further provided evidence that ENSO occurrences might temporarily lower the trophic status of the New Caledonia lagoon. It is concluded that, due to such high-frequency temporal variability, the use of biogeochemical parameters in environmental surveys requires adapted sampling strategies, data management, and environmental alert methods.

  11. Estimating floodwater depths from flood inundation maps and topography

    USGS Publications Warehouse

    Cohen, Sagy; Brakenridge, G. Robert; Kettner, Albert; Bates, Bradford; Nelson, Jonathan M.; McDonald, Richard R.; Huang, Yu-Fen; Munasinghe, Dinuke; Zhang, Jiaqi

    2018-01-01

    Information on flood inundation extent is important for understanding societal exposure, water storage volumes, flood wave attenuation, future flood hazard, and other variables. A number of organizations now provide flood inundation maps based on satellite remote sensing. These data products can efficiently and accurately provide the areal extent of a flood event, but do not provide floodwater depth, an important attribute for first responders and damage assessment. Here we present a new methodology and a GIS-based tool, the Floodwater Depth Estimation Tool (FwDET), for estimating floodwater depth based solely on an inundation map and a digital elevation model (DEM). We compare the FwDET results against water depth maps derived from hydraulic simulation of two flood events: a large-scale event for which we use a medium-resolution (10 m) input layer and a small-scale event for which we use a high-resolution (LiDAR; 1 m) input. Further testing is performed for two inundation maps with a number of challenging features, including a narrow valley, a large reservoir, and an urban setting. The results show that FwDET can accurately calculate floodwater depth for diverse flooding scenarios, but it can also exhibit considerable bias in locations where the inundation extent does not align well with the DEM. In these locations, manual adjustment or higher-spatial-resolution input is required.
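
    The core computation lends itself to a compact illustration. The sketch below is not the published FwDET code; it implements one common reading of the approach with numpy/scipy: each flooded cell takes the DEM elevation of its nearest inundation-boundary cell as a local water-surface estimate, and depth is that estimate minus the local ground elevation.

        import numpy as np
        from scipy import ndimage

        def floodwater_depth(dem, flooded):
            # Boundary cells: flooded cells with at least one dry neighbour.
            eroded = ndimage.binary_erosion(flooded)
            boundary = flooded & ~eroded
            # For every cell, indices of the nearest boundary cell.
            _, (iy, ix) = ndimage.distance_transform_edt(~boundary,
                                                         return_indices=True)
            wse = dem[iy, ix]                      # nearest boundary elevation
            depth = np.where(flooded, wse - dem, 0.0)
            return np.maximum(depth, 0.0)          # clamp negative-depth artifacts

        dem = np.array([[5., 4., 3., 4.],
                        [4., 3., 2., 3.],
                        [4., 3., 2., 3.],
                        [5., 4., 3., 4.]])
        flooded = dem <= 3.0                       # toy inundation map
        print(floodwater_depth(dem, flooded))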

  12. High frequency measurements of reach scale nitrogen uptake in a fourth order river with contrasting hydromorphology and variable water chemistry (Weiße Elster, Germany)

    NASA Astrophysics Data System (ADS)

    Kunz, Julia Vanessa; Hensley, Robert; Brase, Lisa; Borchardt, Dietrich; Rode, Michael

    2017-01-01

    River networks exhibit a globally important capacity to retain and process nitrogen. However, direct measurement of in-stream removal in higher-order streams and rivers has been extremely limited. The recent advent of automated sensors has allowed high-frequency measurements and the development of new passive methods of quantifying nitrogen uptake which are scalable across river size. Here we extend these methods to higher-order streams with anthropogenically elevated nitrogen levels, substantial tributaries, complex input signals, and multiple N species. We use a combination of two-station time series and longitudinal profiling of nitrate to assess differences in nitrogen processing dynamics in a natural reach versus a channelized, impounded reach with water chemistry impacted by WWTP effluent. Our results suggest that net mass removal rates of nitrate were markedly higher in the unmodified reach. Additionally, seasonal variations in temperature and insolation affected the relative contribution of assimilatory versus dissimilatory uptake processes, with the latter exhibiting a stronger positive dependence on temperature. From a methodological perspective, we demonstrate that a mass balance approach based on high-frequency data can be useful in deriving quantitative uptake estimates, even under dynamic inputs and lateral tributary inflow. However, uncertainty in diffuse groundwater inputs and, more importantly, the effects of alternative nitrogen species, in this case ammonium, pose considerable challenges to this method.
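
    In its simplest form, the two-station mass balance the abstract refers to reduces to a load difference divided by the benthic area of the reach. The sketch below shows that arithmetic with hypothetical numbers; the diffuse groundwater term is deliberately ignored, which is exactly the uncertainty the abstract flags.

        # Two-station nitrate mass balance (all numbers hypothetical).
        q_up, c_up = 4.0, 3.2        # upstream discharge (m3/s), NO3-N (mg/L)
        q_trib, c_trib = 0.5, 5.0    # tributary inflow
        q_down, c_down = 4.5, 3.35   # downstream station (conservative mixing
                                     # alone would give 15.3 / 4.5 = 3.40 mg/L)
        reach_area_m2 = 60_000       # wetted benthic area of the reach

        load_in = q_up * c_up + q_trib * c_trib    # m3/s * mg/L = g/s
        load_out = q_down * c_down
        # Net areal uptake in mg N per m2 per day:
        uptake = (load_in - load_out) / reach_area_m2 * 86_400 * 1_000
        print(f"net areal nitrate uptake: {uptake:.0f} mg N m-2 d-1")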

  13. Simulated rain events on an urban roadway to understand the dynamics of mercury mobilization in stormwater runoff.

    PubMed

    Eckley, Chris S; Branfireun, Brian

    2009-08-01

    This research focuses on mercury (Hg) mobilization in stormwater runoff from an urban roadway. The objectives were to determine: how the transport of surface-derived Hg changes during an event hydrograph; the influence of antecedent dry days on the runoff Hg load; the relationship between total suspended sediments (TSS) and Hg transport; and the fate of new Hg input in rain and its relative importance to the runoff Hg load. Simulated rain events were used to control variables to elucidate transport processes, and a Hg stable isotope was used to trace the fate of Hg inputs in rain. The results showed that Hg concentrations were highest at the beginning of the hydrograph and were predominantly particulate bound (HgP). On average, almost 50% of the total Hg load was transported during the first minutes of runoff, underscoring the importance of the initial runoff on load calculations. Hg accumulated on the road surface during dry periods, so the runoff Hg load increased with the number of antecedent dry days. The Hg concentrations in runoff were significantly correlated with TSS concentrations (mean r² = 0.94 ± 0.09). The results from the isotope experiments showed that the new Hg inputs quickly become associated with the surface particles and that the majority of Hg in runoff is derived from non-event, surface-derived sources.

  14. Homeostasis, singularities, and networks.

    PubMed

    Golubitsky, Martin; Stewart, Ian

    2017-01-01

    Homeostasis occurs in a biological or chemical system when some output variable remains approximately constant as an input parameter varies over some interval. We discuss two main aspects of homeostasis, both related to the effect of coordinate changes on the input-output map. The first is a reformulation of homeostasis in the context of singularity theory, achieved by replacing 'approximately constant over an interval' by 'zero derivative of the output with respect to the input at a point'. Unfolding theory then classifies all small perturbations of the input-output function. In particular, the 'chair' singularity, which is especially important in applications, is discussed in detail. Its normal form and universal unfolding are derived and the region of approximate homeostasis is deduced. The results are motivated by data on thermoregulation in two species of opossum and the spiny rat. We give a formula for finding chair points in mathematical models by implicit differentiation and apply it to a model of lateral inhibition. The second asks when homeostasis is invariant under appropriate coordinate changes. This is false in general, but for network dynamics there is a natural class of coordinate changes: those that preserve the network structure. We characterize those nodes of a given network for which homeostasis is invariant under such changes. This characterization is determined combinatorially by the network topology.

  15. Gsflow-py: An integrated hydrologic model development tool

    NASA Astrophysics Data System (ADS)

    Gardner, M.; Niswonger, R. G.; Morton, C.; Henson, W.; Huntington, J. L.

    2017-12-01

    Integrated hydrologic modeling encompasses a vast number of processes and specifications, variable in time and space, and development of model datasets can be arduous. Model input construction techniques have not been formalized or made easily reproducible. Creating the input files for integrated hydrologic models (IHMs) requires complex GIS processing of raster and vector datasets from various sources. Developing stream network topology that is consistent with the model-resolution digital elevation model is important for robust simulation of surface water and groundwater exchanges. Distribution of meteorological parameters over the model domain is difficult in complex terrain at the model resolution scale, but is necessary to drive realistic simulations. Historically, development of input data for IHMs has required extensive GIS and computer programming expertise, which has restricted the use of IHMs to research groups with available financial, human, and technical resources. Here we present a series of Python scripts that provide a formalized technique for the parameterization and development of integrated hydrologic model inputs for GSFLOW. With some modifications, this process could be applied to any regular-grid hydrologic model. This Python toolkit automates many of the necessary and laborious processes of parameterization, including stream network development and cascade routing, land coverages, and meteorological distribution over the model domain.
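
    As one concrete example of the kind of step such a toolkit automates, the sketch below distributes station meteorology onto a model grid by inverse-distance weighting. It is a generic illustration, not code from the GSFLOW toolkit; the station coordinates, values, and power parameter are hypothetical.

        import numpy as np

        def idw_grid(stations_xy, station_vals, grid_x, grid_y, power=2.0):
            """Inverse-distance-weighted interpolation onto a regular grid."""
            gx, gy = np.meshgrid(grid_x, grid_y)
            out = np.zeros_like(gx, dtype=float)
            weights_sum = np.zeros_like(gx, dtype=float)
            for (sx, sy), val in zip(stations_xy, station_vals):
                d = np.hypot(gx - sx, gy - sy)
                w = 1.0 / np.maximum(d, 1e-6) ** power   # avoid divide-by-zero
                out += w * val
                weights_sum += w
            return out / weights_sum

        precip = idw_grid([(0.0, 0.0), (10.0, 5.0), (3.0, 9.0)],  # station coords (km)
                          [12.0, 20.0, 16.0],                     # daily precip (mm)
                          grid_x=np.linspace(0, 10, 11),
                          grid_y=np.linspace(0, 10, 11))
        print(precip.round(1))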

  16. Impact Of The Material Variability On The Stamping Process: Numerical And Analytical Analysis

    NASA Astrophysics Data System (ADS)

    Ledoux, Yann; Sergent, Alain; Arrieux, Robert

    2007-05-01

    Finite element simulation is a very useful tool in the deep-drawing industry. It is used in particular for the development and validation of new stamping tools, where it reduces the cost and time of tooling design and set-up. One of the main difficulties in obtaining good agreement between the simulation and the real process, however, lies in the definition of the numerical conditions (mesh, punch travel speed, boundary conditions, ...) and of the parameters that model the material behavior. Indeed, in the press shop, a variation of the formed-part geometry is often observed when the sheet set changes, reflecting the variability of material properties between sets. This is probably one of the main sources of process deviation at set-up, which is why it is important to study the influence of material data variation on the geometry of a classical stamped part. An omega-shaped part was chosen for its simplicity and because it is representative of automotive parts (car body reinforcements); moreover, it shows important springback deviations. An isotropic behaviour law is assumed. The impact of statistical deviations of the three law coefficients characterizing the material, and of the friction coefficient, around their nominal values is tested: a Gaussian distribution is assumed, and the effect on geometry variation is studied by FE simulation. A second approach is also envisaged, in which the process variability is represented by a mathematical model; given the variability of the input parameters, an analytical model is defined that yields the part-geometry variability around the nominal shape. Together, these two approaches allow the process capability to be predicted as a function of material parameter variability.
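
    Since the paper's finite element model cannot be reproduced here, the sketch below illustrates only the second, meta-model approach: a cheap analytical surrogate (an arbitrary linear response surface, purely hypothetical) stands in for the FE simulation, the three hardening-law coefficients and the friction coefficient are drawn from Gaussian distributions around nominal values, and the springback scatter is read off directly.

        import numpy as np

        rng = np.random.default_rng(0)
        n = 10_000
        K = rng.normal(500.0, 15.0, n)      # hardening coefficient (MPa)
        n_h = rng.normal(0.20, 0.01, n)     # hardening exponent
        eps0 = rng.normal(0.01, 0.002, n)   # pre-strain offset
        mu = rng.normal(0.10, 0.01, n)      # friction coefficient

        def springback_deg(K, n_h, eps0, mu):
            # Hypothetical response surface, NOT a validated stamping model.
            return 2.0 + 0.004 * K - 8.0 * n_h + 40.0 * eps0 - 6.0 * mu

        angle = springback_deg(K, n_h, eps0, mu)
        print(f"springback angle: {angle.mean():.2f} deg "
              f"+/- {angle.std():.2f} deg (1 sigma)")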

  17. Binary full adder, made of fusion gates, in a subexcitable Belousov-Zhabotinsky system

    NASA Astrophysics Data System (ADS)

    Adamatzky, Andrew

    2015-09-01

    In an excitable thin-layer Belousov-Zhabotinsky (BZ) medium a localized perturbation leads to the formation of omnidirectional target or spiral waves of excitation. A subexcitable BZ medium responds to asymmetric local perturbation by producing traveling localized excitation wave-fragments, distant relatives of dissipative solitons. The size and life span of an excitation wave-fragment depend on the illumination level of the medium. Under the right conditions the wave-fragments conserve their shape and velocity vectors for extended time periods. I interpret the wave-fragments as values of Boolean variables. When two or more wave-fragments collide they annihilate or merge into a new wave-fragment. States of the logic variables, represented by the wave-fragments, are changed as a result of collisions between the wave-fragments; thus, a logical gate is implemented. Several theoretical designs and experimental laboratory implementations of Boolean logic gates have been proposed in the past, but little has been done on cascading the gates into binary arithmetical circuits. I propose a unique design of a binary one-bit full adder based on a fusion gate. A fusion gate is a two-input, three-output logical device which calculates the conjunction of the input variables and the conjunction of each input variable with the negation of the other. The gate is made of three channels: two channels cross each other at an angle, and a third channel starts at the junction. The channels contain a BZ medium. When two excitation wave-fragments, traveling towards each other along the input channels, collide at the junction, they merge into a single wave-front traveling along the third channel. If there is just one wave-front in an input channel, the front continues its propagation undisturbed. I make a one-bit full adder by cascading two fusion gates. I show how to cascade the adder blocks into a many-bit full adder. I evaluate the feasibility of my designs by simulating the evolution of excitation in the gates and adders using numerical integration of the Oregonator equations.
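
    Stripped of the chemistry, the gate's Boolean behavior can be checked directly. The sketch below encodes the fusion gate as described (pass-through outputs x AND NOT y and y AND NOT x, junction output x AND y) and verifies one standard way of cascading two such gates into a one-bit full adder. The OR operations stand in for the physical merging of channels; they are an assumption of this reading, not a simulation of the BZ medium.

        from itertools import product

        # Boolean abstraction of the fusion gate: each input channel passes
        # its fragment only if the other is absent; the junction channel
        # carries the merged fragment (AND).
        def fusion(x, y):
            return x and not y, y and not x, x and y

        # One-bit full adder from two fusion gates: XOR is recovered by
        # merging the two pass-through outputs (an OR, i.e. joined channels).
        def full_adder(a, b, cin):
            p, q, carry1 = fusion(a, b)
            axb = p or q                   # a XOR b
            r, s, carry2 = fusion(axb, cin)
            total = r or s                 # a XOR b XOR cin
            cout = carry1 or carry2        # the two carries are never both 1
            return total, cout

        # Verify against integer addition over the whole truth table.
        for a, b, cin in product((False, True), repeat=3):
            s, cout = full_adder(a, b, cin)
            assert int(s) + 2 * int(cout) == int(a) + int(b) + int(cin)
        print("full adder verified on all 8 input combinations")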

  18. The Relative Importance of Spatial and Local Environmental Factors in Determining Beetle Assemblages in the Inner Mongolia Grassland.

    PubMed

    Yu, Xiao-Dong; Lü, Liang; Wang, Feng-Yan; Luo, Tian-Hong; Zou, Si-Si; Wang, Cheng-Bin; Song, Ting-Ting; Zhou, Hong-Zhang

    2016-01-01

    The aim of this paper is to increase understanding of the relative importance of geographic and local environmental factors for the richness and composition of epigaeic steppe beetles (Coleoptera: Carabidae and Tenebrionidae) along a geographic (longitudinal/precipitation) gradient in the Inner Mongolia grassland. Specifically, we evaluate the associations of environmental variables representing climate and environmental heterogeneity with beetle assemblages. Beetles were sampled using pitfall traps at 25 sites scattered across the full geographic extent of the study biome in 2011-2012. We used variance partitioning techniques and multi-model selection based on the Akaike information criterion to assess the relative importance of the spatial and environmental variables on beetle assemblages. Species richness and abundance showed unimodal patterns along the geographic gradient. Together with space, climate variables associated with precipitation, water-energy balance, and harshness of climate had strong explanatory power for the richness pattern. The abundance pattern showed the strongest association with variation in temperature and environmental heterogeneity. Climatic factors associated with temperature and precipitation variables, and the interaction between climate and space, were able to explain a substantial amount of variation in community structure. In addition, the turnover of species increased significantly as geographic distances increased. We confirmed that spatial and local environmental factors worked together to shape epigaeic beetle communities along the geographic gradient in the Inner Mongolia grassland. Moreover, climate features, especially precipitation, water-energy balance and temperature, together with the interaction of climate with space and environmental heterogeneity, appeared to play important roles in controlling the richness, abundance, and species composition of epigaeic beetles.

  19. UWB delay and multiply receiver

    DOEpatents

    Dallum, Gregory E.; Pratt, Garth C.; Haugen, Peter C.; Romero, Carlos E.

    2013-09-10

    An ultra-wideband (UWB) delay and multiply receiver is formed of a receive antenna; a variable gain attenuator connected to the receive antenna; a signal splitter connected to the variable gain attenuator; a multiplier having one input connected to an undelayed signal from the signal splitter and another input connected to a delayed signal from the signal splitter, the delay between the splitter signals being equal to the spacing between pulses from a transmitter whose pulses are being received by the receive antenna; a peak detection circuit connected to the output of the multiplier and connected to the variable gain attenuator to control the variable gain attenuator to maintain a constant amplitude output from the multiplier; and a digital output circuit connected to the output of the multiplier.
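
    The detection principle is easy to demonstrate numerically. In the sketch below (a toy simulation reflecting one reading of the abstract, not the patented circuit; the variable-gain attenuator and digital output stage are omitted), a received pulse pair whose spacing matches the delay line multiplies coherently, so the product spikes only where a valid pulse pair is present. Sample rate, delay, and pulse shape are hypothetical.

        import numpy as np

        fs = 1e9                      # sample rate (1 GS/s), hypothetical
        delay_s = 50e-9               # transmitter inter-pulse spacing
        d = int(delay_s * fs)         # delay expressed in samples

        rng = np.random.default_rng(1)
        sig = 0.05 * rng.standard_normal(4096)        # receiver noise floor
        for start in (1000, 1000 + d):                # the transmitted pulse pair
            sig[start:start + 8] += 1.0               # crude 8-sample UWB pulse

        product = sig[d:] * sig[:-d]                  # multiply by delayed copy
        print("peak product:", product.max())         # large only at pulse overlap
        print("second pulse near sample:", int(np.argmax(product)) + d)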

  20. Alpha1 LASSO data bundles Lamont, OK

    DOE Data Explorer

    Gustafson, William Jr; Vogelmann, Andrew; Endo, Satoshi; Toto, Tami; Xiao, Heng; Li, Zhijin; Cheng, Xiaoping; Krishna, Bhargavi (ORCID: 0000-0001-8828-528X)

    2016-08-03

    A data bundle is a unified package consisting of LASSO LES input and output, observations, evaluation diagnostics, and model skill scores. LES input includes model configuration information and forcing data. LES output includes profile statistics and full domain fields of cloud and environmental variables. Model evaluation data consists of LES output and ARM observations co-registered on the same grid and sampling frequency. Model performance is quantified by skill scores and diagnostics in terms of cloud and environmental variables.

  1. How sensitive are estimates of carbon fixation in agricultural models to input data?

    PubMed Central

    2012-01-01

    Background: Process-based vegetation models are central to understanding the hydrological and carbon cycles. To achieve useful results at regional to global scales, such models require various input data from a wide range of earth observations. Since the geographical extent of these datasets varies from local to global scale, data quality and validity are of major interest when they are chosen for use. It is important to assess how the quality of different input datasets affects model outputs. In this article, we reflect on both the uncertainty in input data and the reliability of model results. For our case study analysis we selected the Marchfeld region in Austria. We used independent meteorological datasets from the Central Institute for Meteorology and Geodynamics and the European Centre for Medium-Range Weather Forecasts (ECMWF). Land cover / land use information was taken from the GLC2000 and the CORINE 2000 products. Results: For our case study analysis we selected two different process-based models: the Environmental Policy Integrated Climate (EPIC) model and the Biosphere Energy Transfer Hydrology (BETHY/DLR) model. Both process models show a congruent response pattern to changes in input data. The annual variability of NPP reaches 36% for BETHY/DLR and 39% for EPIC when major input datasets are changed. However, EPIC is less sensitive to meteorological input data than BETHY/DLR. The ECMWF maximum temperatures show a systematic pattern: temperatures above 20°C are overestimated, whereas temperatures below 20°C are underestimated, resulting in an overall underestimation of NPP in both models. In addition, BETHY/DLR is sensitive to the choice and accuracy of the land cover product. Discussion: This study shows that the impact of input data uncertainty on modelling results needs to be assessed: whenever the models are applied under new conditions, local data should be used for both input and result comparison. PMID:22296931

  2. Presynaptic modulation of tonic and respiratory inputs to cardiovagal motoneurons by substance P.

    PubMed

    Hou, Lili; Tang, Hongtai; Chen, Yonghua; Wang, Lin; Zhou, Xujiao; Rong, Weifang; Wang, Jijiang

    2009-08-11

    Substance P (SP) has been implicated in vagal control of heart rate and cardiac functions, but the mechanisms of SP actions on cardiac vagal activity remain obscure. The present study has investigated the effects of SP on the synaptic inputs of preganglionic cardiovagal motoneurons (CVNs) in brainstem slices of neonatal rat. Whole-cell voltage-clamp recordings were performed on retrogradely labeled CVNs in the nucleus ambiguus. The results show that in thin slices (400 μm thickness) without respiratory-like rhythm, globally applied SP (1 μM) significantly enhanced both the GABAergic and the glycinergic inputs, but had no effect on the glutamatergic inputs, of CVNs. Since inspiratory-related augmentation of the inhibitory inputs of CVNs in individual respiratory cycles is known to play an important role in the genesis of respiratory sinus arrhythmia, the effects of SP on the inhibitory inputs of CVNs were further examined in thick slices (500-800 μm thickness) with respiratory-like rhythm, and SP (1 μM) was focally applied to the CVNs under patch-clamp recording. Focally applied SP caused frequency increases of the GABAergic and the glycinergic inputs both during inspiratory bursts and during inspiratory intervals. However, the inspiratory-related augmentation of the GABAergic and the glycinergic inputs of CVNs, measured by the frequency increases during inspiratory bursts in percentage of the frequency during inspiratory intervals, was significantly decreased by SP. These results suggest that SP inhibits CVNs via enhancement of their inhibitory synaptic inputs, and SP diminishes the respiratory-related fluctuation of cardiac vagal activity in individual respiratory cycles. These results also indicate that SP may play a role in altering the vagal control of the heart in some cardiovascular diseases such as myocardial ischemia and hypertension, since these diseases are characterized by weakened cardiac vagal tone and heart rate variability, and have been found to have increased central release and receptor binding of SP.

  3. Carbon and nitrogen inputs affect soil microbial community structure and function

    NASA Astrophysics Data System (ADS)

    Liu, X. J. A.; Mau, R. L.; Hayer, M.; Finley, B. K.; Schwartz, E.; Dijkstra, P.; Hungate, B. A.

    2016-12-01

    Climate change is projected to increase energy and nutrient inputs to soils, affecting soil organic matter (SOM) decomposition (the priming effect) and microbial communities. However, many important questions remain: how do labile C and/or N inputs affect priming and microbial communities? What is the relationship between them? To address these questions, we applied N (NH₄NO₃; 100 µg N g⁻¹ wk⁻¹), C (¹³C glucose; 1000 µg C g⁻¹ wk⁻¹), and C+N to four different soils for five weeks. We found: 1) N showed no effect, whereas C induced the greatest priming, and C+N produced significantly lower priming than C. 2) C and C+N additions increased the relative abundance of actinobacteria, proteobacteria, and firmicutes, but reduced the relative abundance of acidobacteria, chloroflexi, verrucomicrobia, planctomycetes, and gemmatimonadetes. 3) Actinobacteria and proteobacteria increased in relative abundance over time, but most others decreased over time. 4) Substrate additions (N, C, C+N) significantly reduced microbial alpha diversity, which also decreased over time. 5) For beta diversity, C and C+N formed significantly different communities compared to the control and N treatments. Over time, microbial community structure was significantly altered, and the four soils had drastically different community structures. These results indicate that the amount of substrate C was a determining factor in modulating the rate of SOM decomposition and the microbial communities. The variable responses of different microbial communities to labile C and N inputs indicate complex relationships between priming and microbial function. In general, we demonstrate that energy inputs can quickly accelerate SOM decomposition whereas extra N input can slow this process, though both had similar microbial community responses.

  4. A Multivariate Analysis of the Early Dropout Process

    ERIC Educational Resources Information Center

    Fiester, Alan R.; Rudestam, Kjell E.

    1975-01-01

    Principal-component factor analyses were performed on patient input (demographic and pretherapy expectations), therapist input (demographic), and patient perspective therapy process variables that significantly differentiated early dropout from nondropout outpatients at two community mental health centers. (Author)

  5. Approach for Input Uncertainty Propagation and Robust Design in CFD Using Sensitivity Derivatives

    NASA Technical Reports Server (NTRS)

    Putko, Michele M.; Taylor, Arthur C., III; Newman, Perry A.; Green, Lawrence L.

    2002-01-01

    An implementation of the approximate statistical moment method for uncertainty propagation and robust optimization, applied to a quasi-3-D Euler CFD code, is presented. Given uncertainties in statistically independent, random, normally distributed input variables, first- and second-order statistical moment procedures are performed to approximate the uncertainty in the CFD output. Efficient calculation of both first- and second-order sensitivity derivatives is required. In order to assess the validity of the approximations, these moments are compared with statistical moments generated through Monte Carlo simulations. The uncertainties in the CFD input variables are also incorporated into a robust optimization procedure. For this optimization, statistical moments involving first-order sensitivity derivatives appear in the objective function and system constraints. Second-order sensitivity derivatives are used in a gradient-based search to successfully execute a robust optimization. The approximate methods used throughout the analyses are found to be valid when considering robustness about input parameter mean values.
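
    The first-order moment approximation is compact enough to show directly. With independent normal inputs, the mean is approximated by the function at the input means and the variance by the squared sensitivity derivatives weighted by the input variances. The sketch below applies this to an algebraic stand-in for the CFD code (the functional, means, and standard deviations are hypothetical) and checks it against Monte Carlo, mirroring the paper's validation step.

        import numpy as np

        def f(x):
            mach, aoa = x
            return mach**2 * (1.0 + 0.1 * aoa)   # hypothetical output functional

        mu = np.array([0.8, 2.0])                # input means
        sigma = np.array([0.02, 0.25])           # input standard deviations

        # Sensitivity derivatives at the mean (analytic here; a CFD code
        # would supply these via differentiated or adjoint solves).
        dfdx = np.array([2.0 * mu[0] * (1.0 + 0.1 * mu[1]),  # df/dMach
                         0.1 * mu[0] ** 2])                  # df/dAoA
        mean_approx = f(mu)                                  # first-order mean
        std_approx = np.sqrt(np.sum(dfdx**2 * sigma**2))     # first-order std

        rng = np.random.default_rng(2)
        s = rng.normal(mu, sigma, size=(200_000, 2))
        mc = s[:, 0] ** 2 * (1.0 + 0.1 * s[:, 1])            # Monte Carlo check
        print(f"mean: {mean_approx:.4f} (MC {mc.mean():.4f})")
        print(f"std : {std_approx:.4f} (MC {mc.std():.4f})")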

  6. Variable ratio regenerative braking device

    DOEpatents

    Hoppie, Lyle O.

    1981-12-15

    Disclosed is a regenerative braking device (10) for an automotive vehicle. The device includes an energy storage assembly (12) having a plurality of rubber rollers (26, 28) mounted for rotation between an input shaft (36) and an output shaft (42), clutches (38, 46) and brakes (40, 48) associated with each shaft, and a continuously variable transmission (22) connectable to a vehicle drivetrain and to the input and output shafts by the respective clutches. The rubber rollers are torsionally stressed to accumulate energy from the vehicle when the input shaft is clutched to the transmission while the brake on the output shaft is applied, and are torsionally relaxed to deliver energy to the vehicle when the output shaft is clutched to the transmission while the brake on the input shaft is applied. The transmission ratio is varied to control the rate of energy accumulation and delivery for a given rotational speed of the vehicle drivetrain.

  7. Noniterative computation of infimum in H∞ optimisation for plants with invariant zeros on the jω-axis

    NASA Technical Reports Server (NTRS)

    Chen, B. M.; Saber, A.

    1993-01-01

    A simple and noniterative procedure for the computation of the exact value of the infimum in the singular H∞-optimization problem is presented, as a continuation of our earlier work. Our problem formulation is general: we place no restrictions on the finite and infinite zero structures of the system, or on the direct feedthrough terms between the control input and the controlled output variables and between the disturbance input and the measurement output variables. Our method is applicable to a class of singular H∞-optimization problems for which the transfer functions from the control input to the controlled output and from the disturbance input to the measurement output satisfy certain geometric conditions. In particular, the paper extends the result of earlier work by allowing these two transfer functions to have invariant zeros on the jω-axis.

  8. Dynamics of vehicles in variable velocity runs over non-homogeneous flexible track and foundation with two point input models

    NASA Astrophysics Data System (ADS)

    Yadav, D.; Upadhyay, H. C.

    1992-07-01

    Vehicles obtain track-induced input through their wheels, which commonly number more than one. The available analysis of vehicle response in a variable-velocity run on a non-homogeneously profiled flexible track supported by a compliant inertial foundation is for a linear heave model with a single ground input. That analysis is extended here to two-point-input models with heave-pitch and heave-roll degrees of freedom. Closed-form expressions have been developed for the system response statistics. Results are presented for a railway coach and track/foundation problem, and the performances of the heave, heave-pitch, and heave-roll models have been compared. The three models are found to agree in describing the track response. However, the vehicle sprung-mass behaviour is predicted differently by these models, indicating the strong effect of coupling on vehicle vibration.

  9. Electrical Advantages of Dendritic Spines

    PubMed Central

    Gulledge, Allan T.; Carnevale, Nicholas T.; Stuart, Greg J.

    2012-01-01

    Many neurons receive excitatory glutamatergic input almost exclusively onto dendritic spines. In the absence of spines, the amplitudes and kinetics of excitatory postsynaptic potentials (EPSPs) at the site of synaptic input are highly variable and depend on dendritic location. We hypothesized that dendritic spines standardize the local geometry at the site of synaptic input, thereby reducing location-dependent variability of local EPSP properties. We tested this hypothesis using computational models of simplified and morphologically realistic spiny neurons that allow direct comparison of EPSPs generated on spine heads with EPSPs generated on dendritic shafts at the same dendritic locations. In all morphologies tested, spines greatly reduced location-dependent variability of local EPSP amplitude and kinetics, while having minimal impact on EPSPs measured at the soma. Spine-dependent standardization of local EPSP properties persisted across a range of physiologically relevant spine neck resistances, and in models with variable neck resistances. By reducing the variability of local EPSPs, spines standardized synaptic activation of NMDA receptors and voltage-gated calcium channels. Furthermore, spines enhanced activation of NMDA receptors and facilitated the generation of NMDA spikes and axonal action potentials in response to synaptic input. Finally, we show that dynamic regulation of spine neck geometry can preserve local EPSP properties following plasticity-driven changes in synaptic strength, but is inefficient in modifying the amplitude of EPSPs in other cellular compartments. These observations suggest that one function of dendritic spines is to standardize local EPSP properties throughout the dendritic tree, thereby allowing neurons to use similar voltage-sensitive postsynaptic mechanisms at all dendritic locations. PMID:22532875

  10. Predicting language outcomes for children learning AAC: Child and environmental factors

    PubMed Central

    Brady, Nancy C.; Thiemann-Bourque, Kathy; Fleming, Kandace; Matthews, Kris

    2014-01-01

    Purpose: To investigate a model of language development for nonverbal preschool-age children learning to communicate with AAC. Method: Ninety-three preschool children with intellectual disabilities were assessed at Time 1, and 82 of these children were assessed one year later at Time 2. The outcome variable was the number of different words the children produced (with speech, sign, or a speech-generating device, SGD). Children's intrinsic predictor for language was modeled as a latent variable consisting of cognitive development, comprehension, play, and nonverbal communication complexity. Adult input at school and home, and the amount of AAC instruction, were proposed mediators of vocabulary acquisition. Results: A confirmatory factor analysis revealed that the measures converged as a coherent construct, and an SEM model indicated that the intrinsic child predictor construct predicted the number of different words children produced. The amount of input received at home, but not at school, was a significant mediator. Conclusions: Our hypothesized model accurately reflected a latent construct of Intrinsic Symbolic Factor (ISF). Children who evidenced higher initial levels of ISF and more adult input at home produced more words one year later. Findings support the need to assess multiple child variables and suggest interventions directed to the indicators of ISF and input. PMID:23785187

  11. A soft computing based approach using modified selection strategy for feature reduction of medical systems.

    PubMed

    Zuhtuogullari, Kursat; Allahverdi, Novruz; Arikan, Nihat

    2013-01-01

    Systems with high-dimensional input spaces require long processing times and large memory usage. Most attribute selection algorithms suffer from limits on input dimensionality and from information storage problems. These problems are eliminated by the feature reduction software developed here, which uses a new modified selection mechanism that adds middle-region solution candidates. The hybrid system software is constructed for reducing the input attributes of systems with large numbers of input variables. The designed software also supports the roulette wheel selection mechanism, and linear order crossover is used as the recombination operator. In genetic-algorithm-based soft computing methods, locking onto local solutions is a further problem, which the developed software also eliminates. Faster and more effective results are obtained in the test procedures. Twelve input variables of the urological system have been reduced to reducts (reduced input attribute sets) with seven, six, and five elements. The results show that, on the urological test data, the developed software with modified selection has advantages in memory allocation, execution time, classification accuracy, sensitivity, and specificity when compared with other reduction algorithms.

  12. A Soft Computing Based Approach Using Modified Selection Strategy for Feature Reduction of Medical Systems

    PubMed Central

    Zuhtuogullari, Kursat; Allahverdi, Novruz; Arikan, Nihat

    2013-01-01

    Systems with high-dimensional input spaces require long processing times and large memory usage. Most attribute selection algorithms suffer from limits on input dimensionality and from information storage problems. These problems are eliminated by the feature reduction software developed here, which uses a new modified selection mechanism that adds middle-region solution candidates. The hybrid system software is constructed for reducing the input attributes of systems with large numbers of input variables. The designed software also supports the roulette wheel selection mechanism, and linear order crossover is used as the recombination operator. In genetic-algorithm-based soft computing methods, locking onto local solutions is a further problem, which the developed software also eliminates. Faster and more effective results are obtained in the test procedures. Twelve input variables of the urological system have been reduced to reducts (reduced input attribute sets) with seven, six, and five elements. The results show that, on the urological test data, the developed software with modified selection has advantages in memory allocation, execution time, classification accuracy, sensitivity, and specificity when compared with other reduction algorithms. PMID:23573172

  13. To twist, roll, stroke or poke? A study of input devices for menu navigation in the cockpit.

    PubMed

    Stanton, Neville A; Harvey, Catherine; Plant, Katherine L; Bolton, Luke

    2013-01-01

    Modern interfaces within the aircraft cockpit integrate many flight management system (FMS) functions into a single system. The success of a user's interaction with an interface depends upon the optimisation between the input device, tasks and environment within which the system is used. In this study, four input devices were evaluated using a range of Human Factors methods, in order to assess aspects of usability including task interaction times, error rates, workload, subjective usability and physical discomfort. The performance of the four input devices was compared using a holistic approach and the findings showed that no single input device produced consistently high performance scores across all of the variables evaluated. The touch screen produced the highest number of 'best' scores; however, discomfort ratings for this device were high, suggesting that it is not an ideal solution as both physical and cognitive aspects of performance must be accounted for in design. This study evaluated four input devices for control of a screen-based flight management system. A holistic approach was used to evaluate both cognitive and physical performance. Performance varied across the dependent variables and between the devices; however, the touch screen produced the largest number of 'best' scores.

  14. Interacting with notebook input devices: an analysis of motor performance and users' expertise.

    PubMed

    Sutter, Christine; Ziefle, Martina

    2005-01-01

    In the present study the usability of two different types of notebook input devices was examined. The independent variables were input device (touchpad vs. mini-joystick) and user expertise (expert vs. novice state). There were 30 participants, of whom 15 were touchpad experts and the other 15 were mini-joystick experts. The experimental tasks were a point-click task (Experiment 1) and a point-drag-drop task (Experiment 2). Dependent variables were the time and accuracy of cursor control. To assess carryover effects, we had the participants complete both experiments, using not only the input device for which they were experts but also the device for which they were novices. Results showed the touchpad performance to be clearly superior to mini-joystick performance. Overall, experts showed better performance than did novices. The significant interaction of input device and expertise showed that the use of an unknown device is difficult, but only for touchpad experts, who were remarkably slower and less accurate when using a mini-joystick. Actual and potential applications of this research include an evaluation of current notebook input devices. The outcomes allow ergonomic guidelines to be derived for optimized usage and design of the mini-joystick and touchpad devices.

  15. Kubios HRV--heart rate variability analysis software.

    PubMed

    Tarvainen, Mika P; Niskanen, Juha-Pekka; Lipponen, Jukka A; Ranta-Aho, Perttu O; Karjalainen, Pasi A

    2014-01-01

    Kubios HRV is advanced and easy-to-use software for heart rate variability (HRV) analysis. The software supports several input data formats for electrocardiogram (ECG) data and beat-to-beat RR interval data. It includes an adaptive QRS detection algorithm and tools for artifact correction, trend removal, and analysis sample selection. The software computes all the commonly used time-domain and frequency-domain HRV parameters and several nonlinear parameters. There are several adjustable analysis settings through which the analysis methods can be optimized for different data. The ECG-derived respiratory frequency is also computed, which is important for reliable interpretation of the analysis results. The analysis results can be saved as an ASCII text file (easy to import into MS Excel or SPSS), a Matlab MAT-file, or a PDF report. The software is easy to use through its compact graphical user interface and is available free of charge for Windows and Linux operating systems at http://kubios.uef.fi.
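
    For readers unfamiliar with the time-domain parameters such software reports, the sketch below computes two standard ones (SDNN and RMSSD) from a short, made-up RR interval series. This is an illustration of the definitions, not code from Kubios.

        import numpy as np

        # RR intervals in milliseconds (synthetic example data).
        rr = np.array([812, 798, 840, 825, 810, 795, 830, 845, 820, 805], float)

        sdnn = rr.std(ddof=1)                        # SDNN: overall variability
        rmssd = np.sqrt(np.mean(np.diff(rr) ** 2))   # RMSSD: beat-to-beat variability
        mean_hr = 60_000.0 / rr.mean()               # mean heart rate (bpm)

        print(f"SDNN = {sdnn:.1f} ms, RMSSD = {rmssd:.1f} ms, HR = {mean_hr:.1f} bpm")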

  16. Low-noise encoding of active touch by layer 4 in the somatosensory cortex.

    PubMed

    Hires, Samuel Andrew; Gutnisky, Diego A; Yu, Jianing; O'Connor, Daniel H; Svoboda, Karel

    2015-08-06

    Cortical spike trains often appear noisy, with the timing and number of spikes varying across repetitions of stimuli. Spiking variability can arise from internal (behavioral state, unreliable neurons, or chaotic dynamics in neural circuits) and external (uncontrolled behavior or sensory stimuli) sources. The amount of irreducible internal noise in spike trains, an important constraint on models of cortical networks, has been difficult to estimate, since behavior and brain state must be precisely controlled or tracked. We recorded from excitatory barrel cortex neurons in layer 4 during active behavior, where mice control tactile input through learned whisker movements. Touch was the dominant sensorimotor feature, with >70% spikes occurring in millisecond timescale epochs after touch onset. The variance of touch responses was smaller than expected from Poisson processes, often reaching the theoretical minimum. Layer 4 spike trains thus reflect the millisecond-timescale structure of tactile input with little noise.

  17. Computational methods in the development of a knowledge-based system for the prediction of solid catalyst performance.

    PubMed

    Procelewska, Joanna; Galilea, Javier Llamas; Clerc, Frederic; Farrusseng, David; Schüth, Ferdi

    2007-01-01

    The objective of this work is the construction of a correlation between characteristics of heterogeneous catalysts, encoded in a descriptor vector, and their experimentally measured performances in the propene oxidation reaction. In this paper the key issue in the modeling process, namely the selection of adequate input variables, is explored. Several data-driven feature selection strategies were applied in order to obtain an estimate of the differences in variance and information content of various attributes, and to compare their relative importance. Quantitative property-activity relationship techniques using probabilistic neural networks were used to create various semi-empirical models. Finally, a robust classification model was obtained that assigns selected attributes of solid compounds, taken as input, to an appropriate performance class in the model reaction. The results make clear that mathematical support for the primary attribute set proposed by chemists is highly desirable.

  18. Community temporal variability increases with fluctuating resource availability

    PubMed Central

    Li, Wei; Stevens, M. Henry H.

    2017-01-01

    An increase in the quantity of available resources is known to affect the temporal variability of aggregate community properties. However, it is unclear how fluctuations in resource availability might alter community-level temporal variability. Here we conduct a microcosm experiment with a laboratory protist community subjected to manipulated resource pulses that vary in intensity, duration, and time of supply, and examine the impact of fluctuating resource availability on the temporal variability of the recipient community. The results showed that the temporal variation of total protist abundance increased with the magnitude of resource pulses: a protist community receiving infrequent resource pulses (i.e., high-magnitude nutrients per pulse) was relatively more unstable than a community receiving multiple resource pulses (i.e., low-magnitude nutrients per pulse), although the same total amounts of nutrients were added to each community. Meanwhile, the timing of fluctuating resources did not significantly alter community temporal variability. Further analysis showed that fluctuating resource availability increased community temporal variability by increasing the degree of community-wide species synchrony and decreasing the stabilizing effects of dominant species. Hence, the importance of fluctuating resource availability in influencing community stability, and the regulatory mechanisms involved, merit more attention, especially when global ecosystems are experiencing high rates of anthropogenic nutrient inputs. PMID:28345592

  19. Community temporal variability increases with fluctuating resource availability

    NASA Astrophysics Data System (ADS)

    Li, Wei; Stevens, M. Henry H.

    2017-03-01

    An increase in the quantity of available resources is known to affect the temporal variability of aggregate community properties. However, it is unclear how fluctuations in resource availability might alter community-level temporal variability. Here we conduct a microcosm experiment with a laboratory protist community subjected to manipulated resource pulses that vary in intensity, duration, and time of supply, and examine the impact of fluctuating resource availability on the temporal variability of the recipient community. The results showed that the temporal variation of total protist abundance increased with the magnitude of resource pulses: a protist community receiving infrequent resource pulses (i.e., high-magnitude nutrients per pulse) was relatively more unstable than a community receiving multiple resource pulses (i.e., low-magnitude nutrients per pulse), although the same total amounts of nutrients were added to each community. Meanwhile, the timing of fluctuating resources did not significantly alter community temporal variability. Further analysis showed that fluctuating resource availability increased community temporal variability by increasing the degree of community-wide species synchrony and decreasing the stabilizing effects of dominant species. Hence, the importance of fluctuating resource availability in influencing community stability, and the regulatory mechanisms involved, merit more attention, especially when global ecosystems are experiencing high rates of anthropogenic nutrient inputs.

  20. Verification of models for ballistic movement time and endpoint variability.

    PubMed

    Lin, Ray F; Drury, Colin G

    2013-01-01

    A hand control movement is composed of several ballistic movements. The time required to perform a ballistic movement and its endpoint variability are two important properties in developing movement models. The purpose of this study was to test potential models for predicting these two properties. Twelve participants conducted ballistic movements of specific amplitudes using a drawing tablet. The measured data on movement time and endpoint variability were then used to verify the models. The study was successful, with Hoffmann and Gan's movement time model (Hoffmann, 1981; Gan and Hoffmann, 1988) predicting more than 90.7% of the data variance across 84 individual measurements. A new, theoretically developed ballistic movement variability model proved to be better than Howarth, Beggs, and Bowden's (1971) model, predicting on average 84.8% of the stopping-variable error and 88.3% of the aiming-variable error. These two validated models will help build solid theoretical movement models and evaluate input devices. This article provides better models for predicting the end accuracy and movement time of ballistic movements, which are desirable in rapid aiming tasks such as keying in numbers on a smart phone. The models allow better design of aiming tasks, for example button sizes on mobile phones for different user populations.
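
    Gan and Hoffmann (1988) model ballistic movement time as linear in the square root of movement amplitude, MT = a + b * sqrt(A). The sketch below fits that form by ordinary least squares; the data points are synthetic and for illustration only.

        import numpy as np

        amplitude = np.array([20., 40., 80., 160., 320.])    # movement amplitude (mm)
        mt = np.array([142., 168., 199., 245., 298.])        # movement time (ms), synthetic

        # Design matrix for MT = a + b * sqrt(A), solved by least squares.
        X = np.column_stack([np.ones_like(amplitude), np.sqrt(amplitude)])
        (a, b), *_ = np.linalg.lstsq(X, mt, rcond=None)

        pred = X @ np.array([a, b])
        r2 = 1 - np.sum((mt - pred) ** 2) / np.sum((mt - mt.mean()) ** 2)
        print(f"MT = {a:.1f} + {b:.2f} * sqrt(A)   (R^2 = {r2:.3f})")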

  1. Mitigating climate change through managing constructed-microbial communities in agriculture

    DOE PAGES

    Hamilton, Cyd E.; Bever, James D.; Labbe, Jessy; ...

    2015-10-27

    The importance of increasing crop production while reducing resource inputs and land-use change cannot be overstated, especially in light of climate change and a human population projected to reach nine billion this century. Here, mutualistic plant-microbe interactions offer a novel approach to enhancing agricultural productivity while reducing environmental costs. In concert with other novel agronomic technologies and management, plant-microbial mutualisms could help increase crop production and reduce yield losses by improving resistance and/or resilience to edaphic, biologic, and climatic variability from both bottom-up and top-down perspectives.

  2. Mitigating climate change through managing constructed-microbial communities in agriculture

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hamilton, Cyd E.; Bever, James D.; Labbe, Jessy

    The importance of increasing crop production while reducing resource inputs and land-use change cannot be overstated, especially in light of climate change and a human population projected to reach nine billion this century. Here, mutualistic plant-microbe interactions offer a novel approach to enhancing agricultural productivity while reducing environmental costs. In concert with other novel agronomic technologies and management, plant-microbial mutualisms could help increase crop production and reduce yield losses by improving resistance and/or resilience to edaphic, biologic, and climatic variability from both bottom-up and top-down perspectives.

  3. Model driven mobile care for patients with type 1 diabetes.

    PubMed

    Skrøvseth, Stein Olav; Arsand, Eirik; Godtliebsen, Fred; Joakimsen, Ragnar M

    2012-01-01

    We gathered a data set from 30 patients with type 1 diabetes by giving the patients a mobile phone application in which they recorded blood glucose measurements, insulin injections, meals, and physical activity. Using these data as a learning data set, we describe a new approach to building a mobile feedback system for these patients based on periodicities, pattern recognition, and scale-space trends. Most patients exhibit important periodicities and trends, though better resolution of the input variables is needed to provide useful feedback using pattern recognition.
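
    The abstract does not specify the periodicity-detection algorithm; one common choice for a regularly resampled glucose series is the autocorrelation function. A sketch along those lines, with synthetic data and an illustrative 12-hour minimum lag:

        import numpy as np

        def dominant_period(values, sample_hours=1.0, max_lag_hours=48):
            """Find the strongest periodicity in a regularly resampled series
            via the autocorrelation function (one common choice; the paper's
            exact method is not given in the abstract)."""
            x = np.asarray(values, dtype=float)
            x = x - x.mean()
            max_lag = int(max_lag_hours / sample_hours)
            acf = np.array([np.corrcoef(x[:-k], x[k:])[0, 1]
                            for k in range(1, max_lag)])
            skip = int(12 / sample_hours)          # ignore trivially short lags
            best = np.argmax(acf[skip:]) + skip + 1
            return best * sample_hours             # period in hours

        rng = np.random.default_rng(1)
        t = np.arange(0, 14 * 24)                  # two weeks of hourly values
        glucose = 6 + 1.5 * np.sin(2 * np.pi * t / 24) + rng.normal(0, 0.5, t.size)
        print(dominant_period(glucose))            # ~24 h circadian pattern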

  4. Computer modeling and simulation of human movement. Applications in sport and rehabilitation.

    PubMed

    Neptune, R R

    2000-05-01

    Computer modeling and simulation of human movement play an increasingly important role in sport and rehabilitation, with applications ranging from sport equipment design to understanding pathologic gait. The complex dynamic interactions within the musculoskeletal and neuromuscular systems make analyzing human movement with existing experimental techniques difficult, but computer modeling and simulation allow for the identification of these complex interactions and of causal relationships between input and output variables. This article provides an overview of computer modeling and simulation and presents an example application in the field of rehabilitation.

  5. Solar and atmospheric forcing on mountain lakes.

    PubMed

    Luoto, Tomi P; Nevalainen, Liisa

    2016-10-01

    We investigated the influence of long-term external forcing on aquatic communities in Alpine lakes. Fossil microcrustacean (Cladocera) and macrobenthos (Chironomidae) community variability in four Austrian high-altitude lakes, identified as ultra-sensitive to climate change, was compared against records of air temperature, the North Atlantic Oscillation (NAO) and solar forcing over the past ~400 years. Summer temperature variability affected both aquatic invertebrate groups at all study sites. The influence of NAO and solar forcing on aquatic invertebrates was also significant in all lakes except the less transparent lake, which is known to have remained uniformly cold during the past centuries owing to summertime snowmelt input. The results suggest that external forcing plays an important role in these pristine ecosystems through its impacts on the limnology of the lakes. Not only does air temperature variability influence the communities, but larger-scale external factors related to atmospheric circulation patterns and solar activity also cause long-term changes in high-altitude aquatic ecosystems, through their connections to hydroclimatic conditions and the light environment. These findings are important for assessing climate change impacts on aquatic ecosystems and for a greater understanding of the consequences of external forcing on lake ontogeny.

  6. Regulation of CO2 Air Sea Fluxes by Sediments in the North Sea

    NASA Astrophysics Data System (ADS)

    Burt, William; Thomas, Helmuth; Hagens, Mathilde; Brenner, Heiko; Pätsch, Johannes; Clargo, Nicola; Salt, Lesley

    2016-04-01

    A multi-tracer approach is applied to assess the impact of boundary fluxes (e.g. benthic input from sediments or lateral inputs from the coastline) on the acid-base buffering capacity, and overall biogeochemistry, of the North Sea. Analyses of both basin-wide observations in the North Sea and transects through tidal basins at the North-Frisian coastline reveal that surface distributions of the δ13C signature of dissolved inorganic carbon (DIC) are predominantly controlled by a balance between biological production and respiration. In particular, variability in metabolic DIC throughout stations in the well-mixed southern North Sea indicates the presence of an external carbon source, which is traced to the European continental coastline using naturally occurring radium isotopes (224Ra and 228Ra). 228Ra is also shown to be a highly effective tracer of North Sea total alkalinity (AT) compared to the more conventional use of salinity. Coastal inputs of metabolic DIC and AT are calculated on a basin-wide scale, and ratios of these inputs suggest denitrification as a primary metabolic pathway for their formation. The AT input paralleling the metabolic DIC release prevents a significant decline in pH as compared to aerobic (i.e. unbuffered) release of metabolic DIC. Finally, long-term pH trends mimic those of riverine nitrate loading, highlighting the importance of coastal AT production via denitrification in regulating pH in the southern North Sea.

  7. Symbolic PathFinder: Symbolic Execution of Java Bytecode

    NASA Technical Reports Server (NTRS)

    Pasareanu, Corina S.; Rungta, Neha

    2010-01-01

    Symbolic Pathfinder (SPF) combines symbolic execution with model checking and constraint solving for automated test case generation and error detection in Java programs with unspecified inputs. In this tool, programs are executed on symbolic inputs representing multiple concrete inputs. Values of variables are represented as constraints generated from the analysis of Java bytecode. The constraints are solved using off-the-shelf solvers to generate test inputs guaranteed to achieve complex coverage criteria. SPF has been used successfully at NASA, in academia, and in industry.
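
    SPF itself works on Java bytecode, but the core idea (accumulate a path condition over symbolic inputs, then hand it to an off-the-shelf solver to obtain a concrete test input) can be illustrated with the Z3 solver's Python bindings. This is a toy stand-in, not SPF's actual API:

        from z3 import Int, Solver, And, sat  # pip install z3-solver

        # Program under test (conceptually):
        #   if (x > 10) { if (x * 2 < 30) { /* target branch */ } }
        # Symbolic execution tracks x symbolically, accumulates the path
        # condition for the target branch, and asks the solver for a
        # concrete input that reaches it.
        x = Int("x")
        path_condition = And(x > 10, x * 2 < 30)

        s = Solver()
        s.add(path_condition)
        if s.check() == sat:
            print("test input reaching target branch:", s.model()[x])  # 11..14
        else:
            print("branch is infeasible")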

  8. New features in the design code Tlie

    NASA Astrophysics Data System (ADS)

    van Zeijts, Johannes

    1993-12-01

    We present features recently installed in the arbitrary-order accelerator design code Tlie. The code uses the MAD input language and implements programmable extensions modeled after the C language, making it a powerful tool in a wide range of applications: from basic beamline design to high-precision, high-order design and even control room applications. The basic quantities important in accelerator design are easily accessible from inside the control language. Entities such as element parameters (strength, current), transfer maps (either as Taylor series or in Lie algebraic form), lines, and beams (either as sets of particles or as distributions) are among the types of variables available. These variables can be set, used as arguments in subroutines, or simply typed out. The code is easily extensible with new datatypes.

  9. Semi-arid vegetation response to antecedent climate and water balance windows

    USGS Publications Warehouse

    Thoma, David P.; Munson, Seth M.; Irvine, Kathryn M.; Witwicki, Dana L.; Bunting, Erin

    2016-01-01

    Questions: Can we improve understanding of vegetation response to water availability on monthly time scales in semi-arid environments using remote sensing methods? What climatic or water balance variables, and what antecedent windows of time associated with these variables, best relate to the condition of vegetation? Can we develop credible near-term forecasts from climate data that can be used to prepare for future climate change effects on vegetation? Location: Semi-arid grasslands in Capitol Reef National Park, Utah, USA. Methods: We built vegetation response models by relating the normalized difference vegetation index (NDVI) from MODIS imagery in Mar–Nov 2000–2013 to antecedent climate and water balance variables preceding the monthly NDVI observations. We compared how climate and water balance variables explained vegetation greenness and then used a multi-model ensemble of climate and water balance models to forecast monthly NDVI for three holdout years. Results: Water balance variables explained vegetation greenness to a greater degree than climate variables for most growing season months. Seasonally important variables included measures of antecedent water input and storage in spring, switching to indicators of drought, input or use in summer, followed by antecedent moisture availability in autumn. In spite of similar climates, there was evidence that the grazed grassland responded to drying conditions 1 mo sooner than the ungrazed grassland. Lead times were generally short early in the growing season, and antecedent window durations increased from 3 mo early in the growing season to 1 yr or more as the growing season progressed. Forecast accuracy for three holdout years using a multi-model ensemble of climate and water balance variables outperformed forecasts made with a naïve NDVI climatology. Conclusions: We determined the influence of climate and water balance on vegetation at a fine temporal scale, which presents an opportunity to forecast vegetation response with short lead times. This understanding was obtained through high-frequency vegetation monitoring using remote sensing, which reduces the costs and time necessary for field measurements and can lead to more rapid detection of vegetation changes that could help managers take appropriate actions.
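
    The antecedent-window idea generalizes readily: aggregate a driver over a candidate window, lag it relative to the monthly NDVI observation, and keep the window/lag pair that explains the most variance. A simplified sketch with synthetic data and hypothetical variable names (the study's actual predictor set and model ensemble are richer):

        import numpy as np
        import pandas as pd
        from sklearn.linear_model import LinearRegression

        rng = np.random.default_rng(2)
        months = pd.date_range("2000-03-01", "2013-11-01", freq="MS")
        df = pd.DataFrame({"precip": rng.gamma(2, 20, len(months))}, index=months)
        # Synthetic NDVI driven by 3-month antecedent precipitation, lagged 1 month
        df["ndvi"] = (0.2 + 0.002 * df["precip"].rolling(3).mean().shift(1)
                      + rng.normal(0, 0.01, len(df)))

        best = None
        for window in (1, 3, 6, 12):       # antecedent window length in months
            for lag in (0, 1, 2, 3):       # lead time before the NDVI observation
                feat = df["precip"].rolling(window).mean().shift(lag)
                d = pd.concat([feat.rename("x"), df["ndvi"]], axis=1).dropna()
                # In-sample R^2 for brevity; the study scored holdout years
                r2 = LinearRegression().fit(d[["x"]], d["ndvi"]).score(d[["x"]], d["ndvi"])
                if best is None or r2 > best[0]:
                    best = (r2, window, lag)

        print("best R^2 %.2f with %d-month window at %d-month lag" % best)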

  10. Delineation of marine ecosystem zones in the northern Arabian Sea during winter

    NASA Astrophysics Data System (ADS)

    Shalin, Saleem; Samuelsen, Annette; Korosov, Anton; Menon, Nandini; Backeberg, Björn C.; Pettersson, Lasse H.

    2018-03-01

    The spatial and temporal variability of marine autotrophic abundance, expressed as chlorophyll concentration, is monitored from space and used to delineate the surface signature of marine ecosystem zones with distinct optical characteristics. An objective zoning method is presented and applied to satellite-derived Chlorophyll a (Chl a) data from the northern Arabian Sea (50-75° E and 15-30° N) during the winter months (November-March). Principal component analysis (PCA) and cluster analysis (CA) were used to statistically delineate the Chl a into zones with similar surface distribution patterns and temporal variability. The PCA identifies principal components of variability and the CA splits these into zones based on similar characteristics. Based on the temporal variability of the Chl a pattern within the study area, the statistical clustering revealed six distinct ecological zones. The obtained zones are related to the Longhurst provinces to evaluate how they compare to established ecological provinces. The Chl a variability within each zone was then compared with the variability of oceanic and atmospheric properties, viz. mixed-layer depth (MLD), wind speed, sea-surface temperature (SST), photosynthetically active radiation (PAR), nitrate and dust optical thickness (DOT), the latter as an indication of atmospheric input of iron to the ocean. The analysis showed that in all zones, peak values of Chl a coincided with low SST and deep MLD. The rate of decrease in SST and the deepening of the MLD are observed to trigger the algal bloom events in the first four zones. Lagged cross-correlation analysis shows that peak Chl a follows peak MLD and the SST minimum. The MLD time lag is shorter than the SST lag by 8 days, indicating that cool surface conditions might have enhanced mixing, leading to increased primary production in the study area. An analysis of monthly climatological nitrate values showed increased concentrations associated with the deepening of the mixed layer. The input of iron seems to be important in both the open-ocean and coastal areas of the northern and north-western parts of the northern Arabian Sea, where the seasonal variability of the Chl a pattern closely follows the variability of iron deposition.
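
    The zoning procedure (compress each pixel's Chl a time series with PCA, then cluster the component scores) can be sketched as follows. The data are synthetic and the preprocessing choices (log transform, five components) are assumptions; only the six-zone count is taken from the abstract:

        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.cluster import KMeans

        # Each ocean pixel is a time series of Chl a; PCA compresses the
        # temporal variability and k-means groups pixels into zones.
        rng = np.random.default_rng(3)
        n_pixels, n_months = 5000, 75           # e.g. Nov-Mar over 15 winters
        chl = rng.lognormal(mean=0.0, sigma=0.3, size=(n_pixels, n_months))

        scores = PCA(n_components=5).fit_transform(np.log(chl))
        zones = KMeans(n_clusters=6, n_init=10, random_state=0).fit_predict(scores)
        print(np.bincount(zones))               # pixel count per ecological zone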

  11. Optimisation of Fabric Reinforced Polymer Composites Using a Variant of Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    Axinte, Andrei; Taranu, Nicolae; Bejan, Liliana; Hudisteanu, Iuliana

    2017-12-01

    Fabric reinforced polymeric composites are high-performance materials with a rather complex fabric geometry. Modelling this type of material is therefore a cumbersome task, especially when efficient use is targeted. One of the most important issues in the design process is the optimisation of the individual laminae and of the laminated structure as a whole. To this end, a parametric model of the material has been defined, emphasising the many geometric variables that need to be correlated in the complex process of optimisation. The input parameters involved in this work include the widths and heights of the tows and the laminate stacking sequence, which are discrete variables, and the gaps between adjacent tows and the height of the neat matrix, which are continuous variables. This work is one of the first attempts to use a Genetic Algorithm (GA) to optimise the geometrical parameters of satin-reinforced multi-layer composites. Given the mixed type of the input parameters involved, an original software package called SOMGA (Satin Optimisation with a Modified Genetic Algorithm) has been conceived and utilised in this work. The main goal is to find the best possible solution to the problem of designing a composite material that is able to withstand a given set of external, in-plane loads. The optimisation process has been performed using a fitness function that analyses and compares the mechanical behaviour of different fabric reinforced composites, the results being correlated with the ultimate strains, which demonstrates the efficiency of the composite structure.
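
    SOMGA is original software and its internals are not given in the abstract; the sketch below only illustrates the general mechanics of a genetic algorithm over mixed discrete and continuous genes, with a placeholder fitness function standing in for the laminate mechanics:

        import random

        # Gene sets are illustrative stand-ins for the paper's parameters:
        # discrete (tow sizes, stacking sequence) and continuous (gaps, matrix height).
        DISCRETE = {"tow_width": [1, 2, 3], "stacking": ["0/90", "45/-45", "0/45/90"]}
        CONTINUOUS = {"gap": (0.1, 1.0), "matrix_h": (0.05, 0.5)}

        def random_individual():
            ind = {k: random.choice(v) for k, v in DISCRETE.items()}
            ind.update({k: random.uniform(*b) for k, b in CONTINUOUS.items()})
            return ind

        def fitness(ind):  # placeholder objective, not laminate mechanics
            return -(ind["gap"] - 0.3) ** 2 - ind["matrix_h"] + ind["tow_width"] * 0.01

        def mutate(ind):
            child = dict(ind)
            k = random.choice(list(child))
            if k in DISCRETE:
                child[k] = random.choice(DISCRETE[k])     # resample discrete gene
            else:
                lo, hi = CONTINUOUS[k]                    # perturb continuous gene
                child[k] = min(hi, max(lo, child[k] + random.gauss(0, 0.1 * (hi - lo))))
            return child

        pop = [random_individual() for _ in range(30)]
        for _ in range(50):                               # generations
            pop.sort(key=fitness, reverse=True)
            pop = pop[:10] + [mutate(random.choice(pop[:10])) for _ in range(20)]
        print(max(pop, key=fitness))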

  12. Generation of Near-Inertial Currents on the Mid-Atlantic Bight by Hurricane Arthur (2014)

    NASA Astrophysics Data System (ADS)

    Zhang, Fan; Li, Ming; Miles, Travis

    2018-04-01

    Near-inertial currents (NICs) were observed on the Mid-Atlantic Bight (MAB) during the passage of Hurricane Arthur (2014). High-frequency radars showed that the surface currents were weak near the coast but increased in the offshore direction. The NICs were damped out in 3-4 days in the southern MAB but persisted for up to 10 days in the northern MAB. A Slocum glider deployed on the shelf recorded two-layer baroclinic currents oscillating at the inertial frequency. A numerical model was developed to interpret the observed spatial and temporal variabilities of the NICs and their vertical modal structure. Energy budget analysis showed that most of the differences in the NICs between the shelf and the deep ocean were determined by the spatial variations in wind energy input. In the southern MAB, energy dissipation quickly balanced the wind energy input, causing a rapid damping of the NICs. In the northern MAB, however, the dissipation lagged the wind energy input such that the NICs persisted. The model further showed that mode-1 waves dominated throughout the MAB shelf and accounted for over 70% of the current variability in the NICs. Rotary spectrum analyses revealed that the NICs were the largest component of the total kinetic energy except in the southern MAB and the inner shelf regions with strong tides. The NICs were also a major contributor to the shear spectrum over an extensive area of the MAB shelf and thus may play an important role in producing turbulent mixing and cooling of the surface mixed layer.

  13. Travel Time Estimation Using Freeway Point Detector Data Based on Evolving Fuzzy Neural Inference System.

    PubMed

    Tang, Jinjun; Zou, Yajie; Ash, John; Zhang, Shen; Liu, Fang; Wang, Yinhai

    2016-01-01

    Travel time is an important measurement used to evaluate the extent of congestion within road networks. This paper presents a new method to estimate the travel time based on an evolving fuzzy neural inference system. The input variables in the system are traffic flow data (volume, occupancy, and speed) collected from loop detectors located at points both upstream and downstream of a given link, and the output variable is the link travel time. A first order Takagi-Sugeno fuzzy rule set is used to complete the inference. For training the evolving fuzzy neural network (EFNN), two learning processes are proposed: (1) a K-means method is employed to partition input samples into different clusters, and a Gaussian fuzzy membership function is designed for each cluster to measure the membership degree of samples to the cluster centers. As the number of input samples increases, the cluster centers are modified and membership functions are also updated; (2) a weighted recursive least squares estimator is used to optimize the parameters of the linear functions in the Takagi-Sugeno type fuzzy rules. Testing datasets consisting of actual and simulated data are used to test the proposed method. Three common criteria including mean absolute error (MAE), root mean square error (RMSE), and mean absolute relative error (MARE) are utilized to evaluate the estimation performance. Estimation results demonstrate the accuracy and effectiveness of the EFNN method through comparison with existing methods including: multiple linear regression (MLR), instantaneous model (IM), linear model (LM), neural network (NN), and cumulative plots (CP).
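
    Two of the named building blocks can be shown compactly: a Gaussian membership function for cluster assignment, and a weighted recursive least squares (RLS) update for the Takagi-Sugeno consequent parameters. The following sketch uses illustrative symbols and synthetic detector-like data, not the paper's implementation:

        import numpy as np

        def gaussian_membership(x, center, sigma):
            """Membership degree of sample x to a cluster center."""
            return np.exp(-np.sum((x - center) ** 2) / (2 * sigma ** 2))

        def rls_update(theta, P, x, y, weight=1.0, lam=0.99):
            """One weighted RLS step: theta ~ a rule's linear coefficients,
            P ~ inverse correlation matrix, lam ~ forgetting factor."""
            x = x.reshape(-1, 1)
            k = P @ x / (lam / weight + x.T @ P @ x)   # gain vector
            theta = theta + (k * (y - x.T @ theta)).ravel()
            P = (P - k @ x.T @ P) / lam
            return theta, P

        d = 3                                          # volume, occupancy, speed
        theta, P = np.zeros(d), np.eye(d) * 1e3
        rng = np.random.default_rng(4)
        true_w = np.array([0.5, -1.2, 2.0])
        for _ in range(200):                           # streaming detector samples
            x = rng.normal(size=d)
            y = true_w @ x + rng.normal(0, 0.05)
            w = gaussian_membership(x, np.zeros(d), 2.0)
            theta, P = rls_update(theta, P, x, y, weight=w)
        print(np.round(theta, 2))                      # converges toward true_w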

  14. Travel Time Estimation Using Freeway Point Detector Data Based on Evolving Fuzzy Neural Inference System

    PubMed Central

    Tang, Jinjun; Zou, Yajie; Ash, John; Zhang, Shen; Liu, Fang; Wang, Yinhai

    2016-01-01

    Travel time is an important measurement used to evaluate the extent of congestion within road networks. This paper presents a new method to estimate the travel time based on an evolving fuzzy neural inference system. The input variables in the system are traffic flow data (volume, occupancy, and speed) collected from loop detectors located at points both upstream and downstream of a given link, and the output variable is the link travel time. A first order Takagi-Sugeno fuzzy rule set is used to complete the inference. For training the evolving fuzzy neural network (EFNN), two learning processes are proposed: (1) a K-means method is employed to partition input samples into different clusters, and a Gaussian fuzzy membership function is designed for each cluster to measure the membership degree of samples to the cluster centers. As the number of input samples increases, the cluster centers are modified and membership functions are also updated; (2) a weighted recursive least squares estimator is used to optimize the parameters of the linear functions in the Takagi-Sugeno type fuzzy rules. Testing datasets consisting of actual and simulated data are used to test the proposed method. Three common criteria including mean absolute error (MAE), root mean square error (RMSE), and mean absolute relative error (MARE) are utilized to evaluate the estimation performance. Estimation results demonstrate the accuracy and effectiveness of the EFNN method through comparison with existing methods including: multiple linear regression (MLR), instantaneous model (IM), linear model (LM), neural network (NN), and cumulative plots (CP). PMID:26829639

  15. Causes of Interannual Variability over the Southern Hemispheric Tropospheric Ozone Maximum

    NASA Technical Reports Server (NTRS)

    Liu, Junhua; Rodriguez, Jose M.; Steenrod, Stephen D.; Douglass, Anne R.; Logan, Jennifer A.; Olsen, Mark A.; Wargan, Krzysztof; Ziemke, Jerald R.

    2017-01-01

    We examine the relative contribution of processes controlling the interannual variability (IAV) of tropospheric ozone over four sub-regions of the southern hemispheric tropospheric ozone maximum (SHTOM) over a 20-year period. Our study is based on hindcast simulations from the National Aeronautics and Space Administration Global Modeling Initiative chemistry transport model (NASA GMI-CTM) of tropospheric and stratospheric chemistry, driven by assimilated Modern Era Retrospective Analysis for Research and Applications (MERRA) meteorological fields. Our analysis shows that over the SHTOM region, the IAV of the stratospheric contribution is the most important factor driving the IAV of upper tropospheric ozone (270 hectopascals), where ozone has a strong radiative effect. Over the South Atlantic region, the contribution from surface emissions to the IAV of ozone exceeds that from stratospheric input at and below 430 hectopascals. Over the South Indian Ocean, the IAV of stratospheric ozone makes the largest contribution to the IAV of ozone with little or no influence from surface emissions at 270 and 430 hectopascals in austral winter. Over the tropical South Atlantic region, the contribution from IAV of stratospheric input dominates in austral winter at 270 hectopascals and drops to less than half but is still significant at 430 hectopascals. Emission contributions are not significant at these two levels. The IAV of lightning over this region also contributes to the IAV of ozone in September and December. Over the tropical southeastern Pacific, the contribution of the IAV of stratospheric input is significant at 270 and 430 hectopascals in austral winter, and emissions have little influence.

  16. Causes of interannual variability over the southern hemispheric tropospheric ozone maximum

    NASA Astrophysics Data System (ADS)

    Liu, Junhua; Rodriguez, Jose M.; Steenrod, Stephen D.; Douglass, Anne R.; Logan, Jennifer A.; Olsen, Mark A.; Wargan, Krzysztof; Ziemke, Jerald R.

    2017-03-01

    We examine the relative contribution of processes controlling the interannual variability (IAV) of tropospheric ozone over four sub-regions of the southern hemispheric tropospheric ozone maximum (SHTOM) over a 20-year period. Our study is based on hindcast simulations from the National Aeronautics and Space Administration Global Modeling Initiative chemistry transport model (NASA GMI-CTM) of tropospheric and stratospheric chemistry, driven by assimilated Modern Era Retrospective Analysis for Research and Applications (MERRA) meteorological fields. Our analysis shows that over the SHTOM region, the IAV of the stratospheric contribution is the most important factor driving the IAV of upper tropospheric ozone (270 hPa), where ozone has a strong radiative effect. Over the South Atlantic region, the contribution from surface emissions to the IAV of ozone exceeds that from stratospheric input at and below 430 hPa. Over the South Indian Ocean, the IAV of stratospheric ozone makes the largest contribution to the IAV of ozone with little or no influence from surface emissions at 270 and 430 hPa in austral winter. Over the tropical South Atlantic region, the contribution from IAV of stratospheric input dominates in austral winter at 270 hPa and drops to less than half but is still significant at 430 hPa. Emission contributions are not significant at these two levels. The IAV of lightning over this region also contributes to the IAV of ozone in September and December. Over the tropical southeastern Pacific, the contribution of the IAV of stratospheric input is significant at 270 and 430 hPa in austral winter, and emissions have little influence.

  17. Soft sensor modeling based on variable partition ensemble method for nonlinear batch processes

    NASA Astrophysics Data System (ADS)

    Wang, Li; Chen, Xiangguang; Yang, Kai; Jin, Huaiping

    2017-01-01

    Batch processes are characterized by nonlinear and uncertain system properties; a conventional single model may therefore be ill-suited. A local-learning soft sensor based on a variable partition ensemble method is developed for quality prediction in nonlinear and non-Gaussian batch processes. A set of input variable subsets is obtained by bootstrapping and the PMI criterion. Multiple local GPR models are then developed, one for each local input variable set. When new test data arrive, the posterior probability of each best-performing local model is estimated by Bayesian inference and used to combine the local GPR models into the final prediction. The proposed soft sensor is demonstrated by application to an industrial fed-batch chlortetracycline fermentation process.
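
    A minimal sketch of the ensemble idea, with scikit-learn GPR models trained on different input-variable subsets and combined by predictive-precision weights as a simple stand-in for the Bayesian posterior weighting described above (the variable subsets and data are synthetic):

        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF, WhiteKernel

        rng = np.random.default_rng(5)
        X = rng.normal(size=(120, 4))
        y = np.sin(X[:, 0]) + 0.5 * X[:, 2] + rng.normal(0, 0.1, 120)

        subsets = [[0, 1], [0, 2], [2, 3]]      # bootstrapped variable sets
        models = [GaussianProcessRegressor(RBF() + WhiteKernel()).fit(X[:, s], y)
                  for s in subsets]

        x_new = rng.normal(size=(1, 4))
        mu, sd = zip(*[m.predict(x_new[:, s], return_std=True)
                       for m, s in zip(models, subsets)])
        mu, sd = np.ravel(mu), np.ravel(sd)
        w = 1 / sd**2                           # precision-based combination weights
        w /= w.sum()
        print("combined prediction:", float(w @ mu))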

  18. Comparison of hybrid spectral-decomposition artificial neural network models for understanding climatic forcing of groundwater levels

    NASA Astrophysics Data System (ADS)

    Abrokwah, K.; O'Reilly, A. M.

    2017-12-01

    Groundwater is an important resource that is extracted every day for domestic, industrial and agricultural purposes. The need to sustain groundwater resources is clearly indicated by declining water levels and has motivated modeling and forecasting of groundwater levels. In this study, spectral decomposition of climatic forcing time series was used to develop hybrid wavelet analysis (WA) and moving window average (MWA) artificial neural network (ANN) models. These techniques are explored by modeling historical groundwater levels in order to shed light on potential causes of the observed groundwater-level fluctuations. Selecting the appropriate decomposition level for WA and window size for MWA helps identify the important time scales at which climatic forcing, such as rainfall, influences water levels. The discrete wavelet transform (DWT) is used to decompose the input time-series data into approximation and detail wavelet coefficients at various levels, whilst the MWA acts as a low-pass filter that removes high-frequency signals from the input data. The variables used to develop and validate the models were daily average rainfall measurements from five National Oceanic and Atmospheric Administration (NOAA) weather stations and daily water-level measurements from two wells recorded from 1978 to 2008 in central Florida, USA. Using different decomposition levels and different window sizes, several WA-ANN and MWA-ANN models for simulating the water levels were created and their relative performances compared. The WA-ANN models performed better than the corresponding MWA-ANN models, and higher DWT decomposition levels of the input signal gave the best results. The results demonstrate the applicability and feasibility of hybrid WA-ANN and MWA-ANN models for simulating daily water levels using only climatic forcing time series as model inputs.
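
    The two input pre-processing schemes are easy to demonstrate, e.g. with the PyWavelets package for the DWT; the wavelet family, decomposition level, and window size below are illustrative choices, not the study's calibrated settings:

        import numpy as np
        import pywt  # pip install PyWavelets

        rng = np.random.default_rng(6)
        rain = rng.gamma(0.3, 8.0, size=2048)     # daily rainfall (synthetic)

        # WA inputs: multi-level decomposition into approximation + details
        coeffs = pywt.wavedec(rain, "db4", level=4)   # [cA4, cD4, cD3, cD2, cD1]
        print([c.shape[0] for c in coeffs])

        # MWA inputs: moving window average (low-pass filter)
        window = 30
        mwa = np.convolve(rain, np.ones(window) / window, mode="valid")
        print(mwa.shape)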

  19. Influence of Coastal Submarine Groundwater Discharges on Seagrass Communities in a Subtropical Karstic Environment.

    PubMed

    Kantún-Manzano, C A; Herrera-Silveira, J A; Arcega-Cabrera, F

    2018-01-01

    The influence of coastal submarine groundwater discharges (SGD) on the distribution and abundance of seagrass meadows was investigated. In 2012, hydrological variability, nutrient variability in sediments and the biotic characteristics of two seagrass beds, one with SGD present and one without, were studied. Findings showed that SGD inputs were associated with a single dominant seagrass species. To explore this further, a generalized additive model (GAM) was used to relate seagrass biomass to environmental conditions (water and sediment variables). Salinity range (21-35.5 PSU) was the most influential variable (85%), explaining why Halodule wrightii was the sole plant species present at the SGD site. At the site without SGD, a GAM could not be fitted, since the environmental variables could not explain more than 60% of the total variance. This research shows the relevance of monitoring SGD inputs in coastal karstic areas, since they significantly affect the biotic characteristics of seagrass beds.
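
    As an illustration of the modelling step, a GAM of biomass against salinity and one other predictor can be fitted with the third-party pygam package (an assumption; the paper does not state its software), using synthetic data:

        import numpy as np
        from pygam import LinearGAM, s  # pip install pygam (assumed dependency)

        # Illustrative GAM in the spirit of the study: biomass as a smooth
        # function of salinity and a second environmental variable. Data are
        # synthetic; the paper's actual predictor set is larger.
        rng = np.random.default_rng(8)
        salinity = rng.uniform(21, 35.5, 200)     # PSU, range from the abstract
        nutrients = rng.uniform(0, 5, 200)
        biomass = (np.exp(-(salinity - 26) ** 2 / 20) * 50 + 2 * nutrients
                   + rng.normal(0, 2, 200))

        X = np.column_stack([salinity, nutrients])
        gam = LinearGAM(s(0) + s(1)).fit(X, biomass)
        gam.summary()    # per-term significance and explained deviance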

  20. Multiple-input multiple-output causal strategies for gene selection.

    PubMed

    Bontempi, Gianluca; Haibe-Kains, Benjamin; Desmedt, Christine; Sotiriou, Christos; Quackenbush, John

    2011-11-25

    Traditional strategies for selecting variables in high-dimensional classification problems aim to find sets of maximally relevant variables able to explain the target variations. Although these techniques may be effective in terms of generalization accuracy, they often do not reveal direct causes. This is essentially because high correlation (or relevance) does not imply causation. In this study, we show how to efficiently incorporate causal information into gene selection by moving from a single-input single-output to a multiple-input multiple-output setting. We show in a synthetic case study that a better prioritization of causal variables can be obtained by considering a relevance score that incorporates a causal term. In addition, we show in a meta-analysis of six publicly available breast cancer microarray datasets that the improvement also occurs in terms of accuracy. The biological interpretation of the results confirms the potential of a causal approach to gene selection. Integrating causal information into gene selection algorithms is effective both in terms of prediction accuracy and biological interpretation.

  1. Method and apparatus for smart battery charging including a plurality of controllers each monitoring input variables

    DOEpatents

    Hammerstrom, Donald J.

    2013-10-15

    A method for managing the charging and discharging of batteries wherein at least one battery is connected to a battery charger and the battery charger is connected to a power supply. A plurality of controllers in communication with one another is provided, each of the controllers monitoring a subset of input variables. A set of charging constraints may then be generated for each controller as a function of its subset of input variables. A set of objectives for each controller may also be generated. A preferred charge rate for each controller is generated as a function of the set of objectives, the charging constraints, or both. An actual charge rate is determined using an algorithm that accounts for the preferred charge rates of all controllers and does not violate any of the charging constraints. A current flow between the battery and the battery charger is then provided at the actual charge rate.
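
    One simple reading of the arbitration step, treating each controller's constraints as a feasible rate interval and clamping a consensus preferred rate into the intersection of those intervals, is sketched below; the patent does not mandate this exact scheme, and the controller names and numbers are hypothetical:

        def actual_charge_rate(controllers):
            """Arbitrate a charge rate across controllers: respect every
            controller's constraint interval, then clamp the mean of the
            preferred rates into the feasible band."""
            lo = max(c["min_rate"] for c in controllers)   # tightest lower bound
            hi = min(c["max_rate"] for c in controllers)   # tightest upper bound
            if lo > hi:
                raise ValueError("charging constraints are mutually infeasible")
            preferred = sum(c["preferred"] for c in controllers) / len(controllers)
            return min(max(preferred, lo), hi)

        controllers = [
            {"name": "thermal", "min_rate": 0.0, "max_rate": 4.0, "preferred": 3.0},
            {"name": "grid",    "min_rate": 0.0, "max_rate": 6.0, "preferred": 5.0},
            {"name": "battery", "min_rate": 0.5, "max_rate": 3.5, "preferred": 2.5},
        ]
        print(actual_charge_rate(controllers), "A")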

  2. Internal versus external controls on age variability: Definitions, origins and implications in a changing climate

    NASA Astrophysics Data System (ADS)

    Helton, A. M.; Poole, G. C.; Payn, R. A.; Izurieta, C.; Wright, M.; Bernhardt, E. S.; Stanford, J. A.

    2014-12-01

    The unsteadiness of stream water age is now well established, but the controls on the age dynamics, and the adequate representation and prediction of those dynamics, are not. A basic distinction can be made between internal variability that arises from changes in the proportions of flow moving through the diverse flow pathways of a hydrologic system, and external variability that arises from the stochasticity of inputs and outputs (such as precipitation and streamflow). In this talk I will show how these two types of age variability can be formally defined and distinguished within the framework of rank StorAge Selection (rSAS) functions. Internal variability implies variations in time in the rSAS function, while external variability does not. This leads naturally to the definition of several modes of internal variability, reflecting generic ways that system flowpaths may be rearranged. This rearrangement may be induced by fluctuations in the system state (such as catchment wetness), or by longer-term changes in catchment structure (such as land use change). One type of change, the 'inverse storage effect' is characterized by an increase in the release of young water from the system in response to an increase in overall system storage. This effect can be seen in many hydrologic settings, and has important implications for the effect of altered hydroclimatic conditions on solute transport through a landscape. External variability, such as increased precipitation, can induce a decrease in mean transit time (and vice versa), but this effect is greatly enhanced if accompanied by an internal shift in flow pathways that increases the relative importance of younger water. These effects will be illustrated using data from field and experimental studies.

  3. Internal versus external controls on age variability: Definitions, origins and implications in a changing climate

    NASA Astrophysics Data System (ADS)

    Harman, C. J.

    2015-12-01

    The unsteadiness of stream water age is now well established, but the controls on the age dynamics, and the adequate representation and prediction of those dynamics, are not. A basic distinction can be made between internal variability that arises from changes in the proportions of flow moving through the diverse flow pathways of a hydrologic system, and external variability that arises from the stochasticity of inputs and outputs (such as precipitation and streamflow). In this talk I will show how these two types of age variability can be formally defined and distinguished within the framework of rank StorAge Selection (rSAS) functions. Internal variability implies variations in time in the rSAS function, while external variability does not. This leads naturally to the definition of several modes of internal variability, reflecting generic ways that system flowpaths may be rearranged. This rearrangement may be induced by fluctuations in the system state (such as catchment wetness), or by longer-term changes in catchment structure (such as land use change). One type of change, the 'inverse storage effect' is characterized by an increase in the release of young water from the system in response to an increase in overall system storage. This effect can be seen in many hydrologic settings, and has important implications for the effect of altered hydroclimatic conditions on solute transport through a landscape. External variability, such as increased precipitation, can induce a decrease in mean transit time (and vice versa), but this effect is greatly enhanced if accompanied by an internal shift in flow pathways that increases the relative importance of younger water. These effects will be illustrated using data from field and experimental studies.

  4. Rain forest nutrient cycling and productivity in response to large-scale litter manipulation.

    PubMed

    Wood, Tana E; Lawrence, Deborah; Clark, Deborah A; Chazdon, Robin L

    2009-01-01

    Litter-induced pulses of nutrient availability could play an important role in the productivity and nutrient cycling of forested ecosystems, especially tropical forests. Tropical forests experience such pulses as a result of wet-dry seasonality and during major climatic events, such as strong El Niños. We hypothesized that (1) an increase in the quantity and quality of litter inputs would stimulate leaf litter production, woody growth, and leaf litter nutrient cycling, and (2) the timing and magnitude of this response would be influenced by soil fertility and forest age. To test these hypotheses in a Costa Rican wet tropical forest, we established a large-scale litter manipulation experiment in two secondary forest sites and four old-growth forest sites of differing soil fertility. In replicated plots at each site, leaves and twigs (< 2 cm diameter) were removed from a 400-m2 area and added to an adjacent 100-m2 area. This transfer was the equivalent of adding 5-25 kg/ha of organic P to the forest floor. We analyzed leaf litter mass, [N] and [P], and N and P inputs for addition, removal, and control plots over a two-year period. We also evaluated basal area increment of trees in removal and addition plots. There was no response of forest productivity or nutrient cycling to litter removal; however, litter addition significantly increased leaf litter production and N and P inputs 4-5 months following litter application. Litter production increased as much as 92%, and P and N inputs as much as 85% and 156%, respectively. In contrast, litter manipulation had no significant effect on woody growth. The increase in leaf litter production and N and P inputs were significantly positively related to the total P that was applied in litter form. Neither litter treatment nor forest type influenced the temporal pattern of any of the variables measured. Thus, environmental factors such as rainfall drive temporal variability in litter and nutrient inputs, while nutrient release from decomposing litter influences the magnitude. Seasonal or annual variation in leaf litter mass, such as occurs in strong El Niño events, could positively affect leaf litter nutrient cycling and forest productivity, indicating an ability of tropical trees to rapidly respond to increased nutrient availability.

  5. Differences in spike train variability in rat vasopressin and oxytocin neurons and their relationship to synaptic activity

    PubMed Central

    Li, Chunyan; Tripathi, Pradeep K; Armstrong, William E

    2007-01-01

    The firing pattern of magnocellular neurosecretory neurons is intimately related to hormone release, but the relative contribution of synaptic versus intrinsic factors to the temporal dispersion of spikes is unknown. In the present study, we examined the firing patterns of vasopressin (VP) and oxytocin (OT) supraoptic neurons in coronal slices from virgin female rats, with and without blockade of inhibitory and excitatory synaptic currents. Inhibitory postsynaptic currents (IPSCs) were twice as prevalent as their excitatory counterparts (EPSCs), and both were more prevalent in OT compared with VP neurons. Oxytocin neurons fired more slowly and irregularly than VP neurons near threshold. Blockade of Cl− currents (including tonic and synaptic currents) with picrotoxin reduced interspike interval (ISI) variability of continuously firing OT and VP neurons without altering input resistance or firing rate. Blockade of EPSCs did not affect firing pattern. Phasic bursting neurons (putative VP neurons) were inconsistently affected by broad synaptic blockade, suggesting that intrinsic factors may dominate the ISI distribution during this mode in the slice. Specific blockade of synaptic IPSCs with gabazine also reduced ISI variability, but only in OT neurons. In all cases, the effect of inhibitory blockade on firing pattern was independent of any consistent change in input resistance or firing rate. Since the great majority of IPSCs are randomly distributed, miniature events (mIPSCs) in the coronal slice, these findings imply that even mIPSCs can impart irregularity to the firing pattern of OT neurons in particular, and could be important in regulating spike patterning in vivo. For example, the increased firing variability that precedes bursting in OT neurons during lactation could be related to significant changes in synaptic activity. PMID:17332000

  6. "Development of an interactive crop growth web service architecture to review and forecast agricultural sustainability"

    NASA Astrophysics Data System (ADS)

    Seamon, E.; Gessler, P. E.; Flathers, E.; Walden, V. P.

    2014-12-01

    As climate change and weather variability raise issues regarding agricultural production, agricultural sustainability has become an increasingly important component of farmland management (Fisher, 2005; Akinci, 2013). Yet with changes in soil quality, agricultural practices, weather, topography, land use, and hydrology, accurately modeling such agricultural outcomes has proven difficult (Gassman et al., 2007; Williams et al., 1995). This study examined agricultural sustainability and soil health over a heterogeneous multi-watershed area within the Inland Pacific Northwest of the United States (IPNW), as part of a five-year, USDA-funded effort to explore the sustainability of cereal production systems (Regional Approaches to Climate Change for Pacific Northwest Agriculture - award #2011-68002-30191). In particular, crop growth and soil erosion were simulated across a spectrum of variables and time periods using the CropSyst crop growth model (Stockle et al., 2002) and the Water Erosion Prediction Project model (WEPP - Flanagan and Livingston, 1995), respectively. A preliminary range of historical scenarios was run using a high-resolution, 4 km gridded dataset of surface meteorological variables from 1979-2010 (Abatzoglou, 2012). In addition, Coupled Model Inter-comparison Project (CMIP5) global climate model (GCM) outputs were used as input to run crop growth and erosion future scenarios (Abatzoglou and Brown, 2011). To facilitate our integrated data analysis efforts, an agricultural sustainability web service architecture (THREDDS/Java/Python based) is under development to allow for programmatically uploading, sharing and processing variable input data, running model simulations, and downloading and visualizing output results. The results of this study will assist in better understanding agricultural sustainability and erosion relationships in the IPNW, as well as provide a tangible server-based tool for use by researchers and farmers, for both small-scale field examination and more regionalized scenarios.

  7. Spatial-Temporal Heterogeneity in Regional Watershed Phosphorus Cycles Driven by Changes in Human Activity over the Past Century

    NASA Astrophysics Data System (ADS)

    Hale, R. L.; Grimm, N. B.; Vorosmarty, C. J.

    2014-12-01

    An ongoing challenge for society is to harness the benefits of phosphorus (P) while minimizing negative effects on downstream ecosystems. To meet this challenge we must understand the controls on the delivery of anthropogenic P from landscapes to downstream ecosystems. We used a model that incorporates P inputs to watersheds, hydrology, and infrastructure (sewers, waste-water treatment plants, and reservoirs) to reconstruct historic P yields for the northeastern U.S. from 1930 to 2002. At the regional scale, increases in P inputs were paralleled by increased fractional retention, thus P loading to the coast did not increase significantly. We found that temporal variation in regional P yield was correlated with P inputs. Spatial patterns of watershed P yields were best predicted by inputs, but the correlation between inputs and yields in space weakened over time, due to infrastructure development. Although the magnitude of infrastructure effect was small, its role changed over time and was important in creating spatial and temporal heterogeneity in input-yield relationships. We then conducted a hierarchical cluster analysis to identify a typology of anthropogenic P cycling, using data on P inputs (fertilizer, livestock feed, and human food), infrastructure (dams, wastewater treatment plants, sewers), and hydrology (runoff coefficient). We identified 6 key types of watersheds that varied significantly in climate, infrastructure, and the types and amounts of P inputs. Annual watershed P yields and retention varied significantly across watershed types. Although land cover varied significantly across typologies, clusters based on land cover alone did not explain P budget patterns, suggesting that this variable is insufficient to understand patterns of P cycling across large spatial scales. Furthermore, clusters varied over time as patterns of climate, P use, and infrastructure changed. Our results demonstrate that the drivers of P cycles are spatially and temporally heterogeneous, yet they also suggest that a relatively simple typology of watersheds can be useful for understanding regional P cycles and may help inform P management approaches.

  8. Creating a non-linear total sediment load formula using polynomial best subset regression model

    NASA Astrophysics Data System (ADS)

    Okcu, Davut; Pektas, Ali Osman; Uyumaz, Ali

    2016-08-01

    The aim of this study is to derive a new total sediment load formula that is more accurate and has fewer application constraints than the well-known formulae of the literature. The five best-known stream power concept sediment formulae approved by ASCE are used for benchmarking on a wide range of datasets that includes both field and flume (lab) observations. The dimensionless parameters of these widely used formulae are used as inputs in a new regression approach, called Polynomial Best Subset Regression (PBSR) analysis. The aim of PBSR analysis is to fit and test all possible combinations of the input variables and select the best subset. All input variables, together with their second and third powers, are included in the regression to test possible relations between the explanatory variables and the dependent variable. In selecting the best subset, a multistep approach is used that depends on significance values as well as the multicollinearity degrees of the inputs. The new formula is compared to the others on a holdout dataset, and detailed performance investigations are conducted for the field and lab subsets within these holdout data. Several goodness-of-fit statistics are used, as they represent different perspectives on model accuracy. After the detailed comparisons, we identify the most accurate equation, which is also applicable to both flume and river data. On the field dataset in particular, the prediction performance of the proposed formula outperformed the benchmark formulations.
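
    A minimal sketch of the best-subset idea, with each candidate input expanded by its second and third powers and subsets scored on holdout data; the paper's significance and multicollinearity screening is omitted here, and the data and coefficients are synthetic:

        import numpy as np
        from itertools import combinations
        from sklearn.linear_model import LinearRegression

        rng = np.random.default_rng(7)
        X = rng.uniform(0.5, 2.0, size=(300, 3))      # dimensionless inputs
        y = 1.5 * X[:, 0] ** 2 - 0.8 * X[:, 1] + rng.normal(0, 0.05, 300)

        # Candidate terms: each input and its second and third powers
        terms = {f"x{j}^{p}": X[:, j] ** p for j in range(3) for p in (1, 2, 3)}
        names = list(terms)
        train, test = slice(0, 200), slice(200, 300)  # holdout split

        best = (-np.inf, None)
        for k in (1, 2, 3):                           # subset sizes to try
            for combo in combinations(names, k):
                Z = np.column_stack([terms[n] for n in combo])
                model = LinearRegression().fit(Z[train], y[train])
                score = model.score(Z[test], y[test]) # holdout R^2
                if score > best[0]:
                    best = (score, combo)
        print(best)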

  9. Prognostic Indexes for Brain Metastases: Which Is the Most Powerful?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Arruda Viani, Gustavo, E-mail: gusviani@gmail.com; Bernardes da Silva, Lucas Godoi; Stefano, Eduardo Jose

    Purpose: The purpose of the present study was to compare the prognostic indexes (PIs) of patients with brain metastases (BMs) treated with whole brain radiotherapy (WBRT) using an artificial neural network. This analysis is important because it evaluates the prognostic power of each PI to guide clinical decision-making and outcomes research. Methods and Materials: A retrospective prognostic study was conducted of 412 patients with BMs who underwent WBRT between April 1998 and March 2010. The eligibility criteria for patients included having undergone WBRT or WBRT plus neurosurgery. The data were analyzed using an artificial neural network. The input neural data consisted of all prognostic factors included in the 5 PIs (recursive partitioning analysis, graded prognostic assessment [GPA], basic score for BMs, Rotterdam score, and Germany score). The data set was randomly divided into 300 training and 112 testing examples for survival prediction. All 5 PIs were compared using our database of 412 patients with BMs. The sensitivity of the 5 indexes in predicting survival according to their input variables was determined statistically using receiver operating characteristic curves. The importance of each variable from each PI was subsequently evaluated. Results: The overall 1-, 2-, and 3-year survival rates were 22%, 10.2%, and 5.1%, respectively. All classes of PIs were significantly associated with survival (recursive partitioning analysis, P < .0001; GPA, P < .0001; basic score for BMs, P = .002; Rotterdam score, P = .001; and Germany score, P < .0001). Comparing the areas under the curves, the GPA was statistically the most sensitive in predicting survival (GPA, 86%; recursive partitioning analysis, 81%; basic score for BMs, 79%; Rotterdam score, 73%; and Germany score, 77%; P < .001). Among the variables included in each PI, performance status and the presence of extracranial metastases were the most important factors. Conclusion: A variety of prognostic models describe the survival of patients with BMs to a more or less satisfactory degree. Among the 5 PIs evaluated in the present study, the GPA was the most powerful in predicting survival. Additional studies should include emerging biologic prognostic factors to improve the sensitivity of these PIs.

  10. Water and nitrogen management effects on semiarid sorghum production and soil trace gas flux under future climate.

    PubMed

    Duval, Benjamin D; Ghimire, Rajan; Hartman, Melannie D; Marsalis, Mark A

    2018-01-01

    External inputs to agricultural systems can overcome latent soil and climate constraints on production, while contributing to greenhouse gas emissions from fertilizer and water management inefficiencies. Proper crop selection for a given region can lessen the need for irrigation and timing of N fertilizer application with crop N demand can potentially reduce N2O emissions and increase N use efficiency while reducing residual soil N and N leaching. However, increased variability in precipitation is an expectation of climate change and makes predicting biomass and gas flux responses to management more challenging. We used the DayCent model to test hypotheses about input intensity controls on sorghum (Sorghum bicolor (L.) Moench) productivity and greenhouse gas emissions in the southwestern United States under future climate. Sorghum had been previously parameterized for DayCent, but an inverse-modeling via parameter estimation method significantly improved model validation to field data. Aboveground production and N2O flux were more responsive to N additions than irrigation, but simulations with future climate produced lower values for sorghum than current climate. We found positive interactions between irrigation at increased N application for N2O and CO2 fluxes. Extremes in sorghum production under future climate were a function of biomass accumulation trajectories related to daily soil water and mineral N. Root C inputs correlated with soil organic C pools, but overall soil C declined at the decadal scale under current weather while modest gains were simulated under future weather. Scaling biomass and N2O fluxes by unit N and water input revealed that sorghum can be productive without irrigation, and the effect of irrigating crops is difficult to forecast when precipitation is variable within the growing season. These simulation results demonstrate the importance of understanding sorghum production and greenhouse gas emissions at daily scales when assessing annual and decadal-scale management decisions' effects on aspects of arid and semiarid agroecosystem biogeochemistry.

  11. Analysis of Multiple Precipitation Products and Preliminary Assessment of Their Impact on Global Land Data Assimilation System (GLDAS) Land Surface States

    NASA Technical Reports Server (NTRS)

    Gottschalck, Jon; Meng, Jesse; Rodell, Matt; Houser, Paul

    2005-01-01

    Land surface models (LSMs) are computer programs, similar to weather and climate prediction models, which simulate the stocks and fluxes of water (including soil moisture, snow, evaporation, and runoff) and energy (including the temperature of and sensible heat released from the soil) after they arrive on the land surface as precipitation and sunlight. It is not currently possible to measure all of the variables of interest everywhere on Earth with sufficient accuracy and space-time resolution. Hence LSMs have been developed to integrate the available observations with our understanding of the physical processes involved, using powerful computers, in order to map these stocks and fluxes as they change in time. The maps are used to improve weather forecasts, support water resources and agricultural applications, and study the Earth's water cycle and climate variability. NASA's Global Land Data Assimilation System (GLDAS) project facilitates testing of several different LSMs with a variety of input datasets (e.g., precipitation, plant type). Precipitation is arguably the most important input to LSMs. Many precipitation datasets have been produced using satellite and rain gauge observations and weather forecast models. In this study, seven different global precipitation datasets were evaluated over the United States, where dense rain gauge networks contribute to reliable precipitation maps. We then used the seven datasets as inputs to GLDAS simulations, so that we could diagnose their impacts on output stocks and fluxes of water. In terms of totals, the Climate Prediction Center (CPC) Merged Analysis of Precipitation (CMAP) had the closest agreement with the US rain gauge dataset for all seasons except winter. The CMAP precipitation was also the most closely correlated in time with the rain gauge data during spring, fall, and winter, while the satellite-based estimates performed best in summer. The GLDAS simulations revealed that modeled soil moisture is highly sensitive to precipitation, with differences in spring and summer as large as 45% depending on the choice of precipitation input.

  12. Comparison of work and time estimates by chiropractic physicians with those of medical and osteopathic providers.

    PubMed

    Hess, J A; Mootz, R D

    1999-06-01

    Resource-based relative value scales (RBRVS) have become a standard method for identifying costs and determining reimbursement for physician services. Development of RBRVS systems and methods are reviewed, and the RBRVS concept of physician "work" is defined. Results of work and time inputs from chiropractic physicians are compared with those reported by osteopathic and medical specialties. Last, implications for reimbursement of chiropractic fee services are discussed. Total work, intraservice work, and time inputs for clinical vignettes reported by chiropractic, osteopathic, and medical physicians are compared. Data for chiropractic work and time reports were drawn from a national random sample of chiropractors conducted as part of a 1997 workers' compensation chiropractic fee schedule development project. Medical and osteopathic inputs were drawn from RBRVS research conducted at Harvard University under a federal contract reported in 1990. Both data sets used the same or similar clinical vignettes and similar methods. Comparisons of work and time inputs are made for clinical vignettes to assess whether work reported by chiropractors is of similar magnitude and variability as work reported by other specialties. Chiropractic inputs for vignettes related to evaluation and management services are similar to those reported by medical specialists and osteopathic physicians. The range of variation between chiropractic work input and other specialties is of similar magnitude to that within other specialties. Chiropractors report greater work input for radiologic interpretation and lower work input for manipulation services. Chiropractors seem to perform similar total "work" for evaluation and management services as other specialties. No basis exists for excluding chiropractors from using evaluation and management codes for reimbursement purposes on grounds of dissimilar physician time or work estimates. Greater work input by chiropractors in radiology interpretation may be related to a greater importance placed on findings in care planning. Consistently higher reports for osteopathic work input on manipulation are likely attributable to differences in reference vignettes used in the respective populations. Research with a common reference vignette used for manipulation providers is recommended, as is development of a single generic approach to coding for manipulation services.

  13. Reliable and accurate point-based prediction of cumulative infiltration using soil readily available characteristics: A comparison between GMDH, ANN, and MLR

    NASA Astrophysics Data System (ADS)

    Rahmati, Mehdi

    2017-08-01

    Developing accurate and reliable pedo-transfer functions (PTFs) to predict soil non-readily available characteristics is one of the topics of greatest concern in soil science, and selecting appropriate predictors is a crucial factor in PTF development. The group method of data handling (GMDH), which finds an approximate relationship between a set of input and output variables, not only provides an explicit procedure to select the most essential PTF input variables, but also results in more accurate and reliable estimates than other commonly applied methodologies. The current research therefore applied GMDH, in comparison with multivariate linear regression (MLR) and artificial neural networks (ANN), to develop several PTFs to predict soil cumulative infiltration on a point basis at specific time intervals (0.5-45 min) using soil readily available characteristics (RACs). In this regard, soil infiltration curves as well as several soil RACs, including soil primary particles (clay (CC), silt (Si), and sand (Sa)), saturated hydraulic conductivity (Ks), bulk (Db) and particle (Dp) densities, organic carbon (OC), wet-aggregate stability (WAS), electrical conductivity (EC), and soil antecedent (θi) and field saturated (θfs) water contents, were measured at 134 different points in the Lighvan watershed, northwest of Iran. Applying the GMDH, MLR, and ANN methodologies, several PTFs were developed to predict cumulative infiltration using two sets of selected soil RACs, including and excluding Ks. According to the test data, the PTFs developed by the GMDH and MLR procedures using all soil RACs including Ks gave more accurate (with E values of 0.673-0.963) and reliable (with CV values lower than 11 percent) predictions of cumulative infiltration at the different time steps. In contrast, the ANN procedure had lower accuracy (with E values of 0.356-0.890) and reliability (with CV values up to 50 percent) than GMDH and MLR. The results also revealed that excluding Ks from the input variable list caused around a 30 percent decrease in PTF accuracy for all applied procedures. However, excluding Ks may result in more practical PTFs, especially in the case of the GMDH network, since the remaining input variables are less time-consuming to measure than Ks. In general, it is concluded that GMDH provides more accurate and reliable estimates of cumulative infiltration (a non-readily available soil characteristic) with a minimum set of input variables (2-4 input variables) and can be a promising strategy for modeling soil infiltration, combining the advantages of the ANN and MLR methodologies.

  14. An Economic Evaluation of Food Safety Education Interventions: Estimates and Critical Data Gaps.

    PubMed

    Zan, Hua; Lambea, Maria; McDowell, Joyce; Scharff, Robert L

    2017-08-01

    The economic evaluation of food safety interventions is an important tool that practitioners and policy makers use to assess the efficacy of their efforts. These evaluations are built on models that depend on accurate estimation of numerous input variables. In many cases, however, no data are available to determine input values, and expert opinion is used to generate estimates. This study uses a benefit-cost analysis of the food safety component of the adult Expanded Food and Nutrition Education Program (EFNEP) in Ohio as a vehicle for demonstrating how results based on variable values that are not objectively determined may be sensitive to alternative assumptions. In particular, the focus here is on how reported behavioral change is translated into economic benefits. Current gaps in the literature make it impossible to know with certainty how many people are protected by the education (what are the spillover effects?), the length of time the education remains effective, and the level of risk reduction from change in behavior. Based on EFNEP survey data, food safety education led 37.4% of participants to improve their food safety behaviors. Under reasonable default assumptions, benefits from this improvement significantly outweigh costs, yielding a benefit-cost ratio of between 6.2 and 10.0. Incorporating a sensitivity analysis using alternative estimates yields a greater range of estimates (0.2 to 56.3), which highlights the importance of future research aimed at filling these gaps. Nevertheless, most reasonable assumptions lead to estimates of benefits that justify their costs.
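
    The sensitivity logic can be made concrete with a small sketch: sweep the uncertain assumptions (spillover, persistence, and avoided cost per protected person) over plausible ranges and report the resulting spread of benefit-cost ratios. All numeric values except the 37.4% improvement rate are hypothetical placeholders, not the study's inputs.

```python
# Hedged sketch of a benefit-cost sensitivity sweep; values are illustrative.
import itertools

participants = 10_000
improve_rate = 0.374            # share improving food safety behaviors (from the abstract)
cost_per_participant = 50.0     # hypothetical program cost

# Uncertain assumptions spanned by the sensitivity analysis:
spillover = [1.0, 2.5, 4.0]             # people protected per improved participant
years_effective = [1, 3, 5]             # persistence of the education effect
avoided_cost = [10.0, 40.0, 90.0]       # annual illness cost avoided per protected person

costs = participants * cost_per_participant
ratios = []
for s, yrs, val in itertools.product(spillover, years_effective, avoided_cost):
    benefits = participants * improve_rate * s * yrs * val
    ratios.append(benefits / costs)

print(f"benefit-cost ratio range: {min(ratios):.1f} to {max(ratios):.1f}")
```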

  15. Simplex-based optimization of numerical and categorical inputs in early bioprocess development: Case studies in HT chromatography.

    PubMed

    Konstantinidis, Spyridon; Titchener-Hooker, Nigel; Velayudhan, Ajoy

    2017-08-01

    Bioprocess development studies often involve the investigation of numerical and categorical inputs via the adoption of Design of Experiments (DoE) techniques. An attractive alternative is the deployment of a grid-compatible Simplex variant which has been shown to yield optima rapidly and consistently. In this work, the method is combined with dummy variables and deployed in three case studies wherein the search spaces comprise both categorical and numerical inputs, a situation intractable by traditional Simplex methods. The first study employs in silico data and lays out the dummy variable methodology. The latter two employ experimental data from chromatography-based studies performed with the filter-plate and miniature column High Throughput (HT) techniques. The solute of interest in the former case study was a monoclonal antibody, whereas the latter dealt with the separation of a binary system of model proteins. The implemented approach prevented the stranding of the Simplex method at local optima, due to the arbitrary handling of the categorical inputs, and allowed for the concurrent optimization of numerical and categorical, multilevel and/or dichotomous, inputs. The deployment of the Simplex method, combined with dummy variables, was therefore entirely successful in identifying and characterizing global optima in all three case studies. The Simplex-based method was further shown to be of equivalent efficiency to a DoE-based approach, represented here by D-Optimal designs. The latter approach failed, however, to both capture trends and identify optima, and led to poor operating conditions. It is suggested that the Simplex variant is well suited to development activities involving numerical and categorical inputs in early bioprocess development. © 2017 The Authors. Biotechnology Journal published by WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
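
    A minimal sketch of the dummy-variable idea, assuming a hypothetical two-numerical-input, one-categorical-input objective: the categorical level is carried as an extra continuous coordinate and rounded to a valid level inside the objective, so a standard Nelder-Mead Simplex can traverse the mixed space. The paper's grid-compatible variant handles this more carefully; this is only an illustration.

```python
# Illustrative mixed numerical/categorical Simplex search via dummy encoding.
# Objective, resin levels, and optima are invented for the example.
import numpy as np
from scipy.optimize import minimize

RESINS = ["A", "B", "C"]  # hypothetical categorical input (e.g., chromatography resin)

def decode(v):
    """Map continuous search coordinates to (pH, salt, resin index)."""
    pH, salt = v[0], v[1]
    resin = int(np.clip(np.round(v[2]), 0, len(RESINS) - 1))  # dummy coord -> level
    return pH, salt, resin

def neg_yield(v):
    """Hypothetical negative process yield (to be minimized)."""
    pH, salt, resin = decode(v)
    optima = {0: (7.0, 0.15), 1: (6.5, 0.25), 2: (7.5, 0.10)}  # per-resin optimum
    p0, s0 = optima[resin]
    bonus = 0.1 * resin  # resin C is intrinsically best in this toy landscape
    return -(1.0 + bonus - (pH - p0) ** 2 - 20 * (salt - s0) ** 2)

res = minimize(neg_yield, x0=[7.2, 0.2, 0.0], method="Nelder-Mead")
pH, salt, resin = decode(res.x)
print(f"best found: pH={pH:.2f}, salt={salt:.3f} M, resin={RESINS[resin]}")
```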

  16. Understanding surface-water availability in the Central Valley as a means to projecting future groundwater storage with climate variability

    NASA Astrophysics Data System (ADS)

    Goodrich, J. P.; Cayan, D. R.

    2017-12-01

    California's Central Valley (CV) relies heavily on diverted surface water and groundwater pumping to supply irrigated agriculture. However, understanding the spatiotemporal character of water availability in the CV is difficult because of the number of individual farms and local, state, and federal agencies involved in using and managing water. Here we use the Central Valley Hydrologic Model (CVHM), developed by the USGS, to understand the relationships between climatic variability, surface water inputs, and resulting groundwater use over the historical period 1970-2013. We analyzed monthly surface water diversion data from >500 CV locations. Principal component analyses were applied to drivers constructed from meteorological data, surface reservoir storage, ET, land use cover, and upstream inflows, to feed multiple regressions and identify the factors most important in predicting surface water diversions. Two-thirds of the diversion locations (~80% of total diverted water) can be predicted to within 15%. Along with monthly inputs, representations of cumulative precipitation over the previous 3 to 36 months can explain an additional 10% of variance, depending on location, compared to results that excluded this information. Diversions in the southern CV are highly sensitive to inter-annual variability in precipitation (R2 = 0.8), whereby more surface water is used during wet years. Until recently, this was not the case in the northern and mid-CV, where diversions were relatively constant annually, suggesting relative insensitivity to drought. This sensitivity has important implications for drought response in southern regions (e.g., the Tulare Basin), where extended dry conditions can severely limit surface water supplies and lead to excess groundwater pumping, storage loss, and subsidence. In addition to deepening our understanding of spatiotemporal variability in diversions, our ability to predict these water balance components allows us to update CVHM predictions before surface water data are compiled. We can then develop groundwater pumping and storage predictions in real time, and make them available to water managers. In addition, we are working toward future projections by coupling the regional CVHM to downscaled GCM output to assess future scenarios of water availability in this critical region.
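
    The prediction pipeline can be sketched with synthetic stand-ins for the drivers, mirroring the workflow the abstract describes: standardize the candidate drivers, compress them with principal components, and regress diversions on the leading components.

```python
# Sketch of PCA-plus-regression diversion prediction on synthetic data.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
n = 528  # e.g., monthly records 1970-2013
drivers = rng.normal(size=(n, 8))  # precip, reservoir storage, ET, inflows, lags, ...
diversion = drivers[:, :3].sum(axis=1) + 0.5 * drivers[:, 3] + rng.normal(scale=0.3, size=n)

X_tr, X_te, y_tr, y_te = train_test_split(drivers, diversion, random_state=0)
model = make_pipeline(StandardScaler(), PCA(n_components=4), LinearRegression())
model.fit(X_tr, y_tr)
print(f"held-out R^2 = {model.score(X_te, y_te):.2f}")
```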

  17. Assessing the predictive capability of randomized tree-based ensembles in streamflow modelling

    NASA Astrophysics Data System (ADS)

    Galelli, S.; Castelletti, A.

    2013-02-01

    Combining randomization methods with ensemble prediction is emerging as an effective option to balance accuracy and computational efficiency in data-driven modeling. In this paper we investigate the prediction capability of extremely randomized trees (Extra-Trees), in terms of accuracy, explanation ability and computational efficiency, in a streamflow modeling exercise. Extra-Trees are a totally randomized tree-based ensemble method that (i) alleviates the poor generalization property and tendency to overfitting of traditional standalone decision trees (e.g. CART); (ii) is computationally very efficient; and (iii) allows one to infer the relative importance of the input variables, which might help in the ex-post physical interpretation of the model. The Extra-Trees potential is analyzed on two real-world case studies (Marina catchment (Singapore) and Canning River (Western Australia)) representing two different morphoclimatic contexts, in comparison with other tree-based methods (CART and M5) and parametric data-driven approaches (ANNs and multiple linear regression). Results show that Extra-Trees perform comparably to the best of the benchmarks (i.e. M5) in both watersheds, while outperforming the other approaches in terms of computational requirements when adopted on large datasets. In addition, the ranking of the input variables provided by the method can be given a physically meaningful interpretation.
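
    For readers who want to see the variable-ranking step, a minimal Extra-Trees sketch on synthetic data follows; predictor names and data are placeholders, not the case-study inputs.

```python
# Minimal Extra-Trees sketch: fit the ensemble and read off the variable ranking.
import numpy as np
from sklearn.ensemble import ExtraTreesRegressor

rng = np.random.default_rng(1)
X = rng.normal(size=(2000, 5))            # e.g., lagged rainfall/flow predictors
y = 3 * X[:, 0] + np.sin(X[:, 1]) + 0.1 * rng.normal(size=2000)

model = ExtraTreesRegressor(n_estimators=200, random_state=0).fit(X, y)
for rank, i in enumerate(np.argsort(model.feature_importances_)[::-1], 1):
    print(f"{rank}. input {i}: importance {model.feature_importances_[i]:.3f}")
```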

  18. Assessing the predictive capability of randomized tree-based ensembles in streamflow modelling

    NASA Astrophysics Data System (ADS)

    Galelli, S.; Castelletti, A.

    2013-07-01

    Combining randomization methods with ensemble prediction is emerging as an effective option to balance accuracy and computational efficiency in data-driven modelling. In this paper, we investigate the prediction capability of extremely randomized trees (Extra-Trees), in terms of accuracy, explanation ability and computational efficiency, in a streamflow modelling exercise. Extra-Trees are a totally randomized tree-based ensemble method that (i) alleviates the poor generalisation property and tendency to overfitting of traditional standalone decision trees (e.g. CART); (ii) is computationally efficient; and (iii) allows one to infer the relative importance of the input variables, which might help in the ex-post physical interpretation of the model. The Extra-Trees potential is analysed on two real-world case studies - Marina catchment (Singapore) and Canning River (Western Australia) - representing two different morphoclimatic contexts. The evaluation is performed against other tree-based methods (CART and M5) and parametric data-driven approaches (ANNs and multiple linear regression). Results show that Extra-Trees perform comparably to the best of the benchmarks (i.e. M5) in both watersheds, while outperforming the other approaches in terms of computational requirements when adopted on large datasets. In addition, the ranking of the input variables provided by the method can be given a physically meaningful interpretation.

  19. Radio-loud AGN Variability from Propagating Relativistic Jets

    NASA Astrophysics Data System (ADS)

    Li, Yutong; Schuh, Terance; Wiita, Paul J.

    2018-06-01

    The great majority of variable emission in radio-loud AGNs is understood to arise from the relativistic flows of plasma along two oppositely directed jets. We study this process using the Athena hydrodynamics code to simulate propagating three-dimensional relativistic jets for a wide range of input jet velocities and jet-to-ambient matter density ratios. We then focus on those simulations that remain essentially stable for extended distances (60-120 times the jet radius). Adopting results for the densities, pressures and velocities from these propagating simulations we estimate emissivities from each cell. The observed emissivity from each cell is strongly dependent upon its variable Doppler boosting factor, which depends upon the changing bulk velocities in those zones with respect to our viewing angle to the jet. We then sum the approximations to the fluxes from a large number of zones upstream of the primary reconfinement shock. The light curves so produced are similar to those of blazars, although turbulence on sub-grid scales is likely to be important for the variability on the shortest timescales.
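
    The Doppler-boosting step lends itself to a back-of-envelope sketch: for each zone, the observed flux scales as δ^(3+α) with δ = 1/(Γ(1 − β cos θ)), the standard boosting law for a discrete moving source. Cell speeds, emissivities, the spectral index α, and the viewing angle below are all invented for illustration; they are not Athena output.

```python
# Back-of-envelope Doppler boosting of per-zone emissivities (illustrative values).
import numpy as np

def doppler_factor(beta, theta):
    gamma = 1.0 / np.sqrt(1.0 - beta**2)
    return 1.0 / (gamma * (1.0 - beta * np.cos(theta)))

alpha = 0.7                      # assumed spectral index
theta = np.deg2rad(5.0)          # assumed viewing angle to the jet axis
rng = np.random.default_rng(3)
beta = rng.uniform(0.9, 0.99, size=1000)      # bulk speeds of upstream zones
emissivity = rng.lognormal(size=1000)         # comoving emissivities per zone

# Observed flux: each zone boosted by delta**(3 + alpha), then summed.
flux = np.sum(emissivity * doppler_factor(beta, theta) ** (3 + alpha))
print(f"summed boosted flux (arbitrary units): {flux:.1f}")
```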

  20. Sympathovagal imbalance in hyperthyroidism.

    PubMed

    Burggraaf, J; Tulen, J H; Lalezari, S; Schoemaker, R C; De Meyer, P H; Meinders, A E; Cohen, A F; Pijl, H

    2001-07-01

    We assessed sympathovagal balance in thyrotoxicosis. Fourteen patients with Graves' hyperthyroidism were studied before and after 7 days of treatment with propranolol (40 mg 3 times a day) and in the euthyroid state. Data were compared with those obtained in a group of age-, sex-, and weight-matched controls. Autonomic inputs to the heart were assessed by power spectral analysis of heart rate variability. Systemic exposure to sympathetic neurohormones was estimated on the basis of 24-h urinary catecholamine excretion. The spectral power in the high-frequency domain was considerably reduced in hyperthyroid patients, indicating diminished vagal inputs to the heart. Increased heart rate and mid-frequency/high-frequency power ratio in the presence of reduced total spectral power and increased urinary catecholamine excretion strongly suggest enhanced sympathetic inputs in thyrotoxicosis. All abnormal features of autonomic balance were completely restored to normal in the euthyroid state. beta-Adrenoceptor antagonism reduced heart rate in hyperthyroid patients but did not significantly affect heart rate variability or catecholamine excretion. This is in keeping with the concept of a joint disruption of sympathetic and vagal inputs to the heart underlying changes in heart rate variability. Thus thyrotoxicosis is characterized by profound sympathovagal imbalance, brought about by increased sympathetic activity in the presence of diminished vagal tone.
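
    A sketch of the spectral step, under common HRV band conventions (assumed here, not taken from the paper): Welch power spectra of an evenly resampled RR-interval series, integrated over low- and high-frequency bands.

```python
# HRV spectral sketch: Welch PSD of a simulated, evenly resampled RR series.
import numpy as np
from scipy.signal import welch

fs = 4.0                                   # resampling frequency, Hz
t = np.arange(0, 300, 1 / fs)              # 5 minutes of data
rr = 0.8 + 0.02 * np.sin(2 * np.pi * 0.1 * t) + 0.01 * np.sin(2 * np.pi * 0.25 * t)
rr += 0.005 * np.random.default_rng(0).normal(size=t.size)

f, pxx = welch(rr - rr.mean(), fs=fs, nperseg=512)
lf_band = (f >= 0.04) & (f < 0.15)         # conventional low-frequency band
hf_band = (f >= 0.15) & (f < 0.40)         # conventional high-frequency band
lf, hf = np.trapz(pxx[lf_band], f[lf_band]), np.trapz(pxx[hf_band], f[hf_band])
print(f"LF power {lf:.2e}, HF power {hf:.2e}, LF/HF ratio {lf / hf:.2f}")
```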

  1. A geostatistics-informed hierarchical sensitivity analysis method for complex groundwater flow and transport modeling

    NASA Astrophysics Data System (ADS)

    Dai, Heng; Chen, Xingyuan; Ye, Ming; Song, Xuehang; Zachara, John M.

    2017-05-01

    Sensitivity analysis is an important tool for development and improvement of mathematical models, especially for complex systems with a high dimension of spatially correlated parameters. Variance-based global sensitivity analysis has gained popularity because it can quantify the relative contribution of uncertainty from different sources. However, its computational cost increases dramatically with the complexity of the considered model and the dimension of model parameters. In this study, we developed a new sensitivity analysis method that integrates the concept of variance-based method with a hierarchical uncertainty quantification framework. Different uncertain inputs are grouped and organized into a multilayer framework based on their characteristics and dependency relationships to reduce the dimensionality of the sensitivity analysis. A set of new sensitivity indices are defined for the grouped inputs using the variance decomposition method. Using this methodology, we identified the most important uncertainty source for a dynamic groundwater flow and solute transport model at the Department of Energy (DOE) Hanford site. The results indicate that boundary conditions and permeability field contribute the most uncertainty to the simulated head field and tracer plume, respectively. The relative contribution from each source varied spatially and temporally. By using a geostatistical approach to reduce the number of realizations needed for the sensitivity analysis, the computational cost of implementing the developed method was reduced to a practically manageable level. The developed sensitivity analysis method is generally applicable to a wide range of hydrologic and environmental problems that deal with high-dimensional spatially distributed input variables.
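
    The grouped variance-based index at the heart of the method can be illustrated with a toy model: the first-order index of a group X_G is S_G = Var(E[Y | X_G]) / Var(Y), estimated below by brute-force Monte Carlo with the group fixed in an inner loop. The three-input model is a stand-in, not the Hanford flow and transport model.

```python
# Toy Monte Carlo estimate of a grouped first-order sensitivity index.
import numpy as np

rng = np.random.default_rng(7)

def model(boundary, perm, recharge):
    return 2.0 * boundary + perm**2 + 0.3 * recharge

def group_index(n_outer=500, n_inner=500):
    cond_means = []
    for _ in range(n_outer):
        b = rng.normal()                    # fix the "boundary conditions" group
        inner = model(b, rng.normal(size=n_inner), rng.normal(size=n_inner))
        cond_means.append(inner.mean())     # E[Y | X_G = b]
    total = model(rng.normal(size=n_outer * n_inner),
                  rng.normal(size=n_outer * n_inner),
                  rng.normal(size=n_outer * n_inner))
    return np.var(cond_means) / np.var(total)

print(f"first-order index of the boundary-condition group: {group_index():.2f}")
```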

  2. A Geostatistics-Informed Hierarchical Sensitivity Analysis Method for Complex Groundwater Flow and Transport Modeling

    NASA Astrophysics Data System (ADS)

    Dai, H.; Chen, X.; Ye, M.; Song, X.; Zachara, J. M.

    2017-12-01

    Sensitivity analysis is an important tool for development and improvement of mathematical models, especially for complex systems with a high dimension of spatially correlated parameters. Variance-based global sensitivity analysis has gained popularity because it can quantify the relative contribution of uncertainty from different sources. However, its computational cost increases dramatically with the complexity of the considered model and the dimension of model parameters. In this study we developed a new sensitivity analysis method that integrates the concept of variance-based method with a hierarchical uncertainty quantification framework. Different uncertain inputs are grouped and organized into a multi-layer framework based on their characteristics and dependency relationships to reduce the dimensionality of the sensitivity analysis. A set of new sensitivity indices are defined for the grouped inputs using the variance decomposition method. Using this methodology, we identified the most important uncertainty source for a dynamic groundwater flow and solute transport model at the Department of Energy (DOE) Hanford site. The results indicate that boundary conditions and permeability field contribute the most uncertainty to the simulated head field and tracer plume, respectively. The relative contribution from each source varied spatially and temporally. By using a geostatistical approach to reduce the number of realizations needed for the sensitivity analysis, the computational cost of implementing the developed method was reduced to a practically manageable level. The developed sensitivity analysis method is generally applicable to a wide range of hydrologic and environmental problems that deal with high-dimensional spatially-distributed input variables.

  3. Ultrastructure of spines and associated terminals on brainstem neurons controlling auditory input

    PubMed Central

    Brown, M. Christian; Lee, Daniel J.; Benson, Thane E.

    2013-01-01

    Spines are unique cellular appendages that isolate synaptic input to neurons and play a role in synaptic plasticity. Using the electron microscope, we studied spines and their associated synaptic terminals on three groups of brainstem neurons: tensor tympani motoneurons, stapedius motoneurons, and medial olivocochlear neurons, all of which exert reflexive control of processes in the auditory periphery. These spines are generally simple in shape; they are infrequent and found on the somata as well as the dendrites. Spines do not differ in volume among the three groups of neurons. In all cases, the spines are associated with a synaptic terminal that engulfs the spine rather than abuts its head. The positions of the synapses are variable, and some are found at a distance from the spine, suggesting that the isolation of synaptic input is of diminished importance for these spines. Each group of neurons receives three common types of synaptic terminals. The type of terminal associated with spines of the motoneurons contains pleomorphic vesicles, whereas the type associated with spines of olivocochlear neurons contains large round vesicles. Thus, spine-associated terminals in the motoneurons appear to be associated with inhibitory processes but in olivocochlear neurons they are associated with excitatory processes. PMID:23602963

  4. Effects of wastewater treatment plant effluent inputs on planktonic metabolic rates and microbial community composition in the Baltic Sea

    NASA Astrophysics Data System (ADS)

    Vaquer-Sunyer, Raquel; Reader, Heather E.; Muthusamy, Saraladevi; Lindh, Markus V.; Pinhassi, Jarone; Conley, Daniel J.; Kritzberg, Emma S.

    2016-08-01

    The Baltic Sea is the world's largest area suffering from eutrophication-driven hypoxia. Low oxygen levels are threatening its biodiversity and ecosystem functioning. The main causes of eutrophication-driven hypoxia are high nutrient loadings and global warming. Wastewater treatment plants (WWTPs) contribute to eutrophication as they are important sources of nitrogen to coastal areas. Here, we evaluated the effects of wastewater treatment plant effluent inputs on Baltic Sea planktonic communities in four experiments. We tested for effects of effluent inputs on chlorophyll a content, bacterial community composition, and metabolic rates: gross primary production (GPP), net community production (NCP), community respiration (CR) and bacterial production (BP). Nitrogen-rich dissolved organic matter (DOM) inputs from effluents increased bacterial production and decreased primary production and community respiration. Nutrient amendments and seasonally variable environmental conditions led to lower alpha-diversity and shifts in bacterial community composition (e.g. increased abundance of a few cyanobacterial populations in the summer experiment), concomitant with changes in metabolic rates. An increase in BP and decrease in CR could be caused by high lability of the DOM, which can support secondary bacterial production without an increase in respiration. Increases in bacterial production and simultaneous decreases in primary production lead to more carbon being consumed in the microbial loop, and may shift the ecosystem towards heterotrophy.

  5. Nitrate in groundwater of the United States, 1991-2003

    USGS Publications Warehouse

    Burow, Karen R.; Nolan, Bernard T.; Rupert, Michael G.; Dubrovsky, Neil M.

    2010-01-01

    An assessment of nitrate concentrations in groundwater in the United States indicates that concentrations are highest in shallow, oxic groundwater beneath areas with high N inputs. During 1991-2003, 5101 wells were sampled in 51 study areas throughout the U.S. as part of the U.S. Geological Survey National Water-Quality Assessment (NAWQA) program. The well networks represent the resource currently in use, sampled by domestic wells in major aquifers (major aquifer studies), and recently recharged groundwater beneath dominant land-surface activities (land-use studies). Nitrate concentrations were highest in shallow groundwater beneath agricultural land use in areas with well-drained soils and oxic geochemical conditions. Nitrate concentrations were lowest in deep groundwater where groundwater is reduced, or where groundwater is older and hence concentrations reflect historically low N application rates. Classification and regression tree analysis was used to identify the relative importance of N inputs, biogeochemical processes, and physical aquifer properties in explaining nitrate concentrations in groundwater. Factors ranked by reduction in sum of squares indicate that dissolved iron concentrations explained most of the variation in groundwater nitrate concentration, followed by manganese, calcium, farm N fertilizer inputs, percent well-drained soils, and dissolved oxygen. Overall, nitrate concentrations in groundwater are most significantly affected by redox conditions, followed by nonpoint-source N inputs. Other water-quality indicators and physical variables had a secondary influence on nitrate concentrations.
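
    The tree-based ranking can be reproduced in miniature: impurity-based importances of a regression tree correspond to reduction in sum of squares, the criterion named above. The data and variable effects below are synthetic stand-ins, not the NAWQA data.

```python
# Regression-tree variable ranking on synthetic water-quality stand-ins.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(5)
names = ["Fe", "Mn", "Ca", "N_fertilizer", "well_drained_soils", "DO"]
X = rng.normal(size=(5101, len(names)))
nitrate = -2 * X[:, 0] - X[:, 1] + 0.8 * X[:, 3] + 0.3 * rng.normal(size=5101)

tree = DecisionTreeRegressor(max_depth=5, random_state=0).fit(X, nitrate)
for name, imp in sorted(zip(names, tree.feature_importances_), key=lambda p: -p[1]):
    print(f"{name}: {imp:.3f}")
```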

  6. Language learning, socioeconomic status, and child-directed speech.

    PubMed

    Schwab, Jessica F; Lew-Williams, Casey

    2016-07-01

    Young children's language experiences and language outcomes are highly variable. Research in recent decades has focused on understanding the extent to which family socioeconomic status (SES) relates to parents' language input to their children and, subsequently, children's language learning. Here, we first review research demonstrating differences in the quantity and quality of language that children hear across low-, mid-, and high-SES groups, but also, and perhaps more importantly, research showing that differences in input and learning also exist within SES groups. Second, in order to better understand the defining features of 'high-quality' input, we highlight findings from laboratory studies examining specific characteristics of the sounds, words, sentences, and social contexts of child-directed speech (CDS) that influence children's learning. Finally, after narrowing in on these particular features of CDS, we broaden our discussion by considering family and community factors that may constrain parents' ability to participate in high-quality interactions with their young children. A unification of research on SES and CDS will facilitate a more complete understanding of the specific means by which input shapes learning, as well as generate ideas for crafting policies and programs designed to promote children's language outcomes. WIREs Cogn Sci 2016, 7:264-275. doi: 10.1002/wcs.1393 For further resources related to this article, please visit the WIREs website. © 2016 Wiley Periodicals, Inc.

  7. Estimating pesticide runoff in small streams.

    PubMed

    Schriever, Carola A; von der Ohe, Peter C; Liess, Matthias

    2007-08-01

    Surface runoff is one of the most important pathways for pesticides to enter surface waters. Mathematical models are employed to characterize its spatio-temporal variability within landscapes, but they must be simple owing to the limited availability and low resolution of data at this scale. This study aimed to validate a simplified spatially-explicit model that is developed for the regional scale to calculate the runoff potential (RP). The RP is a generic indicator of the magnitude of pesticide inputs into streams via runoff. The underlying runoff model considers key environmental factors affecting runoff (precipitation, topography, land use, and soil characteristics), but predicts losses of a generic substance instead of any one pesticide. We predicted and evaluated RP for 20 small streams. RP input data were extracted from governmental databases. Pesticide measurements from a triennial study were used for validation. Measured pesticide concentrations were standardized by the applied mass per catchment and the water solubility of the relevant compounds. The maximum standardized concentration per site and year (runoff loss, R_Loss) provided a generalized measure of observed pesticide inputs into the streams. Average RP explained 75% (p < 0.001) of the variance in R_Loss. Our results imply that the generic indicator can give an adequate estimate of runoff inputs into small streams, wherever data of similar resolution are available. Therefore, we suggest RP for a first quick and cost-effective location of potential runoff hot spots at the landscape level.

  8. Assessment of input uncertainty by seasonally categorized latent variables using SWAT

    USDA-ARS?s Scientific Manuscript database

    Watershed processes have been explored with sophisticated simulation models for the past few decades. It has been stated that uncertainty attributed to alternative sources such as model parameters, forcing inputs, and measured data should be incorporated during the simulation process. Among varyin...

  9. Speaker Invariance for Phonetic Information: an fMRI Investigation

    PubMed Central

    Salvata, Caden; Blumstein, Sheila E.; Myers, Emily B.

    2012-01-01

    The current study explored how listeners map the variable acoustic input onto a common sound structure representation while being able to retain phonetic detail to distinguish among the identity of talkers. An adaptation paradigm was utilized to examine areas which showed an equal neural response (equal release from adaptation) to phonetic change when spoken by the same speaker and when spoken by two different speakers, and insensitivity (failure to show release from adaptation) when the same phonetic input was spoken by a different speaker. Neural areas which showed speaker invariance were located in the anterior portion of the middle superior temporal gyrus bilaterally. These findings provide support for the view that speaker normalization processes allow for the translation of a variable speech input to a common abstract sound structure. That this process appears to occur early in the processing stream, recruiting temporal structures, suggests that this mapping takes place prelexically, before sound structure input is mapped on to lexical representations. PMID:23264714

  10. The input and output management of solid waste using DEA models: A case study at Jengka, Pahang

    NASA Astrophysics Data System (ADS)

    Mohamed, Siti Rosiah; Ghazali, Nur Fadzrina Mohd; Mohd, Ainun Hafizah

    2017-08-01

    Data Envelopment Analysis (DEA), as a tool for obtaining performance indices, has been used extensively across many organizational sectors. Improving the efficiency of Decision Making Units (DMUs) can be impractical because some inputs and outputs are uncontrollable, and in certain situations they produce weak efficiency scores that often reflect the impact of the operating environment. Based on data from Alam Flora Sdn. Bhd Jengka, this study determines the efficiency of solid waste management (SWM) in the town of Jengka, Pahang, using the input-oriented (CCR-I) and output-oriented (CCR-O) DEA models and the duality formulation with average input and output vectors. Three input variables (collection length in meters, collection frequency per week in hours, and number of garbage trucks) and two output variables (collection frequency and total solid waste collected in kilograms) are analyzed. In conclusion, only 3 of the 23 roads are efficient, achieving an efficiency score of 1, while the other 20 roads are managed inefficiently.
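
    For concreteness, a sketch of the input-oriented CCR envelopment program solved once per DMU (road): minimize θ subject to Σ_j λ_j x_ij ≤ θ x_ik for each input and Σ_j λ_j y_rj ≥ y_rk for each output, with λ ≥ 0. The four-road data matrix below is invented; the study's actual inputs and outputs are listed above.

```python
# Input-oriented CCR DEA solved as a linear program per DMU.
import numpy as np
from scipy.optimize import linprog

X = np.array([[500, 2.0, 1], [800, 3.0, 2], [400, 1.5, 1], [900, 4.0, 2]])  # inputs
Y = np.array([[3, 1200], [4, 1500], [3, 1100], [3, 1300]])                  # outputs
n, m, s = X.shape[0], X.shape[1], Y.shape[1]

for k in range(n):
    c = np.r_[1.0, np.zeros(n)]                      # decision vars: [theta, lambdas]
    # inputs:  sum_j lam_j x_ij - theta * x_ik <= 0
    A_in = np.hstack([-X[k].reshape(-1, 1), X.T])
    # outputs: -sum_j lam_j y_rj <= -y_rk   (i.e., sum_j lam_j y_rj >= y_rk)
    A_out = np.hstack([np.zeros((s, 1)), -Y.T])
    res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.r_[np.zeros(m), -Y[k]],
                  bounds=[(None, None)] + [(0, None)] * n)
    print(f"DMU {k}: efficiency = {res.x[0]:.3f}")
```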

  11. A mathematical model for Vertical Attitude Takeoff and Landing (VATOL) aircraft simulation. Volume 2: Model equations and base aircraft data

    NASA Technical Reports Server (NTRS)

    Fortenbaugh, R. L.

    1980-01-01

    Equations incorporated in a VATOL six degree of freedom off-line digital simulation program and data for the Vought SF-121 VATOL aircraft concept which served as the baseline for the development of this program are presented. The equations and data are intended to facilitate the development of a piloted VATOL simulation. The equation presentation format is to state the equations which define a particular model segment. Listings of constants required to quantify the model segment, input variables required to exercise the model segment, and output variables required by other model segments are included. In several instances a series of input or output variables are followed by a section number in parentheses which identifies the model segment of origination or termination of those variables.

  12. Mathematical models of the simplest fuzzy PI/PD controllers with skewed input and output fuzzy sets.

    PubMed

    Mohan, B M; Sinha, Arpita

    2008-07-01

    This paper unveils mathematical models for fuzzy PI/PD controllers which employ two skewed fuzzy sets for each of the two input variables and three skewed fuzzy sets for the output variable. The basic constituents of these models are Gamma-type and L-type membership functions for each input, trapezoidal/triangular membership functions for the output, intersection/algebraic product triangular norm, maximum/drastic sum triangular conorm, Mamdani minimum/Larsen product/drastic product inference method, and the center of sums defuzzification method. The existing simplest fuzzy PI/PD controller structures derived via symmetrical fuzzy sets become special cases of the mathematical models revealed in this paper. Finally, a numerical example along with its simulation results is included to demonstrate the effectiveness of the simplest fuzzy PI controllers.
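
    One inference step of such a controller can be sketched numerically: Gamma- and L-type input sets, minimum t-norm firing strengths, and center-of-sums defuzzification over triangular output sets. The set parameters and rule base below are illustrative assumptions, not the paper's closed-form models.

```python
# One inference step of a simple two-input fuzzy PI controller (illustrative).
import numpy as np

def l_type(x, a, b):      # membership 1 below a, falling linearly to 0 at b
    return float(np.clip((b - x) / (b - a), 0.0, 1.0))

def gamma_type(x, a, b):  # membership 0 below a, rising linearly to 1 at b
    return float(np.clip((x - a) / (b - a), 0.0, 1.0))

def clipped_triangle_area(width, h):
    """Area of a unit-height triangle of given base width clipped at height h."""
    return width * h * (1.0 - h / 2.0)

def fuzzy_pi_increment(e, de):
    # Fuzzification: two skewed sets per input.
    neg_e, pos_e = l_type(e, -1.0, 0.5), gamma_type(e, -0.5, 1.0)
    neg_de, pos_de = l_type(de, -1.0, 0.5), gamma_type(de, -0.5, 1.0)
    # Rule base: (firing strength via minimum t-norm, output center, output width).
    rules = [(min(neg_e, neg_de), -1.0, 1.0),   # output "negative"
             (min(neg_e, pos_de),  0.0, 1.0),   # output "zero"
             (min(pos_e, neg_de),  0.0, 1.0),
             (min(pos_e, pos_de),  1.0, 1.0)]   # output "positive"
    # Center of sums: area-weighted average of clipped-set centroids.
    num = sum(clipped_triangle_area(w, h) * c for h, c, w in rules)
    den = sum(clipped_triangle_area(w, h) for h, c, w in rules)
    return num / den if den > 0 else 0.0

print(f"controller increment for e=0.4, de=-0.2: {fuzzy_pi_increment(0.4, -0.2):+.3f}")
```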

  13. Modelling Escherichia coli concentrations in the tidal Scheldt river and estuary.

    PubMed

    de Brauwere, Anouk; de Brye, Benjamin; Servais, Pierre; Passerat, Julien; Deleersnijder, Eric

    2011-04-01

    Recent observations in the tidal Scheldt River and Estuary revealed a poor microbiological water quality and substantial variability of this quality which can hardly be assigned to a single factor. To assess the importance of tides, river discharge, point sources, upstream concentrations, mortality and settling a new model (SLIM-EC) was built. This model was first validated by comparison with the available field measurements of Escherichia coli (E. coli, a common fecal bacterial indicator) concentrations. The model simulations agreed well with the observations, and in particular were able to reproduce the observed long-term median concentrations and variability. Next, the model was used to perform sensitivity runs in which one process/forcing was removed at a time. These simulations revealed that the tide, upstream concentrations and the mortality process are the primary factors controlling the long-term median E. coli concentrations and the observed variability. The tide is crucial to explain the increased concentrations upstream of important inputs, as well as a generally increased variability. Remarkably, the wastewater treatment plants discharging in the study domain do not seem to have a significant impact. This is due to a dilution effect, and to the fact that the concentrations coming from upstream (where large cities are located) are high. Overall, the settling process as it is presently described in the model does not significantly affect the simulated E. coli concentrations. Copyright © 2011 Elsevier Ltd. All rights reserved.

  14. The potential of different artificial neural network (ANN) techniques in daily global solar radiation modeling based on meteorological data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Behrang, M.A.; Assareh, E.; Ghanbarzadeh, A.

    2010-08-15

    The main objective of the present study is to predict daily global solar radiation (GSR) on a horizontal surface, based on meteorological variables, using different artificial neural network (ANN) techniques. Daily mean air temperature, relative humidity, sunshine hours, evaporation, and wind speed values between 2002 and 2006 for Dezful city in Iran (32°16'N, 48°25'E) are used in this study. In order to consider the effect of each meteorological variable on daily GSR prediction, the following six combinations of input variables are considered, in each case with daily GSR as the output: (I) day of the year, daily mean air temperature, and relative humidity; (II) day of the year, daily mean air temperature, and sunshine hours; (III) day of the year, daily mean air temperature, relative humidity, and sunshine hours; (IV) combination (III) plus evaporation; (V) combination (III) plus wind speed; and (VI) combination (III) plus evaporation and wind speed. Multi-layer perceptron (MLP) and radial basis function (RBF) neural networks are applied for daily GSR modeling based on the six proposed combinations. The measured data between 2002 and 2005 are used to train the neural networks, while the data for 214 days from 2006 are used as testing data. The comparison of results from the ANNs and different conventional GSR prediction (CGSRP) models shows very good improvements (i.e., the best ANN model (MLP-V) has a mean absolute percentage error (MAPE) of about 5.21% versus 10.02% for the best CGSRP model (CGSRP 5)).
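
    A compact sketch of combination (V) with an MLP regressor follows; the synthetic data stand in for the Dezful measurements, and the network size is an arbitrary choice.

```python
# MLP sketch of combination (V): day of year, temperature, humidity, sunshine,
# wind speed as inputs, daily GSR as output; MAPE is the reported metric.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(11)
n = 1460                                     # ~4 years of daily records
doy = rng.integers(1, 366, size=n)
temp = rng.normal(30, 8, n)
rh, sun, wind = rng.uniform(10, 90, n), rng.uniform(0, 13, n), rng.uniform(0, 10, n)
gsr = 20 + 10 * np.sin(2 * np.pi * doy / 365) + 0.8 * sun - 0.05 * rh + rng.normal(0, 1, n)

X = np.column_stack([doy, temp, rh, sun, wind])
model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(20,), max_iter=2000, random_state=0))
model.fit(X[:1246], gsr[:1246])               # train on the earlier years
pred = model.predict(X[1246:])                # hold out ~214 days, as in the study
mape = np.mean(np.abs((gsr[1246:] - pred) / gsr[1246:])) * 100
print(f"MAPE on held-out days: {mape:.1f}%")
```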

  15. Fuzzy Neuron: Method and Hardware Realization

    NASA Technical Reports Server (NTRS)

    Krasowski, Michael J.; Prokop, Norman F.

    2014-01-01

    This innovation represents a method by which single-to-multi-input, single-to-many-output system transfer functions can be estimated from input/output data sets. This innovation can be run in the background while a system is operating under other means (e.g., through human operator effort), or may be utilized offline using data sets created from observations of the estimated system. It utilizes a set of fuzzy membership functions spanning the input space for each input variable. Linear combiners associated with combinations of input membership functions are used to create the output(s) of the estimator. Coefficients are adjusted online through the use of learning algorithms.

  16. Group interaction and flight crew performance

    NASA Technical Reports Server (NTRS)

    Foushee, H. Clayton; Helmreich, Robert L.

    1988-01-01

    The application of human-factors analysis to the performance of aircraft-operation tasks by the crew as a group is discussed in an introductory review and illustrated with anecdotal material. Topics addressed include the function of a group in the operational environment, the classification of group performance factors (input, process, and output parameters), input variables and the flight crew process, and the effect of process variables on performance. Consideration is given to aviation safety issues, techniques for altering group norms, ways of increasing crew effort and coordination, and the optimization of group composition.

  17. Appraisal of Weather Research and Forecasting Model Downscaling of Hydro-meteorological Variables and their Applicability for Discharge Prediction: Prognostic Approach for Ungauged Basin

    NASA Astrophysics Data System (ADS)

    Srivastava, P. K.; Han, D.; Rico-Ramirez, M. A.; Bray, M.; Islam, T.; Petropoulos, G.; Gupta, M.

    2015-12-01

    Hydro-meteorological variables such as precipitation and reference evapotranspiration (ETo) are the most important variables for discharge prediction. However, it is not always possible to obtain them from ground-based measurements, particularly in ungauged catchments. The mesoscale WRF (Weather Research and Forecasting) model can be used to predict hydro-meteorological variables. However, hydro-meteorologists would like to know how well downscaled global data products compare with ground-based measurements, and whether the downscaled data can be used for ungauged catchments. Even in gauged catchments, most stations have only rain and flow gauges installed. Measurements of other hydro-meteorological variables such as solar radiation, wind speed, air temperature, and dew point are usually missing, which complicates the problem. In this study, for downscaling the global datasets, the WRF model is set up over the Brue catchment with three nested domains (D1, D2 and D3) with horizontal grid spacings of 81 km, 27 km and 9 km. The hydro-meteorological variables are downscaled using the WRF model from the National Centers for Environmental Prediction (NCEP) reanalysis datasets and subsequently used for ETo estimation with the Penman-Monteith equation. The downscaled weather variables and precipitation are compared against ground-based datasets; the variables agree with observations over the complete monitoring period as well as seasonally, except for precipitation, whose performance is poorer in comparison with the measured rainfall. The WRF-estimated precipitation and ETo are then used as inputs to the Probability Distributed Model (PDM) for discharge prediction. Input data and model parameter sensitivity analysis and uncertainty estimation are also taken into account for the PDM calibration and prediction, following the Generalised Likelihood Uncertainty Estimation (GLUE) approach. The overall analysis suggests that the uncertainty estimates of discharge predicted using WRF-downscaled ETo have comparable performance to those based on ground-based observed datasets, and hence the approach is promising for discharge prediction in the absence of ground-based measurements.
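
    The ETo step uses the standard FAO-56 Penman-Monteith form, which is easy to state as a function; the input values in the example are illustrative.

```python
# FAO-56 Penman-Monteith reference evapotranspiration (standard published form).
import numpy as np

def eto_penman_monteith(T, Rn, G, u2, ea, P=101.3):
    """Daily reference evapotranspiration (mm/day).
    T: mean air temperature (degC), Rn: net radiation (MJ/m2/day), G: soil heat
    flux (MJ/m2/day), u2: wind speed at 2 m (m/s), ea: actual vapour pressure
    (kPa), P: atmospheric pressure (kPa)."""
    es = 0.6108 * np.exp(17.27 * T / (T + 237.3))          # saturation vapour pressure
    delta = 4098 * es / (T + 237.3) ** 2                   # slope of the es curve
    gamma = 0.000665 * P                                   # psychrometric constant
    return (0.408 * delta * (Rn - G) + gamma * (900 / (T + 273)) * u2 * (es - ea)) \
           / (delta + gamma * (1 + 0.34 * u2))

print(f"ETo = {eto_penman_monteith(T=18.0, Rn=14.0, G=0.5, u2=2.0, ea=1.2):.2f} mm/day")
```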

  18. Quantitative assessment of drivers of recent global temperature variability: an information theoretic approach

    NASA Astrophysics Data System (ADS)

    Bhaskar, Ankush; Ramesh, Durbha Sai; Vichare, Geeta; Koganti, Triven; Gurubaran, S.

    2017-12-01

    Identification and quantification of possible drivers of recent global temperature variability remains a challenging task. This important issue is addressed by adopting a non-parametric information theory technique, the transfer entropy and its normalized variant. It distinctly quantifies the actual information exchanged along with the directional flow of information between any two variables, with no bearing on their common history or inputs, unlike correlation, mutual information, etc. Measurements of greenhouse gases (CO2, CH4 and N2O), volcanic aerosols, solar activity (UV radiation, total solar irradiance (TSI) and cosmic ray flux (CR)), El Niño Southern Oscillation (ENSO) and Global Mean Temperature Anomaly (GMTA) made during 1984-2005 are utilized to distinguish driving and responding signals of global temperature variability. Estimates of their relative contributions reveal that CO2 (~24%), CH4 (~19%) and volcanic aerosols (~23%) are the primary contributors to the observed variations in GMTA. UV (~9%) and ENSO (~12%) act as secondary drivers of variations in the GMTA, while the remaining variables play a marginal role in the observed recent global temperature variability. Interestingly, ENSO and GMTA mutually drive each other at varied time lags. This study assists future modelling efforts in climate science.
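
    A plug-in estimate of transfer entropy for discretized series makes the definition concrete: TE(X→Y) = Σ p(y_{t+1}, y_t, x_t) log2[p(y_{t+1} | y_t, x_t) / p(y_{t+1} | y_t)]. The sketch below is a generic estimator on synthetic series, not the study's exact normalized variant.

```python
# Plug-in transfer entropy estimate for quantile-discretized series.
import numpy as np
from collections import Counter

def transfer_entropy(x, y, bins=4):
    xd = np.digitize(x, np.quantile(x, np.linspace(0, 1, bins + 1)[1:-1]))
    yd = np.digitize(y, np.quantile(y, np.linspace(0, 1, bins + 1)[1:-1]))
    triples = Counter(zip(yd[1:], yd[:-1], xd[:-1]))   # (y_next, y_now, x_now)
    pairs_yx = Counter(zip(yd[:-1], xd[:-1]))
    pairs_yy = Counter(zip(yd[1:], yd[:-1]))
    singles = Counter(yd[:-1])
    n = len(yd) - 1
    te = 0.0
    for (y1, y0, x0), c in triples.items():
        p_joint = c / n
        p_cond_full = c / pairs_yx[(y0, x0)]           # p(y_next | y_now, x_now)
        p_cond_hist = pairs_yy[(y1, y0)] / singles[y0] # p(y_next | y_now)
        te += p_joint * np.log2(p_cond_full / p_cond_hist)
    return te

rng = np.random.default_rng(2)
x = rng.normal(size=5000)
y = np.roll(x, 1) + 0.5 * rng.normal(size=5000)        # y is driven by lagged x
print(f"TE(x->y) = {transfer_entropy(x, y):.3f} bits, "
      f"TE(y->x) = {transfer_entropy(y, x):.3f} bits")
```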

  19. The art of spacecraft design: A multidisciplinary challenge

    NASA Technical Reports Server (NTRS)

    Abdi, F.; Ide, H.; Levine, M.; Austel, L.

    1989-01-01

    Actual design turn-around time has become shorter due to the use of optimization techniques which have been introduced into the design process. It seems that what, how and when to use these optimization techniques may be the key factor for future aircraft engineering operations. Another important aspect of these techniques is that complex physical phenomena can be modeled by a simple mathematical equation. The new powerful multilevel methodology reduces time-consuming analysis significantly while maintaining the coupling effects. This simultaneous analysis method stems from the implicit function theorem and system sensitivity derivatives of input variables. Use of the Taylor series expansion and finite differencing technique for sensitivity derivatives in each discipline makes this approach unique for screening dominant variables from nondominant variables. In this study, current Computational Fluid Dynamics (CFD) aerodynamic and sensitivity derivative/optimization techniques are applied to a simple cone-type forebody of a high-speed vehicle configuration to understand basic aerodynamic/structure interaction in a hypersonic flight condition.

  20. Response of Solar Irradiance to Sunspot-area Variations

    NASA Astrophysics Data System (ADS)

    Dudok de Wit, T.; Kopp, G.; Shapiro, A.; Witzke, V.; Kretzschmar, M.

    2018-02-01

    One of the important open questions in solar irradiance studies is whether long-term variability (i.e., on timescales of years and beyond) can be reconstructed by means of models that describe short-term variability (i.e., days) using solar proxies as inputs. Preminger & Walton showed that the relationship between spectral solar irradiance and proxies of magnetic-flux emergence, such as the daily sunspot area, can be described in the framework of linear system theory by means of the impulse response. We significantly refine that empirical model by removing spurious solar-rotational effects and by including an additional term that captures long-term variations. Our results show that long-term variability cannot be reconstructed from the short-term response of the spectral irradiance, which questions the extension of solar proxy models to these timescales. In addition, we find that the solar response is nonlinear in a way that cannot be corrected simply by applying a rescaling to a sunspot area.

  1. The Gaussian atmospheric transport model and its sensitivity to the joint frequency distribution and parametric variability.

    PubMed

    Hamby, D M

    2002-01-01

    Reconstructed meteorological data are often used in some form of long-term wind trajectory model for estimating the historical impacts of atmospheric emissions. Meteorological data for the straight-line Gaussian plume model are put into a joint frequency distribution, a three-dimensional array describing atmospheric wind direction, speed, and stability. Methods using the Gaussian model and joint frequency distribution inputs provide reasonable estimates of downwind concentration and have been shown to be accurate to within a factor of four. We have used multiple joint frequency distributions and probabilistic techniques to assess the Gaussian plume model and determine concentration-estimate uncertainty and model sensitivity. We examine the straight-line Gaussian model while calculating both sector-averaged and annual-averaged relative concentrations at various downwind distances. The sector-average concentration model was found to be most sensitive to wind speed, followed by the vertical dispersion parameter (σz), the importance of which increases as stability increases. The Gaussian model is not sensitive to stack height uncertainty. Precision of the frequency data appears to be most important to meteorological inputs when calculations are made for near-field receptors, increasing as stack height increases.
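
    The underlying straight-line Gaussian plume concentration, with ground reflection, is standard; the sketch below uses illustrative power-law dispersion coefficients in place of the stability-class parameterizations that a joint frequency distribution would supply.

```python
# Straight-line Gaussian plume with ground reflection (illustrative dispersion).
import numpy as np

def plume_concentration(Q, u, x, y, z, H, a=0.08, b=0.06):
    """Concentration (g/m^3) from a point source: Q (g/s), wind speed u (m/s),
    receptor (x, y, z) in m, effective stack height H (m)."""
    sigma_y, sigma_z = a * x**0.9, b * x**0.85    # assumed power-law growth with x
    return (Q / (2 * np.pi * u * sigma_y * sigma_z)
            * np.exp(-y**2 / (2 * sigma_y**2))
            * (np.exp(-(z - H)**2 / (2 * sigma_z**2))
               + np.exp(-(z + H)**2 / (2 * sigma_z**2))))

# Ground-level centerline concentration 1 km downwind of a 50 m stack:
print(f"{plume_concentration(Q=10.0, u=4.0, x=1000.0, y=0.0, z=0.0, H=50.0):.2e} g/m^3")
```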

  2. Modeling Nutrient Loading to Watersheds in the Great Lakes Basin: A Detailed Source Model at the Regional Scale

    NASA Astrophysics Data System (ADS)

    Luscz, E.; Kendall, A. D.; Martin, S. L.; Hyndman, D. W.

    2011-12-01

    Watershed nutrient loading models are important tools used to address issues including eutrophication, harmful algal blooms, and decreases in aquatic species diversity. Such approaches have been developed to assess the level and source of nutrient loading across a wide range of scales, yet there is typically a tradeoff between the scale of the model and the level of detail regarding the individual sources of nutrients. To avoid this tradeoff, we developed a detailed source nutrient loading model for every watershed in Michigan's lower peninsula. Sources considered include atmospheric deposition, septic tanks, waste water treatment plants, combined sewer overflows, animal waste from confined animal feeding operations and pastured animals, as well as fertilizer from agricultural, residential, and commercial sources and industrial effluents. Each source is related to readily-available GIS inputs that may vary through time. This loading model was used to assess the importance of sources and landscape factors in nutrient loading rates to watersheds, and how these have changed in recent decades. The results showed the value of detailed source inputs, revealing regional trends while still providing insight into the existence of variability at smaller scales.

  3. Simulating maize yield and biomass with spatial variability of soil field capacity

    USDA-ARS?s Scientific Manuscript database

    Spatial variability in field soil water and other properties is a challenge for system modelers who use only representative values for model inputs, rather than their distributions. In this study, we compared simulation results from a calibrated model with spatial variability of soil field capacity ...

  4. The significance of spatial variability of rainfall on streamflow: A synthetic analysis at the Upper Lee catchment, UK

    NASA Astrophysics Data System (ADS)

    Pechlivanidis, Ilias; McIntyre, Neil; Wheater, Howard

    2017-04-01

    Rainfall, one of the main inputs in hydrological modeling, is a highly heterogeneous process over a wide range of spatial scales, and hence ignoring spatial rainfall information can affect simulated streamflow. Calibration of hydrological model parameters is rarely a straightforward task due to parameter equifinality and the tendency of parameters to compensate for other uncertainties, i.e. structural and forcing-input errors. Here, we analyse the significance of the spatial variability of rainfall on streamflow as a function of catchment scale and type, and antecedent conditions, using the continuous-time, semi-distributed PDM hydrological model at the Upper Lee catchment, UK. The impact of catchment scale and type is assessed using 11 nested catchments ranging in scale from 25 to 1040 km2, and further assessed by artificially changing the catchment characteristics and translating these to model parameters, with uncertainty, using model regionalisation. Synthetic rainfall events are introduced to directly relate the change in simulated streamflow to the spatial variability of rainfall. Overall, we conclude that antecedent catchment wetness and catchment type play an important role in controlling the significance of the spatial distribution of rainfall on streamflow. Results show a relationship between hydrograph characteristics (streamflow peak and volume) and the degree of spatial variability of rainfall for the impermeable catchments under dry antecedent conditions, although this decreases at larger scales; this sensitivity is significantly undermined under wet antecedent conditions. Although there is indication that the impact of spatial rainfall on streamflow varies as a function of catchment scale, the variability of antecedent conditions between the synthetic catchments seems to mask this significance. Finally, hydrograph responses to different spatial patterns in rainfall depend on the assumptions used for model parameter estimation and also on the spatial variation in parameters, indicating the need for an uncertainty framework in such investigations.

  5. Scheduling the blended solution as industrial CO2 absorber in separation process by back-propagation artificial neural networks.

    PubMed

    Abdollahi, Yadollah; Sairi, Nor Asrina; Said, Suhana Binti Mohd; Abouzari-lotf, Ebrahim; Zakaria, Azmi; Sabri, Mohd Faizul Bin Mohd; Islam, Aminul; Alias, Yatimah

    2015-11-05

    It is believed that 80% of industrial carbon dioxide can be controlled by separation and storage technologies that use blended ionic-liquid absorbers. Among the blended absorbers, the mixture of water, N-methyldiethanolamine (MDEA), and guanidinium trifluoromethane sulfonate (gua) has presented superior stripping qualities. However, the blended solution exhibits high viscosity, which affects the cost of the separation process. In this work, the fabrication of the blend was scheduled, that is, the process was arranged, controlled, and optimized. The blend's components and the operating temperature were modeled and optimized as effective input variables to minimize viscosity as the final output, using a back-propagation artificial neural network (ANN). The modeling was carried out with four mathematical algorithms, each with an individual experimental design, to obtain the optimum topology based on root mean squared error (RMSE), R-squared (R(2)), and absolute average deviation (AAD). As a result, the final model (QP-4-8-1), with minimum RMSE and AAD as well as the highest R(2), was selected to navigate the fabrication of the blended solution. The model was then applied to obtain the optimum initial levels of the input variables, which included temperature 303-323 K, x[gua] 0-0.033, x[MDEA] 0.3-0.4, and x[H2O] 0.7-1.0. Moreover, the model yielded the relative importance ordering of the variables: x[gua] > temperature > x[MDEA] > x[H2O]; none of the variables was negligible in the fabrication. Furthermore, the model predicted the optimum points of the variables to minimize the viscosity, which was validated by further experiments. The validated results confirmed the model's schedulability. Accordingly, the ANN succeeded in modeling the initial components of the blended solutions as absorbers for CO2 capture in separation technologies, and the approach can be scaled up to industrial level. Copyright © 2015 Elsevier B.V. All rights reserved.

  6. Simulated lumped-parameter system reduced-order adaptive control studies

    NASA Technical Reports Server (NTRS)

    Johnson, C. R., Jr.; Lawrence, D. A.; Taylor, T.; Malakooti, M. V.

    1981-01-01

    Two methods of interpreting the misbehavior of reduced-order adaptive controllers are discussed. The first method is based on a system input-output description and the second on a state-variable description. The implementation of a single-input, single-output autoregressive moving-average system is considered.

  7. A new polytopic approach for the unknown input functional observer design

    NASA Astrophysics Data System (ADS)

    Bezzaoucha, Souad; Voos, Holger; Darouach, Mohamed

    2018-03-01

    In this paper, a constructive procedure to design functional unknown input observers for nonlinear continuous-time systems is proposed under the polytopic Takagi-Sugeno framework. An equivalent representation for the nonlinear model is achieved using the sector nonlinearity transformation. Applying Lyapunov theory and disturbance attenuation, linear matrix inequality (LMI) conditions are deduced and solved for feasibility to obtain the observer design matrices. To cope with the effect of unknown inputs, the classical approach of decoupling the unknown input, as in the linear case, is used. Both algebraic and solver-based solutions are proposed (relaxed conditions). Necessary and sufficient conditions for the existence of the functional polytopic observer are given. For both approaches, the general and particular cases (measurable premise variables, full state estimation with full and reduced order cases) are considered, and it is shown that the proposed conditions correspond to those presented for the standard linear case. To illustrate the proposed theoretical results, detailed numerical simulations are presented for a quadrotor aerial robot landing and a wastewater treatment plant. Both systems are highly nonlinear and represented in a T-S polytopic form with unmeasurable premise variables and unknown inputs.

  8. Nowcasting of Low-Visibility Procedure States with Ordered Logistic Regression at Vienna International Airport

    NASA Astrophysics Data System (ADS)

    Kneringer, Philipp; Dietz, Sebastian; Mayr, Georg J.; Zeileis, Achim

    2017-04-01

    Low-visibility conditions have a large impact on aviation safety and the economic efficiency of airports and airlines. To support decision makers, we develop a statistical probabilistic nowcasting tool for the occurrence of capacity-reducing operations related to low visibility. The probabilities of four different low-visibility classes are predicted with an ordered logistic regression model based on time series of meteorological point measurements. Potential predictor variables for the statistical models are visibility, humidity, temperature and wind measurements at several measurement sites. A stepwise variable selection method indicates that visibility and humidity measurements are the most important model inputs. The forecasts are tested at 30-minute intervals up to two hours ahead, which is a sufficient time span for tactical planning at Vienna Airport. The ordered logistic regression models outperform persistence and are competitive with human forecasters.
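
    An ordered logit nowcast of this kind can be sketched with statsmodels, assuming the OrderedModel class available in statsmodels 0.13 and later; the visibility and humidity series are synthetic, and the class thresholds are invented for the example.

```python
# Ordered logistic regression sketch for low-visibility class probabilities.
import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

rng = np.random.default_rng(4)
n = 3000
visibility = rng.uniform(0, 10_000, n)          # m
humidity = rng.uniform(40, 100, n)              # %
latent = -0.001 * visibility + 0.05 * humidity + rng.logistic(size=n)
y = pd.Series(pd.cut(latent, bins=[-np.inf, -2, 0, 2, np.inf],
                     labels=["none", "LVP1", "LVP2", "LVP3"], ordered=True))

X = pd.DataFrame({"visibility": visibility, "humidity": humidity})
res = OrderedModel(y, X, distr="logit").fit(method="bfgs", disp=False)
probs = res.predict(X.iloc[:1])                 # class probabilities for one case
print(probs.round(3))
```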

  9. Simultaneous use of geological, geophysical, and LANDSAT digital data in uranium exploration. [Libya

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Missallati, A.; Prelat, A.E.; Lyon, R.J.P.

    1979-08-01

    The simultaneous use of geological, geophysical and Landsat data in uranium exploration in southern Libya is reported. The values of 43 geological, geophysical and digital data variables, including age and type of rock, geological contacts, aeroradiometric and aeromagnetic values and brightness ratios, were used as input into a geomathematical model. Stepwise discriminant analysis was used to select grid cells most favorable for detailed mineral exploration and to evaluate the significance of each variable in discriminating between the anomalous (radioactive) and nonanomalous (nonradioactive) areas. It is found that the geological contact relationships and the Landsat Band 6 and Band 7/4 ratio values were most useful in the discrimination. The procedure was found to be statistically and geologically reliable, and applicable to similar regions using only the most important geological and Landsat data.

  10. Application of a support vector machine algorithm to the safety precaution technique of medium-low pressure gas regulators

    NASA Astrophysics Data System (ADS)

    Hao, Xuejun; An, Xaioran; Wu, Bo; He, Shaoping

    2018-02-01

    In gas pipeline systems, the safe operation of gas regulators determines the stability of the fuel gas supply, and the safety precaution system for medium-low pressure gas regulators at the Beijing Gas Group is not yet perfect; therefore, optimizing the safety precaution technique has important social and economic significance. In this paper, based on the running status of medium-low pressure gas regulators in the SCADA system, a new method for gas regulator safety precaution based on the support vector machine (SVM) is presented. This method takes the gas regulator outlet pressure data as the input variables of the SVM model and the fault categories and degrees as the output variables, which will effectively enhance precaution accuracy as well as save significant manpower and material resources.
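
    A minimal sketch of the classification step follows, with simulated window features from the outlet pressure signal; the feature definitions and fault classes are assumptions for the example, not the SCADA data.

```python
# SVM fault classification on simulated outlet-pressure window features.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(6)
n = 1200
# Hypothetical features per time window: mean, std, and trend of outlet pressure.
X = np.column_stack([rng.normal(2.5, 0.2, n),      # window mean (kPa, invented)
                     rng.gamma(2, 0.01, n),        # window std
                     rng.normal(0, 0.05, n)])      # window trend
yclass = (X[:, 1] > 0.025).astype(int) + (np.abs(X[:, 2]) > 0.08).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, yclass, stratify=yclass, random_state=0)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0)).fit(X_tr, y_tr)
print(f"held-out accuracy: {clf.score(X_te, y_te):.2f}")
```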

  11. Cephalometric landmark detection in dental x-ray images using convolutional neural networks

    NASA Astrophysics Data System (ADS)

    Lee, Hansang; Park, Minseok; Kim, Junmo

    2017-03-01

    In dental X-ray images, accurate detection of cephalometric landmarks plays an important role in clinical diagnosis, treatment and surgical decisions for dental problems. In this work, we propose an end-to-end deep learning system for cephalometric landmark detection in dental X-ray images, using convolutional neural networks (CNNs). For detecting 19 cephalometric landmarks in dental X-ray images, we develop a detection system using CNN-based coordinate-wise regression. By viewing the x- and y-coordinates of all landmarks as 38 independent variables, multiple CNN-based regression systems are constructed to predict the coordinate variables from input X-ray images. First, each coordinate variable is normalized by the image height or width. For each normalized coordinate variable, a CNN-based regression system is trained on the training images and the corresponding coordinate variable. We train 38 regression systems with the same CNN structure, one per coordinate variable. Finally, we compute the 38 coordinate variables with these trained systems from unseen images and extract the 19 landmarks by pairing the regressed coordinates. In experiments, the public database from the Grand Challenges in Dental X-ray Image Analysis at ISBI 2015 was used, and the proposed system showed promising performance by successfully locating the cephalometric landmarks within considerable margins from the ground truths.
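
    A toy version of the coordinate regression idea is easy to write down; the sketch below trains one small network on one (x, y) pair, whereas the paper trains 38 single-output regressors with a shared architecture. Layer sizes and image dimensions are illustrative.

```python
# Toy CNN coordinate regression: image in, normalized landmark coordinates out.
import torch
import torch.nn as nn

class LandmarkRegressor(nn.Module):
    def __init__(self, n_outputs=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4))
        self.head = nn.Sequential(
            nn.Flatten(), nn.Linear(32 * 4 * 4, 64), nn.ReLU(),
            nn.Linear(64, n_outputs), nn.Sigmoid())   # coordinates in [0, 1]

    def forward(self, x):
        return self.head(self.features(x))

model = LandmarkRegressor()
images = torch.randn(8, 1, 128, 128)           # batch of grayscale X-ray crops
coords = torch.rand(8, 2)                      # normalized ground-truth (x, y)
loss = nn.MSELoss()(model(images), coords)
loss.backward()
print(f"initial MSE loss: {loss.item():.3f}")
```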

  12. Plio-Pleistocene evolution of water mass exchange and erosional input at the Atlantic-Arctic gateway

    NASA Astrophysics Data System (ADS)

    Teschner, Claudia; Frank, Martin; Haley, Brian A.; Knies, Jochen

    2016-05-01

    Water mass exchange between the Arctic Ocean and the Norwegian-Greenland Seas has played an important role for the Atlantic thermohaline circulation and Northern Hemisphere climate. We reconstruct past water mass mixing and erosional inputs from the radiogenic isotope compositions of neodymium (Nd), lead (Pb), and strontium (Sr) at Ocean Drilling Program site 911 (leg 151) from 906 m water depth on the Yermak Plateau in the Fram Strait over the past 5.2 Myr. The isotopic compositions of past bottom waters were extracted from authigenic oxyhydroxide coatings of the bulk sediments. Neodymium isotope signatures obtained from surface sediments agree well with the present-day deepwater ɛNd signature of -11.0 ± 0.2. Prior to 2.7 Ma the Nd and Pb isotope compositions of the bottom waters show only small variations, indicative of a consistent influence of Atlantic waters. Since the major intensification of the Northern Hemisphere Glaciation at 2.7 Ma, the seawater Nd isotope composition has varied more markedly due to changes in weathering inputs related to the waxing and waning of the ice sheets on Svalbard, the Barents Sea, and the Eurasian shelf, to changes in water mass exchange, and to the increasing supply of ice-rafted debris (IRD) originating from the Arctic Ocean. The seawater Pb isotope record also exhibits a higher short-term variability after 2.7 Ma, but there is also a trend toward more radiogenic values, which reflects a combination of changes in input sources and enhanced incongruent weathering inputs of Pb released from freshly eroded old continental rocks.

  13. Gravity dependence of the effect of optokinetic stimulation on the subjective visual vertical.

    PubMed

    Ward, Bryan K; Bockisch, Christopher J; Caramia, Nicoletta; Bertolini, Giovanni; Tarnutzer, Alexander Andrea

    2017-05-01

    Accurate and precise estimates of direction of gravity are essential for spatial orientation. According to Bayesian theory, multisensory vestibular, visual, and proprioceptive input is centrally integrated in a weighted fashion based on the reliability of the component sensory signals. For otolithic input, a decreasing signal-to-noise ratio was demonstrated with increasing roll angle. We hypothesized that the weights of vestibular (otolithic) and extravestibular (visual/proprioceptive) sensors are roll-angle dependent and predicted an increased weight of extravestibular cues with increasing roll angle, potentially following the Bayesian hypothesis. To probe this concept, the subjective visual vertical (SVV) was assessed in different roll positions (≤ ±120°, steps = 30°, n = 10) with/without presenting an optokinetic stimulus (velocity = ±60°/s). The optokinetic stimulus biased the SVV toward the direction of stimulus rotation for roll angles ≥ ±30° (P < 0.005). Offsets grew from 3.9 ± 1.8° (upright) to 22.1 ± 11.8° (±120° roll tilt, P < 0.001). Trial-to-trial variability increased with roll angle, demonstrating a nonsignificant increase when providing optokinetic stimulation. Variability and optokinetic bias were correlated (R² = 0.71, slope = 0.71, 95% confidence interval = 0.57-0.86). An optimal-observer model combining an optokinetic bias with vestibular input reproduced measured errors closely. These findings support the hypothesis of a weighted multisensory integration when estimating direction of gravity with optokinetic stimulation. Visual input was weighted more when vestibular input became less reliable, i.e., at larger roll-tilt angles. However, according to Bayesian theory, the variability of combined cues is always lower than the variability of each source cue. If the observed increase in variability, although nonsignificant, is true, either it must depend on an additional source of variability, added after SVV computation, or it would conflict with the Bayesian hypothesis. NEW & NOTEWORTHY Applying a rotating optokinetic stimulus while recording the subjective visual vertical at different whole-body roll angles, we noted that the optokinetic-induced bias correlated with the roll angle. These findings support the hypothesis that the established optimal weighting of single-sensory cues depending on their reliability to estimate direction of gravity could be extended to a bias caused by visual self-motion stimuli. Copyright © 2017 the American Physiological Society.
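
    The reliability-weighted integration invoked here reduces, for Gaussian cues, to inverse-variance weighting. The following short sketch shows that rule and its key prediction that the combined variance is never larger than either source variance; all numbers are made up for illustration.

```python
# Inverse-variance (Bayesian) cue combination: each cue is weighted by its
# reliability (1/variance). Numbers below are illustrative, not study data.
import numpy as np

def combine(mu_vest, var_vest, mu_vis, var_vis):
    w = (1 / var_vest) / (1 / var_vest + 1 / var_vis)   # vestibular weight
    mu = w * mu_vest + (1 - w) * mu_vis
    var = 1 / (1 / var_vest + 1 / var_vis)              # always <= min(var)
    return mu, var

# Vestibular noise grows with roll angle, so the visual cue gains weight.
for roll, var_vest in [(0, 4.0), (60, 25.0), (120, 100.0)]:
    mu, var = combine(mu_vest=0.0, var_vest=var_vest, mu_vis=10.0, var_vis=36.0)
    print(f"roll {roll:>3} deg: estimate {mu:5.2f} deg, sd {np.sqrt(var):.2f} deg")
```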

  14. Toward a Geoscientific Semantic Web Based on How Geoscientists Talk Across Disciplines

    NASA Astrophysics Data System (ADS)

    Peckham, S. D.

    2015-12-01

    Are there terms and scientific concepts from math and science that almost all geoscientists understand? Is there a limited set of terms, patterns and language elements that geoscientists use for efficient, unambiguous communication that could be used to describe the variables that they measure, store in data sets and use as model inputs and outputs? In this talk it will be argued that the answer to both questions is "yes" by drawing attention to many such patterns and then showing how they have been used to create a rich set of naming conventions for variables called the CSDMS Standard Names. Variables, which store numerical quantities associated with specific objects, are the fundamental currency of science. They are the items that are measured and saved in data sets, which may then be read into models. They are the inputs and outputs of models and the items exchanged between coupled models. They also star in the equations that summarize our scientific knowledge. Carefully constructed, unambiguous and unique labels for commonly used variables therefore provide an attractive mechanism for automatic semantic mediation when variables are to be shared between heterogeneous resources. They provide a means to automatically check for semantic equivalence so that variables can be safely shared in resource compositions. A good set of standardized variable names can serve as the hub in a hub-and-spoke solution to semantic mediation, where the "internal vocabularies" of geoscience resources (i.e., data sets and models) are mapped to and from the hub to facilitate interoperability and data sharing. When built from patterns and terms that most geoscientists are already familiar with, these standardized variable names are then "readable" by both humans and machines. Despite the importance of variables in scientific work, most of the ontological work in the geosciences has focused at a higher level that supports finding resources (e.g., data sets) rather than on describing the contents of those resources. The CSDMS Standard Names have matured continuously since they were first introduced over three years ago. Many recent extensions and applications of them (e.g., different science domains, different projects, new rules, ontological work) as well as their compatibility with the International System of Quantities (ISO 80000) will be discussed.
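
    A small sketch of the machine-readability point: CSDMS Standard Names follow a documented object__quantity pattern (a double underscore separating the object part from the quantity part), which a mediation layer can split mechanically. The specific example names below should be treated as illustrative rather than authoritative entries from the list.

```python
# Mechanical parsing of object__quantity style standard names.
# The double-underscore convention follows the published CSDMS pattern;
# the example names themselves are illustrative.
examples = [
    "atmosphere_bottom_air__temperature",
    "channel_water__volume_flow_rate",
    "land_surface__elevation",
]

def parse_standard_name(name):
    """Split a standard name into its object and quantity parts."""
    obj, quantity = name.split("__", 1)
    return obj, quantity

for name in examples:
    obj, quantity = parse_standard_name(name)
    print(f"object = {obj:30s} quantity = {quantity}")
```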

  15. Compensation opportunities and waste-to-energy plants

    NASA Astrophysics Data System (ADS)

    Rada, E. C.; Castagna, G.; Adami, L.; Torretta, V.; Ragazzi, M.

    2018-05-01

    Compensation measures are part of the design pathway of a thermochemical plant. The evolution of the technology in this sector, integrated with adequate mitigations, can reduce the local environmental impact to a negligible level. In spite of that, local acceptance of modern plants is still critical. The global impact on the environment is more complex to define because of the variability of the plants' inputs. In this context, the role of compensations is very important, as it also opens interesting opportunities for the territory, as demonstrated by the analysis reported in this article.

  16. Stochastic empirical loading and dilution model (SELDM) version 1.0.0

    USGS Publications Warehouse

    Granato, Gregory E.

    2013-01-01

    The Stochastic Empirical Loading and Dilution Model (SELDM) is designed to transform complex scientific data into meaningful information about the risk of adverse effects of runoff on receiving waters, the potential need for mitigation measures, and the potential effectiveness of such management measures for reducing these risks. The U.S. Geological Survey developed SELDM in cooperation with the Federal Highway Administration to help develop planning-level estimates of event mean concentrations, flows, and loads in stormwater from a site of interest and from an upstream basin. Planning-level estimates are defined as the results of analyses used to evaluate alternative management measures; planning-level estimates are recognized to include substantial uncertainties (commonly orders of magnitude). SELDM uses information about a highway site, the associated receiving-water basin, precipitation events, stormflow, water quality, and the performance of mitigation measures to produce a stochastic population of runoff-quality variables. SELDM provides input statistics for precipitation, prestorm flow, runoff coefficients, and concentrations of selected water-quality constituents from National datasets. Input statistics may be selected on the basis of the latitude, longitude, and physical characteristics of the site of interest and the upstream basin. The user also may derive and input statistics for each variable that are specific to a given site of interest or a given area. SELDM is a stochastic model because it uses Monte Carlo methods to produce the random combinations of input variable values needed to generate the stochastic population of values for each component variable. SELDM calculates the dilution of runoff in the receiving waters and the resulting downstream event mean concentrations and annual average lake concentrations. Results are ranked, and plotting positions are calculated, to indicate the level of risk of adverse effects caused by runoff concentrations, flows, and loads on receiving waters by storm and by year. Unlike deterministic hydrologic models, SELDM is not calibrated by changing values of input variables to match a historical record of values. Instead, input values for SELDM are based on site characteristics and representative statistics for each hydrologic variable. Thus, SELDM is an empirical model based on data and statistics rather than theoretical physiochemical equations. SELDM is a lumped parameter model because the highway site, the upstream basin, and the lake basin each are represented as a single homogeneous unit. Each of these source areas is represented by average basin properties, and results from SELDM are calculated as point estimates for the site of interest. Use of the lumped parameter approach facilitates rapid specification of model parameters to develop planning-level estimates with available data. The approach allows for parsimony in the required inputs to and outputs from the model and flexibility in the use of the model. For example, SELDM can be used to model runoff from various land covers or land uses by using the highway-site definition as long as representative water quality and impervious-fraction data are available.
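
    The core stochastic calculation is a flow-weighted mixing of Monte Carlo draws. The following is a minimal sketch of that kind of mass-balance dilution computation; the distributions and their parameters are illustrative assumptions, not SELDM's national input statistics.

```python
# Monte Carlo mixing of runoff and upstream flows/concentrations into
# downstream event mean concentrations (EMCs), SELDM-style in spirit only.
import numpy as np

rng = np.random.default_rng(42)
n = 10_000                                                 # simulated storms

q_runoff = rng.lognormal(mean=-1.0, sigma=0.8, size=n)     # runoff flow, m3/s
c_runoff = rng.lognormal(mean=4.0, sigma=0.9, size=n)      # runoff EMC, ug/L
q_stream = rng.lognormal(mean=1.5, sigma=0.7, size=n)      # prestorm flow, m3/s
c_stream = rng.lognormal(mean=2.0, sigma=0.5, size=n)      # upstream EMC, ug/L

# Flow-weighted mixing gives the downstream event mean concentration.
c_down = (q_runoff * c_runoff + q_stream * c_stream) / (q_runoff + q_stream)

# Rank results to express risk as an exceedance frequency.
threshold = 100.0                                          # ug/L, illustrative
print(f"P(downstream EMC > {threshold} ug/L) = {np.mean(c_down > threshold):.3f}")
```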

  17. Methodological development for selection of significant predictors explaining fatal road accidents.

    PubMed

    Dadashova, Bahar; Arenas-Ramírez, Blanca; Mira-McWilliams, José; Aparicio-Izquierdo, Francisco

    2016-05-01

    Identification of the most relevant factors for explaining road accident occurrence is an important issue in road safety research, particularly for future decision-making processes in transport policy. However, model selection for this particular purpose is still an area of ongoing research. In this paper we propose a methodological development for model selection which addresses both the explanatory variable and adequate model selection issues. A variable selection procedure, the TIM (two-input model) method, is carried out by combining neural network design and statistical approaches. The error structure of the fitted model is assumed to follow an autoregressive process. All models are estimated using the Markov chain Monte Carlo method, where the model parameters are assigned non-informative prior distributions. The final model is built using the results of the variable selection. For the application of the proposed methodology, the number of fatal accidents in Spain during 2000-2011 was used. This indicator experienced the largest reduction internationally during the indicated years, making it an interesting time series from a road safety policy perspective. Hence, the identification of the variables that have affected this reduction is of particular interest for future decision making. The results of the variable selection process show that the selected variables are main subjects of road safety policy measures. Published by Elsevier Ltd.

  18. Evaluating variable rate fungicide applications for control of Sclerotinia

    USDA-ARS?s Scientific Manuscript database

    Oklahoma peanut growers continue to try to increase yields and reduce input costs. Perhaps the largest input in a peanut crop is fungicide applications. This is especially true for areas in the state that have high disease pressure from Sclerotinia. On average, a single fungicide application cost...

  19. Spatial Variability of Nitrogen Isotope Ratios of Particulate Material from Northwest Atlantic Continental Shelf Waters

    EPA Science Inventory

    Human encroachment on the coastal zone has led to a rise in the delivery of nitrogen (N) to estuarine and near-shore waters. Potential routes of anthropogenic N inputs include export from estuaries, atmospheric deposition, and dissolved N inputs from groundwater outflow. Stable...

  20. Learning a Novel Pattern through Balanced and Skewed Input

    ERIC Educational Resources Information Center

    McDonough, Kim; Trofimovich, Pavel

    2013-01-01

    This study compared the effectiveness of balanced and skewed input at facilitating the acquisition of the transitive construction in Esperanto, characterized by the accusative suffix "-n" and variable word order (SVO, OVS). Thai university students (N = 98) listened to 24 sentences under skewed (one noun with high token frequency) or…

  1. 49 CFR 178.337-4 - Joints.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... must be considered as essential variables: Number of passes; thickness of plate; heat input per pass... not be used. The number of passes, thickness of plate, and heat input per pass may not vary more than... machine heat processes, provided such surfaces are remelted in the subsequent welding process. Where there...

  2. 49 CFR 178.337-4 - Joints.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... must be considered as essential variables: Number of passes; thickness of plate; heat input per pass... not be used. The number of passes, thickness of plate, and heat input per pass may not vary more than... machine heat processes, provided such surfaces are remelted in the subsequent welding process. Where there...

  3. 49 CFR 178.337-4 - Joints.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... must be considered as essential variables: Number of passes; thickness of plate; heat input per pass... not be used. The number of passes, thickness of plate, and heat input per pass may not vary more than... machine heat processes, provided such surfaces are remelted in the subsequent welding process. Where there...

  4. 49 CFR 178.337-4 - Joints.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... must be considered as essential variables: Number of passes; thickness of plate; heat input per pass... not be used. The number of passes, thickness of plate, and heat input per pass may not vary more than... machine heat processes, provided such surfaces are remelted in the subsequent welding process. Where there...

  5. Using 87Sr/86Sr ratios to investigate changes in stream chemistry during snowmelt in the Provo River, Utah, USA

    NASA Astrophysics Data System (ADS)

    Hale, C. A.; Carling, G. T.; Fernandez, D. P.; Nelson, S.; Aanderud, Z.; Tingey, D. G.; Dastrup, D.

    2017-12-01

    Water chemistry in mountain streams is variable during spring snowmelt as shallow groundwater flow paths are activated in the watershed, introducing solutes derived from soil water. Sr isotopes and other tracers can be used to differentiate waters that have interacted with soils and dust (shallow groundwater) and bedrock (deep groundwater). To investigate processes controlling water chemistry during snowmelt, we analyzed 87Sr/86Sr ratios and Sr and other trace element concentrations in bulk snowpack, dust, soil, soil water, ephemeral channels, and river water during snowmelt runoff in the upper Provo River watershed in northern Utah, USA, over four years (2014-2017). Strontium concentrations in the river averaged 20 ppb during base flow and decreased to 10 ppb during snowmelt runoff. 87Sr/86Sr ratios were around 0.717 during base flow and decreased to 0.715 in 2014 and 0.713 in 2015 and 2016 during snowmelt, trending towards the less radiogenic values of mineral dust inputs in the Uinta Mountain soils. Ephemeral channels, representing shallow flow paths with soil water inputs, had Sr concentrations between 7 and 20 ppb and 87Sr/86Sr ratios between 0.713 and 0.716. Snowpack Sr concentrations were generally <2 ppb with 87Sr/86Sr ratios between 0.710 and 0.711, similar to atmospheric dust inputs. The less radiogenic 87Sr/86Sr ratios and lower Sr concentrations in the river during snowmelt are likely a result of activating shallow groundwater flow paths, which allows melt water to interact with shallow soils that contain accumulated dust deposits with a less radiogenic 87Sr/86Sr ratio. These results suggest that flow paths and atmospheric dust are important to consider when investigating variable solute loads in mountain streams.
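
    The interpretation rests on two-endmember mixing, where isotope ratios must be weighted by Sr concentration rather than averaged directly. A short sketch follows; the endmember values are taken loosely from the abstract (deep groundwater near 0.717 at 20 ppb, dust-influenced shallow water near 0.713 at 10 ppb) and are only illustrative.

```python
# Concentration-weighted two-endmember mixing of 87Sr/86Sr ratios.
import numpy as np

def mix_ratio(f_shallow, c_shallow, r_shallow, c_deep, r_deep):
    """87Sr/86Sr of a mixture with mass fraction f_shallow of shallow water."""
    c_mix = f_shallow * c_shallow + (1 - f_shallow) * c_deep
    return (f_shallow * c_shallow * r_shallow
            + (1 - f_shallow) * c_deep * r_deep) / c_mix

for f in np.linspace(0, 1, 5):
    r = mix_ratio(f, c_shallow=10.0, r_shallow=0.713, c_deep=20.0, r_deep=0.717)
    print(f"shallow fraction {f:.2f} -> 87Sr/86Sr = {r:.4f}")
```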

  6. 3-D ballistic transport of ellipsoidal volcanic projectiles considering horizontal wind field and variable shape-dependent drag coefficients

    NASA Astrophysics Data System (ADS)

    Bertin, Daniel

    2017-02-01

    An innovative 3-D numerical model for the dynamics of volcanic ballistic projectiles is presented here. The model focuses on ellipsoidal particles and improves previous approaches by considering a horizontal wind field, virtual mass forces, and drag forces subject to variable shape-dependent drag coefficients. Modeling suggests that the projectile's launch velocity and ejection angle are first-order parameters influencing ballistic trajectories. The projectile's density and minor radius are second-order factors, whereas both the intermediate and major radii of the projectile are of third order. Comparing output parameters under different input data highlights the importance of considering a horizontal wind field and variable shape-dependent drag coefficients in ballistic modeling, which suggests that they should be included in every ballistic model. On the other hand, virtual mass forces can be discarded since they contribute almost nothing to ballistic trajectories. Simulation results were used to constrain some crucial input parameters (launch velocity, ejection angle, wind speed, and wind azimuth) of the block that formed the biggest and most distal ballistic impact crater during the 1984-1993 eruptive cycle of Lascar volcano, Northern Chile. Subsequently, up to 10⁶ simulations were performed, with nine ejection parameters defined by a Latin-hypercube sampling approach. Simulation results were summarized as a quantitative probabilistic hazard map for ballistic projectiles. Transects were also produced to depict aerial hazard zones based on the same probabilistic procedure. Both maps combined can be used as a hazard prevention tool for ground and aerial transit near restless volcanoes.
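
    For readers unfamiliar with the sampling step, this is a minimal sketch of drawing ejection parameters with a Latin hypercube, as the abstract describes; the nine parameter bounds below are placeholders, not the values used for Lascar volcano.

```python
# Latin-hypercube sampling of nine ejection parameters for many simulations.
from scipy.stats import qmc

# Nine parameters (e.g., launch velocity, ejection angle, wind speed,
# wind azimuth, ...); the bounds are illustrative placeholders.
lower = [50.0, 10.0, 0.0, 0.0, 0.1, 0.1, 0.1, 500.0, 0.5]
upper = [400.0, 80.0, 30.0, 360.0, 2.0, 1.0, 0.5, 3000.0, 1.5]

sampler = qmc.LatinHypercube(d=9, seed=0)
unit_samples = sampler.random(n=100_000)           # points in [0, 1]^9
params = qmc.scale(unit_samples, lower, upper)     # rescale to physical bounds
print(params.shape)                                # (100000, 9)
```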

  7. 39 CFR 3050.22 - Documentation supporting attributable cost estimates in the Postal Service's section 3652 report.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ..., Demand Side Variability, and Network Variability studies, including input data, processing programs, and... should include the product or product groups carried under each listed contract; (k) Spreadsheets and...

  8. Investigation of energy management strategies for photovoltaic systems - An analysis technique

    NASA Technical Reports Server (NTRS)

    Cull, R. C.; Eltimsahy, A. H.

    1982-01-01

    Progress is reported in formulating energy management strategies for stand-alone PV systems, developing an analytical tool that can be used to investigate these strategies, applying this tool to determine the proper control algorithms and control variables (controller inputs and outputs) for a range of applications, and quantifying the relative performance and economics when compared to systems that do not apply energy management. The analysis technique developed may be broadly applied to a variety of systems to determine the most appropriate energy management strategies, control variables and algorithms. The only inputs required are statistical distributions for stochastic energy inputs and outputs of the system and the system's device characteristics (efficiency and ratings). Although the formulation was originally driven by stand-alone PV system needs, the techniques are also applicable to hybrid and grid connected systems.

  9. Investigation of energy management strategies for photovoltaic systems - An analysis technique

    NASA Astrophysics Data System (ADS)

    Cull, R. C.; Eltimsahy, A. H.

    Progress is reported in formulating energy management strategies for stand-alone PV systems, developing an analytical tool that can be used to investigate these strategies, applying this tool to determine the proper control algorithms and control variables (controller inputs and outputs) for a range of applications, and quantifying the relative performance and economics when compared to systems that do not apply energy management. The analysis technique developed may be broadly applied to a variety of systems to determine the most appropriate energy management strategies, control variables and algorithms. The only inputs required are statistical distributions for stochastic energy inputs and outputs of the system and the system's device characteristics (efficiency and ratings). Although the formulation was originally driven by stand-alone PV system needs, the techniques are also applicable to hybrid and grid connected systems.

  10. Physiological gain leads to high ISI variability in a simple model of a cortical regular spiking cell.

    PubMed

    Troyer, T W; Miller, K D

    1997-07-01

    To understand the interspike interval (ISI) variability displayed by visual cortical neurons (Softky & Koch, 1993), it is critical to examine the dynamics of their neuronal integration, as well as the variability in their synaptic input current. Most previous models have focused on the latter factor. We match a simple integrate-and-fire model to the experimentally measured integrative properties of cortical regular spiking cells (McCormick, Connors, Lighthall, & Prince, 1985). After setting RC parameters, the post-spike voltage reset is set to match experimental measurements of neuronal gain (obtained from in vitro plots of firing frequency versus injected current). Examination of the resulting model leads to an intuitive picture of neuronal integration that unifies the seemingly contradictory 1/√N and random walk pictures that have previously been proposed. When ISIs are dominated by postspike recovery, 1/√N arguments hold and spiking is regular; after the "memory" of the last spike becomes negligible, spike threshold crossing is caused by input variance around a steady state and spiking is Poisson. In integrate-and-fire neurons matched to cortical cell physiology, steady-state behavior is predominant, and ISIs are highly variable at all physiological firing rates and for a wide range of inhibitory and excitatory inputs.
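
    A minimal integrate-and-fire sketch in the spirit of the abstract follows: with a post-spike reset close to threshold (high gain), firing in the fluctuation-driven steady state is irregular, which shows up as a high coefficient of variation (CV) of the ISIs. Parameter values are illustrative, not the paper's fits to regular spiking cells.

```python
# Leaky integrate-and-fire neuron with noisy subthreshold drive; the CV of
# the resulting ISIs quantifies spiking irregularity.
import numpy as np

rng = np.random.default_rng(1)
dt, tau, v_th, v_reset = 0.1e-3, 10e-3, 1.0, 0.6   # s, s, normalized voltage
mu, sigma = 0.9, 0.4                               # subthreshold mean drive

v, spikes, t = v_reset, [], 0.0
for _ in range(int(60.0 / dt)):                    # 60 s of simulated time
    noise = sigma * np.sqrt(dt / tau) * rng.standard_normal()
    v += dt / tau * (mu - v) + noise               # leaky integration + noise
    t += dt
    if v >= v_th:                                  # threshold crossing -> spike
        spikes.append(t)
        v = v_reset                                # partial reset -> high gain

isi = np.diff(spikes)
print(f"{len(spikes)} spikes, CV of ISIs = {isi.std() / isi.mean():.2f}")
```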

  11. Analysis on electronic control unit of continuously variable transmission

    NASA Astrophysics Data System (ADS)

    Cao, Shuanggui

    Continuously variable transmission (CVT) systems can ensure that the engine works along the line of best fuel economy, improving fuel economy, saving fuel and reducing harmful gas emissions. At the same time, a continuously variable transmission makes changes in vehicle speed smoother and improves ride comfort. Although CVT technology has undergone great development, many shortcomings remain: the CVT systems of ordinary vehicles still suffer from low efficiency, poor starting performance, low transmitted power, non-ideal control and high cost, among other issues. Therefore, many scholars have begun to study new types of continuously variable transmission. A transmission controlled by an electronic system can achieve automatic control of power transmission and fully exploit the characteristics of the engine to achieve optimal control of the powertrain, so that the vehicle always operates near its best condition. The electronic control unit is composed of a core processor, input and output circuit modules and other auxiliary circuit modules. The input module collects and processes many signals sent by sensors, such as the throttle angle, brake signal, engine speed signal, speed signals of the input and output shafts of the transmission, manual shift signals, mode selection signals, gear position signal and speed ratio signal, so as to provide the corresponding processing for the controller core.

  12. Simple Sensitivity Analysis for Orion GNC

    NASA Technical Reports Server (NTRS)

    Pressburger, Tom; Hoelscher, Brian; Martin, Rodney; Sricharan, Kumar

    2013-01-01

    The performance of Orion flight software, especially its GNC software, is being analyzed by running Monte Carlo simulations of Orion spacecraft flights. The simulated performance is analyzed for conformance with flight requirements, expressed as performance constraints. Flight requirements include guidance (e.g., touchdown distance from target) and control (e.g., control saturation) as well as performance (e.g., heat load constraints). The Monte Carlo simulations disperse hundreds of simulation input variables, for everything from mass properties to date of launch. We describe in this paper a sensitivity analysis tool (Critical Factors Tool or CFT) developed to find the input variables or pairs of variables which by themselves significantly influence satisfaction of requirements or significantly affect key performance metrics (e.g., touchdown distance from target). Knowing these factors can inform robustness analysis, can inform where engineering resources are most needed, and could even affect operations. The contributions of this paper include the introduction of novel sensitivity measures, such as estimating success probability, and a technique for determining whether pairs of factors are interacting dependently or independently. The tool identified input variables such as moments, mass, thrust dispersions, and date of launch as significant factors for the success of various requirements. Examples are shown in this paper, as well as a summary and physics discussion of the EFT-1 driving factors that the tool found.
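
    One simple sensitivity measure in this spirit is to estimate how the probability of meeting a requirement varies across quantile bins of each dispersed input. The sketch below uses synthetic Monte Carlo stand-ins and is not NASA's actual CFT algorithm.

```python
# Bin each input into quantiles and compare the requirement-success
# probability across bins; a large spread flags a sensitive input.
import numpy as np

rng = np.random.default_rng(7)
n = 20_000
inputs = {
    "mass":        rng.normal(1000, 50, n),
    "thrust_disp": rng.normal(0, 1, n),
    "launch_doy":  rng.uniform(0, 365, n),
}
# Synthetic requirement: touchdown miss distance under a limit.
miss = 2.0 * np.abs(inputs["thrust_disp"]) + 0.01 * (inputs["mass"] - 1000) ** 2 / 50
success = miss < 3.0

for name, x in inputs.items():
    edges = np.quantile(x, np.linspace(0, 1, 6))        # 5 quantile bins
    p = [success[(x >= lo) & (x <= hi)].mean()
         for lo, hi in zip(edges[:-1], edges[1:])]
    spread = max(p) - min(p)                            # sensitivity score
    print(f"{name:12s} success prob by bin: {np.round(p, 2)}  spread={spread:.2f}")
```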

  13. The many worlds hypothesis of dopamine prediction error: implications of a parallel circuit architecture in the basal ganglia.

    PubMed

    Lau, Brian; Monteiro, Tiago; Paton, Joseph J

    2017-10-01

    Computational models of reinforcement learning (RL) strive to produce behavior that maximises reward, and thus allow software or robots to behave adaptively [1]. At the core of RL models is a learned mapping between 'states'-situations or contexts that an agent might encounter in the world-and actions. A wealth of physiological and anatomical data suggests that the basal ganglia (BG) is important for learning these mappings [2,3]. However, the computations performed by specific circuits are unclear. In this brief review, we highlight recent work concerning the anatomy and physiology of BG circuits that suggest refinements in our understanding of computations performed by the basal ganglia. We focus on one important component of basal ganglia circuitry, midbrain dopamine neurons, drawing attention to data that has been cast as supporting or departing from the RL framework that has inspired experiments in basal ganglia research over the past two decades. We suggest that the parallel circuit architecture of the BG might be expected to produce variability in the response properties of different dopamine neurons, and that variability in response profile may not reflect variable functions, but rather different arguments that serve as inputs to a common function: the computation of prediction error. Copyright © 2017 Elsevier Ltd. All rights reserved.
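
    The prediction error at the center of this account is the standard temporal-difference (TD) error for a learned state value function. The following sketch shows that computation on a toy state chain; the states, rewards, and learning rate are illustrative, not a model from the review.

```python
# Temporal-difference learning on a fixed chain of states; delta is the
# dopamine-like reward prediction error.
import numpy as np

n_states, alpha, gamma = 5, 0.1, 0.9
V = np.zeros(n_states)                      # learned state values

def td_update(s, r, s_next):
    """One TD step: update V[s] toward r + gamma * V[s_next]."""
    delta = r + gamma * V[s_next] - V[s]    # reward prediction error
    V[s] += alpha * delta
    return delta

# Chain 0 -> 1 -> ... -> 4 with reward only at the end: as V is learned,
# the prediction error migrates from the reward to its earliest predictor.
for episode in range(200):
    for s in range(n_states - 1):
        r = 1.0 if s == n_states - 2 else 0.0
        td_update(s, r, s + 1)
print(np.round(V, 2))
```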

  14. Spatial patterns of throughfall isotopic composition at the event and seasonal timescales

    Treesearch

    Scott T. Allen; Richard F. Keim; Jeffrey J. McDonnell

    2015-01-01

    Spatial variability of throughfall isotopic composition in forests is indicative of complex processes occurring in the canopy and remains insufficiently understood to properly characterize precipitation inputs to the catchment water balance. Here we investigate variability of throughfall isotopic composition with the objectives: (1) to quantify the spatial variability...

  15. Modeling of polymer photodegradation for solar cell modules

    NASA Technical Reports Server (NTRS)

    Somersall, A. C.; Guillet, J. E.

    1982-01-01

    It was shown that many of the experimental observations in the photooxidation of hydrocarbon polymers can be accounted for with a computer simulation using an elementary mechanistic model with corresponding rate constants for each reaction. For outdoor applications, however, such as in photovoltaics, the variation of temperature must have important effects on the useful lifetimes of such materials. A search was made for the data bank necessary to replace the isothermal rate constant values with Arrhenius activation parameters: A (the pre-exponential factor) and E (the activation energy). The best collection of data assembled to date is summarized. Note, however, that the problem is now considerably enlarged, since the 51 input variables are replaced with 102 parameters. The sensitivity of the overall scheme is such that even after many computer simulations, a successful photooxidation simulation with the expanded variable set was not completed. Many of the species in the complex process undergo a number of competitive pathways, the relative importance of each often being sensitive to small changes in the calculated rate constant values.
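
    The substitution the abstract describes is the Arrhenius form k(T) = A·exp(−E/(R·T)), which replaces each constant rate with two parameters and thus doubles the count (51 rate constants become 102 parameters). A short sketch with placeholder A and E values:

```python
# Arrhenius temperature dependence of a rate constant; A and E below are
# placeholders, not values from the study's data bank.
import numpy as np

R = 8.314                                  # gas constant, J/(mol K)

def arrhenius(A, E, T):
    """Rate constant at absolute temperature T (K)."""
    return A * np.exp(-E / (R * T))

A, E = 1.0e13, 90e3                        # 1/s and J/mol, illustrative
for T in (273.0, 298.0, 323.0):            # an outdoor temperature range
    print(f"T = {T:5.1f} K  k = {arrhenius(A, E, T):.3e} 1/s")
```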

  16. Designing hydrological and financial instruments for small scale farmers in Sub-Saharan Africa: A socio-hydrological analysis

    NASA Astrophysics Data System (ADS)

    Moshtaghi, M.; Pande, S.; Savenije, H. H. G.; den Besten, N. I.

    2016-12-01

    Eighty percent of the farmland in Sub-Saharan Africa is managed by smallholders, who are often economically stressed, with low incomes as a result of poor crop yields. Smallholders' well-being, which is naturally important, often suffers due to hydro-climatic variability and fluctuations in the prices of inputs (seeds, fertilizer) and outputs (crops). Appropriately designed insurance can help guarantee their well-being and food security across the whole continent if it focuses on the specific requirements of smallholders in each region. In this research, we apply recently developed socio-hydrologic modelling, which interprets a small-scale farm system as a coupled system of six variables: soil moisture, soil fertility, capital, livestock, fodder and labor availability. Using datasets of potential evaporation, rainfall, land cover and other variables, we compare the application of yield index insurance, weather index insurance and biomass index insurance, to highlight the importance of considering the interplay between fertilizer and water availability for food security and to determine which type of regional insurance works best in a given region.

  17. Prediction of Welded Joint Strength in Plasma Arc Welding: A Comparative Study Using Back-Propagation and Radial Basis Neural Networks

    NASA Astrophysics Data System (ADS)

    Srinivas, Kadivendi; Vundavilli, Pandu R.; Manzoor Hussain, M.; Saiteja, M.

    2016-09-01

    Welding input parameters such as current, gas flow rate and torch angle play a significant role in determining the qualitative mechanical properties of a weld joint. Traditionally, it is necessary to determine the weld input parameters for every new welded product to obtain a quality weld joint, which is time consuming. In the present work, the effect of plasma arc welding parameters on mild steel was studied using a neural network approach. To obtain a response equation that governs the input-output relationships, conventional regression analysis was also performed. The experimental data were constructed based on a Taguchi design, and the training data required for the neural networks were randomly generated by varying the input variables within their respective ranges. The responses were calculated for each combination of input variables by using the response equations obtained through the conventional regression analysis. The performances of the Levenberg-Marquardt back-propagation neural network and the radial basis neural network (RBNN) were compared on various randomly generated test cases, which are different from the training cases. From the results, it is interesting to note that for these test cases the RBNN analysis gave improved results compared to the feed-forward back-propagation neural network analysis. Also, the RBNN analysis showed a pattern of improving performance as the data points moved away from the initial input values.

  18. Shoe inserts and orthotics for sport and physical activities.

    PubMed

    Nigg, B M; Nurse, M A; Stefanyshyn, D J

    1999-07-01

    The purposes of this paper were to discuss the perceived benefits of inserts and orthotics for sport activities and to propose a new concept for inserts and orthotics. There is evidence that inserts or orthotics reduce or prevent movement-related injuries. However, there is limited knowledge about the specific function an orthotic or insert provides. The same orthotic or insert is often proposed for different problems. Changes in skeletal movement due to inserts or orthotics seem to be small and not systematic. Based on the results of a study using bone pins, one may question the idea that a major function of orthotics or inserts consists of aligning the skeleton. Impact cushioning with shoe inserts or orthotics is typically below 10%. Such small reductions might not be important for injury reduction. It has been suggested that changes in material properties might produce adjustments in the muscular response of the locomotor system. The foot has various sensors to detect input signals, with subject-specific thresholds. Subjects with similar sensitivity threshold levels seem to respond with similar movement patterns. Comfort is an important variable. From a biomechanical point of view, comfort may be related to fit, additional stabilizing muscle work, fatigue, and damping of soft tissue vibrations. Based on the presented evidence, the concept of minimizing muscle work is proposed when using orthotics or inserts. A force signal acts as an input variable on the shoe. The shoe sole acts as a first filter, the insert or orthotic as a second filter, and the plantar surface of the foot as a third filter for the force input signal. The filtered information is transferred to the central nervous system, which provides a subject-specific dynamic response. The subject performs the movement for the task at hand. For a given movement task, the skeleton has a preferred path. If an intervention supports/counteracts the preferred movement path, muscle activity can/must be reduced/increased. Based on this concept, an optimal insert or orthotic would reduce muscle activity, feel comfortable, and should increase performance.

  19. MODELING OF HUMAN EXPOSURE TO IN-VEHICLE PM2.5 FROM ENVIRONMENTAL TOBACCO SMOKE

    PubMed Central

    Cao, Ye; Frey, H. Christopher

    2012-01-01

    Environmental tobacco smoke (ETS) is estimated to be a significant contributor to in-vehicle human exposure to fine particulate matter of 2.5 µm or smaller (PM2.5). A critical assessment was conducted of a mass balance model for estimating PM2.5 concentration with smoking in a motor vehicle. Recommendations for the range of inputs to the mass-balance model are given based on a literature review. Sensitivity analysis was used to determine which inputs should be prioritized for data collection. The air exchange rate (ACH) and the deposition rate have wider relative ranges of variation than other inputs, representing inter-individual variability in operations and inter-vehicle variability in performance, respectively. Cigarette smoking and emission rates, and vehicle interior volume, are also key inputs. The in-vehicle ETS mass balance model was incorporated into the Stochastic Human Exposure and Dose Simulation for Particulate Matter (SHEDS-PM) model to quantify the potential magnitude and variability of in-vehicle exposures to ETS. The in-vehicle exposure estimate also takes into account the near-road incremental PM2.5 concentration from on-road emissions. Results of the probabilistic study indicate that ETS is a key contributor to in-vehicle average and high-end exposure. Factors that mitigate in-vehicle ambient PM2.5 exposure lead to higher in-vehicle ETS exposure, and vice versa. PMID:23060732
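
    A minimal sketch of the kind of well-mixed mass-balance model assessed here: cabin PM2.5 rises with cigarette emissions and decays through air exchange and deposition. All parameter values below are illustrative, not the paper's recommended inputs.

```python
# Well-mixed box model: dC/dt = E/V + ACH*(C_out - C) - k_dep*C
import numpy as np

V = 3.0            # cabin interior volume, m^3
ach = 10.0         # air exchange rate, 1/h
k_dep = 1.0        # particle deposition rate, 1/h
E = 10.0           # PM2.5 emitted per cigarette, mg
c_out = 0.015      # near-road outdoor PM2.5, mg/m^3

# One cigarette smoked over 5 minutes, then decay; simple Euler integration.
dt, t_end = 1 / 360, 0.5                  # h (10 s steps), 30 min total
c, series = c_out, []
for step in range(int(t_end / dt)):
    t = step * dt
    emission = (E / (5 / 60)) / V if t < 5 / 60 else 0.0   # mg/m^3/h
    c += (emission + ach * (c_out - c) - k_dep * c) * dt
    series.append(c)
print(f"peak = {max(series)*1000:.0f} ug/m^3, final = {series[-1]*1000:.0f} ug/m^3")
```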

  20. Computing the structural influence matrix for biological systems.

    PubMed

    Giordano, Giulia; Cuba Samaniego, Christian; Franco, Elisa; Blanchini, Franco

    2016-06-01

    We consider the problem of identifying structural influences of external inputs on steady-state outputs in a biological network model. We speak of a structural influence if, upon a perturbation due to a constant input, the ensuing variation of the steady-state output value has the same sign as the input (positive influence), the opposite sign (negative influence), or is zero (perfect adaptation), for any feasible choice of the model parameters. All these signs and zeros can constitute a structural influence matrix, whose (i, j) entry indicates the sign of steady-state influence of the jth system variable on the ith variable (the output caused by an external persistent input applied to the jth variable). Each entry is structurally determinate if the sign does not depend on the choice of the parameters, but is indeterminate otherwise. In principle, determining the influence matrix requires exhaustive testing of the system steady-state behaviour in the widest range of parameter values. Here we show that, in a broad class of biological networks, the influence matrix can be evaluated with an algorithm that tests the system steady-state behaviour only at a finite number of points. This algorithm also allows us to assess the structural effect of any perturbation, such as variations of relevant parameters. Our method is applied to nontrivial models of biochemical reaction networks and population dynamics drawn from the literature, providing a parameter-free insight into the system dynamics.
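
    The paper's contribution is an algorithm that avoids the exhaustive parameter search; purely to illustrate the definition, the sketch below applies the naive sampling approach to a linearized system dx/dt = Ax + bu, where the steady-state response to a constant input on variable j is column j of −A⁻¹, and an entry is structurally determinate if its sign never changes across sampled parameters. The example network is made up.

```python
# Naive sampling illustration of the influence-matrix idea on a linear system.
import numpy as np

rng = np.random.default_rng(0)

def sample_A():
    # Random stable 3-variable network with fixed interaction structure:
    # 1 activates 2, 2 activates 3, 3 inhibits 1 (a negative feedback loop).
    a12, a23, a31 = rng.uniform(0.1, 2.0, 3)
    d = rng.uniform(0.5, 2.0, 3)                 # degradation rates
    return np.array([[-d[0], 0.0, -a31],
                     [a12, -d[1], 0.0],
                     [0.0, a23, -d[2]]])

signs = set()
for _ in range(1000):
    A = sample_A()
    influence = -np.linalg.inv(A)                # column j: response to input on j
    signs.add(tuple(np.sign(influence).astype(int).ravel()))

print(f"{len(signs)} distinct sign pattern(s)")  # 1 -> structurally determinate
```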

  1. Operator control systems and methods for swing-free gantry-style cranes

    DOEpatents

    Feddema, J.T.; Petterson, B.J.; Robinett, R.D. III

    1998-07-28

    A system and method are disclosed for eliminating swing motions in gantry-style cranes while subject to operator control. The present invention comprises an infinite impulse response (IIR) filter and a proportional-integral (PI) feedback controller. The IIR filter receives input signals (commanded velocity or acceleration) from an operator input device and transforms them into output signals in such a fashion that the resulting motion is swing free (i.e., end-point swinging prevented). The parameters of the IIR filter are updated in real time using measurements from a hoist cable length encoder. The PI feedback controller compensates for modeling errors and external disturbances, such as wind or perturbations caused by collision with objects. The PI feedback controller operates on cable swing angle measurements provided by a cable angle sensor. The present invention adjusts acceleration and deceleration to eliminate oscillations. An especially important feature of the present invention is that it compensates for variable-length cable motions from multiple cables attached to a suspended payload. 10 figs.

  2. Operator control systems and methods for swing-free gantry-style cranes

    DOEpatents

    Feddema, John T.; Petterson, Ben J.; Robinett, III, Rush D.

    1998-01-01

    A system and method for eliminating swing motions in gantry-style cranes while subject to operator control is presented. The present invention comprises an infinite impulse response ("IIR") filter and a proportional-integral ("PI") feedback controller (50). The IIR filter receives input signals (46) (commanded velocity or acceleration) from an operator input device (45) and transforms them into output signals (47) in such a fashion that the resulting motion is swing free (i.e., end-point swinging prevented). The parameters of the IIR filter are updated in real time using measurements from a hoist cable length encoder (25). The PI feedback controller compensates for modeling errors and external disturbances, such as wind or perturbations caused by collision with objects. The PI feedback controller operates on cable swing angle measurements provided by a cable angle sensor (27). The present invention adjusts acceleration and deceleration to eliminate oscillations. An especially important feature of the present invention is that it compensates for variable-length cable motions from multiple cables attached to a suspended payload.
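
    The two patents above combine a command filter with feedback on swing angle. Purely as a rough illustration of that idea (and emphatically not the patented filter design or its real-time parameter update), the sketch below removes the cable's pendulum frequency from the operator's velocity command with a scipy notch filter, whose center frequency follows the measured cable length, and adds a PI correction on swing angle.

```python
# Illustrative swing-suppression sketch: notch out the pendulum frequency
# from the commanded velocity, then apply PI feedback on swing angle.
import numpy as np
from scipy.signal import iirnotch, lfilter

g, fs = 9.81, 100.0                     # gravity (m/s^2), control rate (Hz)

def swing_free_command(v_cmd, cable_length, swing_angle, kp=2.0, ki=0.5):
    """Filter an operator velocity command and add PI swing correction."""
    f_pend = np.sqrt(g / cable_length) / (2 * np.pi)   # pendulum frequency, Hz
    b, a = iirnotch(w0=f_pend, Q=2.0, fs=fs)           # reject that frequency
    shaped = lfilter(b, a, v_cmd)
    integ = np.cumsum(swing_angle) / fs                # integral of angle error
    return shaped - kp * swing_angle - ki * integ      # PI disturbance rejection

t = np.arange(0, 10, 1 / fs)
v_cmd = np.where(t < 5, 0.5, 0.0)                      # 0.5 m/s for 5 s, then stop
swing = 0.02 * np.sin(2 * np.pi * 0.5 * t)             # measured swing (rad), stand-in
v_out = swing_free_command(v_cmd, cable_length=1.0, swing_angle=swing)
print(v_out[:5])
```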

  3. Optimization of process parameters in drilling of fibre hybrid composite using Taguchi and grey relational analysis

    NASA Astrophysics Data System (ADS)

    Vijaya Ramnath, B.; Sharavanan, S.; Jeykrishnan, J.

    2017-03-01

    Nowadays, quality plays a vital role in all products. Hence, development in manufacturing processes focuses on the fabrication of composites with high dimensional accuracy and low manufacturing cost. In this work, an investigation of machining parameters has been performed on a jute-flax hybrid composite. Two important response characteristics, surface roughness and material removal rate, are optimized by employing three machining input parameters. The input variables considered are drill bit diameter, spindle speed and feed rate. Machining is done on a CNC vertical drilling machine at different levels of the drilling parameters. Taguchi's L16 orthogonal array is used for optimizing the individual tool parameters. Analysis of variance is used to find the significance of the individual parameters. The simultaneous optimization of the process parameters is done by grey relational analysis. The results of this investigation show that spindle speed and drill bit diameter have the greatest effect on material removal rate and surface roughness, followed by feed rate.
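
    The grey relational step works by normalizing each response so that 1 is ideal, converting deviations into grey relational coefficients, and ranking runs by the mean grade. A short sketch follows; the four runs are made-up numbers, not the paper's L16 data.

```python
# Grey relational analysis: normalize responses, compute coefficients with
# distinguishing coefficient zeta = 0.5, rank runs by mean grade.
import numpy as np

# Columns: surface roughness (lower is better), MRR (higher is better).
runs = np.array([[1.8, 120.0],
                 [2.4, 150.0],
                 [1.5, 90.0],
                 [2.0, 140.0]])

# Normalize to [0, 1] so that 1 is always the ideal value.
rough = (runs[:, 0].max() - runs[:, 0]) / (runs[:, 0].max() - runs[:, 0].min())
mrr = (runs[:, 1] - runs[:, 1].min()) / (runs[:, 1].max() - runs[:, 1].min())
norm = np.column_stack([rough, mrr])

delta = np.abs(1.0 - norm)                       # deviation from ideal sequence
zeta = 0.5
grc = (delta.min() + zeta * delta.max()) / (delta + zeta * delta.max())
grade = grc.mean(axis=1)                         # grey relational grade per run
print("ranking (best first):", np.argsort(grade)[::-1] + 1)
```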

  4. Freshwater monsoon related inputs in the Japan Sea: a diatom record from IODP core U1427

    NASA Astrophysics Data System (ADS)

    Ventura, C. P. L.; Lopes, C.

    2016-12-01

    Monsoon rainfall is the lifeblood of more than half the world's population. Extensive research is being conducted in order to refine projections of the impact of anthropogenic climate change on these systems. The East Asian monsoon (EAM) plays a significant role in large-scale climate variability. Due to its importance to global climate and the world's population, there is an urgent need for greater understanding of this system, especially during past climate changes. The input of freshwater from monsoon precipitation brings specific markers, such as freshwater diatoms and specific diatom ecological assemblages, that are preserved in marine sediments. Freshwater diatoms are easily identifiable and have been used in the North Pacific to reconstruct environmental conditions (Lopes et al., 2006) and flooding episodes (Lopes and Mix, 2009). Here we show preliminary results from freshwater diatom records that are linked to river discharge driven by increased monsoon rainfall over land. We extend our preliminary study to the past 400 kyr.

  5. Neural network simulation of soil NO3 dynamic under potato crop system

    NASA Astrophysics Data System (ADS)

    Goulet-Fortin, Jérôme; Morais, Anne; Anctil, François; Parent, Léon-Étienne; Bolinder, Martin

    2013-04-01

    Nitrate leaching is a major issue in sandy soils intensively cropped to potato. Modelling could test and improve management practices, particularly with regard to optimal N application rates. Lack of input data is an important barrier to the application of classical process-based models to predict soil NO3 content (SNOC) and NO3 leaching (NOL). Alternatively, data-driven models such as neural networks (NN) could better take into account indicators of spatial soil heterogeneity and plant growth patterns such as the leaf area index (LAI), hence reducing the amount of soil information required. The first objective of this study was to evaluate NN and hybrid models to simulate SNOC in the 0-40 cm soil layer considering inter-annual variations, spatial soil heterogeneity and differential N application rates. The second objective was to evaluate the same methodology to simulate the seasonal NOL dynamic at 1 m depth. To this aim, multilayer perceptrons with different combinations of driving meteorological variables, functions of the LAI and state variables of external deterministic models were trained and evaluated. The state variables from external models were drainage, estimated by the CLASS model, and soil temperature, estimated by an ICBM subroutine. Results of the SNOC simulations were compared to field data collected between 2004 and 2011 at several experimental plots under potato cropping systems in Québec, Eastern Canada. Results of the NOL simulation were compared to data obtained in 2012 from 11 suction lysimeters installed in 2 experimental plots under potato cropping systems in the same region. The best-performing model for SNOC simulation was a 4-input hybrid model composed of 1) cumulative LAI, 2) cumulative drainage, 3) soil temperature and 4) day of year. The best-performing model for NOL simulation was a 5-input NN model composed of 1) N fertilization rate in spring, 2) LAI, 3) cumulative rainfall, 4) day of year and 5) percentage of clay content. The MAE was 22% for SNOC simulation and 23% for NOL simulation. High sensitivity to LAI suggests that the model may take into account field and sub-field spatial variability and support N management. Further studies are needed to fully validate the method, particularly in the case of NOL simulation.
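
    For illustration, a minimal scikit-learn version of the best-performing 4-input configuration (cumulative LAI, cumulative drainage, soil temperature, day of year) is sketched below on synthetic stand-in data; the hidden-layer size and the data-generating relationship are assumptions, not the study's model or measurements.

```python
# 4-input multilayer perceptron for soil NO3 content, on synthetic data.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(3)
n = 300
X = np.column_stack([
    rng.uniform(0, 5, n),        # cumulative LAI
    rng.uniform(0, 200, n),      # cumulative drainage (mm)
    rng.uniform(5, 25, n),       # soil temperature (degC)
    rng.uniform(120, 300, n),    # day of year
])
# Synthetic soil NO3 content (kg N/ha) with noise, for illustration only.
y = 40 - 4 * X[:, 0] - 0.05 * X[:, 1] + 0.5 * X[:, 2] + rng.normal(0, 2, n)

model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000,
                                   random_state=0))
model.fit(X, y)
print(f"R^2 on training data: {model.score(X, y):.2f}")
```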

  6. African crop yield reductions due to increasingly unbalanced Nitrogen and Phosphorus consumption

    NASA Astrophysics Data System (ADS)

    van der Velde, Marijn; Folberth, Christian; Balkovič, Juraj; Ciais, Philippe; Fritz, Steffen; Janssens, Ivan A.; Obersteiner, Michael; See, Linda; Skalský, Rastislav; Xiong, Wei; Peñuealas, Josep

    2014-05-01

    The impact of soil nutrient depletion on crop production has been known for decades, but robust assessments of the impact of increasingly unbalanced nitrogen (N) and phosphorus (P) application rates on crop production are lacking. Here, we use crop response functions based on 741 FAO maize crop trials and EPIC crop modeling across Africa to examine maize yield deficits resulting from unbalanced N:P applications under low, medium, and high input scenarios, for past (1975), current, and future N:P mass ratios of 1:0.29, 1:0.15, and 1:0.05, respectively. At low N inputs (10 kg/ha), current yield deficits amount to 10% but will increase up to 27% under the assumed future N:P ratio, while at medium N inputs (50 kg N/ha), future yield losses could amount to over 40%. The EPIC crop model was then used to simulate maize yields across Africa. The model results showed relative median future yield reductions of 40% at low N inputs and 50% at medium and high inputs, albeit with large spatial variability. Dominant low-quality soils such as Ferralsols, which strongly adsorb P, and Arenosols, with a low nutrient retention capacity, are associated with a strong yield decline, although Arenosols show very variable crop yield losses at low inputs. Optimal N:P ratios, i.e., those where the lowest amount of applied P produces the highest yield (given the N input), were calculated with EPIC to be as low as 1:0.5. Finally, we estimated the additional P required given current N inputs, and given N inputs that would allow Africa to close yield gaps (ca. 70%). At current N inputs, P consumption would have to increase 2.3-fold to be optimal, and 11.7-fold to close yield gaps. The P demand to overcome these yield deficits would put significant additional pressure on current global extraction of P resources.

  7. Reduction of tablet weight variability by optimizing paddle speed in the forced feeder of a high-speed rotary tablet press.

    PubMed

    Peeters, Elisabeth; De Beer, Thomas; Vervaet, Chris; Remon, Jean-Paul

    2015-04-01

    Tableting is a complex process due to the large number of process parameters that can be varied. Knowledge and understanding of the influence of these parameters on final product quality is of great importance for the industry, allowing economic efficiency and parametric release. The aim of this study was to investigate the influence of paddle speeds and fill depth at different tableting speeds on the weight and weight variability of tablets. Two excipients possessing different flow behavior, microcrystalline cellulose (MCC) and dibasic calcium phosphate dihydrate (DCP), were selected as model powders. Tablets were manufactured on a high-speed rotary tablet press using design of experiments (DoE). During each experiment the volume of powder in the forced feeder was also measured. Analysis of the DoE revealed that paddle speeds are of minor importance for tablet weight but significantly affect the volume of powder inside the feeder in the case of powders with excellent flowability (DCP). The opposite effect of paddle speed was observed for fairly flowing powders (MCC). Tableting speed played a role in weight and weight variability, whereas changing the fill depth exclusively influenced tablet weight. The DoE approach allowed prediction of the optimum combination of process parameters leading to minimum tablet weight variability. Monte Carlo simulations allowed assessing the probability of exceeding the acceptable response limits if factor settings were varied around their optimum. This multidimensional combination and interaction of input variables leading to response criteria with acceptable probability reflects the design space.

  8. Reassessing regime shifts in the North Pacific: incremental climate change and commercial fishing are necessary for explaining decadal-scale biological variability.

    PubMed

    Litzow, Michael A; Mueter, Franz J; Hobday, Alistair J

    2014-01-01

    In areas of the North Pacific that are largely free of overfishing, climate regime shifts - abrupt changes in modes of low-frequency climate variability - are seen as the dominant drivers of decadal-scale ecological variability. We assessed the ability of leading modes of climate variability [Pacific Decadal Oscillation (PDO), North Pacific Gyre Oscillation (NPGO), Arctic Oscillation (AO), Pacific-North American Pattern (PNA), North Pacific Index (NPI), El Niño-Southern Oscillation (ENSO)] to explain decadal-scale (1965-2008) patterns of climatic and biological variability across two North Pacific ecosystems (Gulf of Alaska and Bering Sea). Our response variables were the first principal component (PC1) of four regional climate parameters [sea surface temperature (SST), sea level pressure (SLP), freshwater input, ice cover], and PCs 1-2 of 36 biological time series [production or abundance for populations of salmon (Oncorhynchus spp.), groundfish, herring (Clupea pallasii), shrimp, and jellyfish]. We found that the climate modes alone could not explain ecological variability in the study region. Both linear models (for climate PC1) and generalized additive models (for biology PC1-2) invoking only the climate modes produced residuals with significant temporal trends, indicating that the models failed to capture coherent patterns of ecological variability. However, when the residual climate trend and a time series of commercial fishery catches were used as additional candidate variables, resulting models of biology PC1-2 satisfied assumptions of independent residuals and outperformed models constructed from the climate modes alone in terms of predictive power. As measured by effect size and Akaike weights, the residual climate trend was the most important variable for explaining biology PC1 variability, and commercial catch the most important variable for biology PC2. Patterns of climate sensitivity and exploitation history for taxa strongly associated with biology PC1-2 suggest plausible mechanistic explanations for these modeling results. Our findings suggest that, even in the absence of overfishing and in areas strongly influenced by internal climate variability, climate regime shift effects can only be understood in the context of other ecosystem perturbations. © 2013 John Wiley & Sons Ltd.
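
    The response-variable construction used here, the first principal component of several standardized time series, is sketched below; the two synthetic "climate" series stand in for the study's four regional parameters.

```python
# PC1 of standardized time series as a composite climate index.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(5)
years = np.arange(1965, 2009)
shared = np.cumsum(rng.normal(0, 1, years.size))        # common low-frequency signal
sst = shared + rng.normal(0, 0.5, years.size)           # stand-in for SST
slp = -0.8 * shared + rng.normal(0, 0.5, years.size)    # stand-in for SLP

X = StandardScaler().fit_transform(np.column_stack([sst, slp]))
pca = PCA(n_components=1).fit(X)
pc1 = pca.transform(X).ravel()                          # climate PC1 time series
print("variance explained:", pca.explained_variance_ratio_)
```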

  9. Two SPSS programs for interpreting multiple regression results.

    PubMed

    Lorenzo-Seva, Urbano; Ferrando, Pere J; Chico, Eliseo

    2010-02-01

    When multiple regression is used in explanation-oriented designs, it is very important to determine both the usefulness of the predictor variables and their relative importance. Standardized regression coefficients are routinely provided by commercial programs. However, they generally function rather poorly as indicators of relative importance, especially in the presence of substantially correlated predictors. We provide two user-friendly SPSS programs that implement currently recommended techniques and recent developments for assessing the relevance of the predictors. The programs also allow the user to take into account the effects of measurement error. The first program, MIMR-Corr.sps, uses a correlation matrix as input, whereas the second program, MIMR-Raw.sps, uses the raw data and computes bootstrap confidence intervals of different statistics. The SPSS syntax, a short manual, and data files related to this article are available as supplemental materials from http://brm.psychonomic-journals.org/content/supplemental.

  10. No Differences Identified in Transverse Plane Biomechanics Between Medial Pivot and Rotating Platform Total Knee Implant Designs.

    PubMed

    Papagiannis, Georgios I; Roumpelakis, Ilias M; Triantafyllou, Athanasios I; Makris, Ioannis N; Babis, George C

    2016-08-01

    Total knee arthroplasties (TKAs) using well-designed, fixed bearing prostheses, such as medial pivot (MP), have produced good long-term results. Rotating-platform, posterior-stabilized (RP-PS) mobile bearing implants were designed to decrease polyethylene wear. Sagittal and coronal plane TKA biomechanics are well examined and correlated to polyethylene wear. However, limited research findings describe this relationship in the transverse plane. We assumed that although axial plane biomechanics might not be the most destructive parameter for polyethylene wear, it is important to clarify its role because both joint kinematics and kinetics in all 3 planes are important input parameters for TKA wear testing (International Organization for Standardization 14243-1 and 14243-3). Our hypothesis was that transverse plane overall range of motion (ROM) and/or peak moment show differences that reflect wear advantages when RP-PS implants are compared to MP designs. Two groups (MP = 24 and RP-PS = 22 subjects) were examined using 3D gait analysis. The variables were total internal-external rotation (IER) ROM and peak IER moments. No statistically significant difference was demonstrated between the 2 groups in kinetics (P = .389) or kinematics (P = .275). In the present study, no wear advantages were found between the 2 TKAs. Both designs showed identical kinetics in the transverse plane during level-ground walking. Kinematic analysis could not illustrate any statistically significant difference in terms of overall IER ROM. Nevertheless, the kinematic gait pattern differences observed possibly reflect different patterns of joint surface motion or abnormal gait patterns. Thus, wear testing with various input waveforms combined with functional data analysis will be necessary to identify the actual effects of gait variability on polyethylene wear. Copyright © 2016 Elsevier Inc. All rights reserved.

  11. Coupling centennial-scale shoreline change to sea-level rise and coastal morphology in the Gulf of Mexico using a Bayesian network

    USGS Publications Warehouse

    Plant, Nathaniel G.

    2016-01-01

    Predictions of coastal evolution driven by episodic and persistent processes associated with storms and relative sea-level rise (SLR) are required to test our understanding, evaluate our predictive capability, and to provide guidance for coastal management decisions. Previous work demonstrated that the spatial variability of long-term shoreline change can be predicted using observed SLR rates, tide range, wave height, coastal slope, and a characterization of the geomorphic setting. The shoreline is not sufficient to indicate which processes are important in causing shoreline change, such as overwash that depends on coastal dune elevations. Predicting dune height is intrinsically important to assess future storm vulnerability. Here, we enhance shoreline-change predictions by including dune height as a variable in a statistical modeling approach. Dune height can also be used as an input variable, but it does not improve the shoreline-change prediction skill. Dune-height input does help to reduce prediction uncertainty. That is, by including dune height, the prediction is more precise but not more accurate. Comparing hindcast evaluations, better predictive skill was found when predicting dune height (0.8) compared with shoreline change (0.6). The skill depends on the level of detail of the model and we identify an optimized model that has high skill and minimal overfitting. The predictive model can be implemented with a range of forecast scenarios, and we illustrate the impacts of a higher future sea-level. This scenario shows that the shoreline change becomes increasingly erosional and more uncertain. Predicted dune heights are lower and the dune height uncertainty decreases.
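
    The kind of discrete Bayesian network used for such predictions can be illustrated compactly. The sketch below is not the authors' fitted network: the variables, bins, and conditional probabilities are hypothetical, chosen only to show a forward (scenario) query and a diagnostic query.

    ```python
    import numpy as np

    # Hypothetical discretized variables: SLR rate S in {low, high}, dune
    # height D in {low, high}, shoreline change C in {accrete, stable, erode}.
    p_S = np.array([0.6, 0.4])              # P(S)
    p_D = np.array([0.5, 0.5])              # P(D)
    # P(C | S, D), shape (SLR state, dune state, change state); made-up values.
    p_C_given_SD = np.array([
        [[0.3, 0.5, 0.2], [0.4, 0.4, 0.2]],   # S = low
        [[0.1, 0.3, 0.6], [0.2, 0.3, 0.5]],   # S = high
    ])

    # Forward scenario: P(C) under high SLR, marginalizing over dune height.
    print("P(change | SLR=high):", p_D @ p_C_given_SD[1])

    # Diagnostic query: P(S | C=erode) by Bayes' rule, marginalizing dunes.
    joint_SC = np.einsum('s,d,sdc->sc', p_S, p_D, p_C_given_SD)
    print("P(SLR | change=erode):", joint_SC[:, 2] / joint_SC[:, 2].sum())
    ```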

  12. Application of neural networks and sensitivity analysis to improved prediction of trauma survival.

    PubMed

    Hunter, A; Kennedy, L; Henry, J; Ferguson, I

    2000-05-01

    The performance of trauma departments is widely audited by applying predictive models that assess probability of survival, and examining the rate of unexpected survivals and deaths. Although the TRISS methodology, a logistic regression modelling technique, is still the de facto standard, it is known that neural network models perform better. A key issue when applying neural network models is the selection of input variables. This paper proposes a novel form of sensitivity analysis, which is simpler to apply than existing techniques, and can be used for both numeric and nominal input variables. The technique is applied to the audit survival problem, and used to analyse the TRISS variables. The conclusions discuss the implications for the design of further improved scoring schemes and predictive models.
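
    The paper's specific sensitivity technique is not reproduced here, but a closely related permutation-based check, which likewise works for numeric and nominal inputs, can be sketched in a few lines; scikit-learn and synthetic data are assumptions of this sketch, not of the paper.

    ```python
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(1)
    X, y = make_classification(n_samples=1000, n_features=8,
                               n_informative=3, random_state=1)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=1)

    net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000,
                        random_state=1).fit(X_tr, y_tr)
    base = net.score(X_te, y_te)

    # Sensitivity of each input: permute one column at a time and measure the
    # accuracy drop.  Permuting (rather than adding noise) is valid for both
    # numeric and nominal variables.
    for j in range(X_te.shape[1]):
        Xp = X_te.copy()
        Xp[:, j] = rng.permutation(Xp[:, j])
        print(f"input {j}: accuracy drop = {base - net.score(Xp, y_te):.3f}")
    ```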

  13. Inaccuracies in sediment budgets arising from estimations of tributary sediment inputs: an example from a monitoring network on the southern Colorado plateau

    USGS Publications Warehouse

    Griffiths, Ronald; Topping, David

    2015-01-01

    Sediment budgets are an important tool for understanding how riverine ecosystems respond to perturbations. Changes in the quantity and grain-size distribution of sediment within river systems affect the channel morphology and related habitat resources. It is therefore important for resource managers to know if a channel reach is in a state of sediment accumulation, deficit or stasis. Many studies have estimated sediment loads from ungaged tributaries using regional sediment-yield equations or other similar techniques. While these approaches may be valid in regions where rainfall and geology are uniform over large areas, use of sediment-yield equations may lead to poor estimations of sediment loads in semi-arid climates, where rainfall events, contributing geology, and vegetation have large spatial variability.

  14. The wiring diagram for plant G signaling

    DOE PAGES

    Colaneri, Alejandro C.; Jones, Alan M.

    2014-10-01

    Like electronic circuits, the modular arrangement of cell-signaling networks decides how inputs produce outputs. Animal heterotrimeric guanine nucleotide binding proteins (G-proteins) operate as switches in the circuits that signal between extracellular agonists and intracellular effectors. There still is no biochemical evidence for a receptor or its agonist in the plant G-protein pathways. Plant G-proteins deviate in many important ways from the animal paradigm. This paper covers important discoveries from the last two years that illuminate these differences, and ends by describing alternative wiring diagrams for the plant signaling circuits regulated by G-proteins. Finally, we propose that plant G-proteins are integrated in the signaling circuits as variable resistors rather than as switches, controlling the flux of information in response to the cell's metabolic state.

  15. Effect of variable annual precipitation and nutrient input on nitrogen and phosphorus transport from two Midwestern agricultural watersheds

    USDA-ARS?s Scientific Manuscript database

    Precipitation patterns and nutrient inputs impact transport of nitrate (NO3-N) and phosphorus (TP) from Midwest watersheds. Nutrient concentrations and yields from two subsurface-drained watersheds, the Little Cobb River (LCR) in southern Minnesota and the South Fork Iowa River (SFIR) in northern Io...

  16. Software development guidelines

    NASA Technical Reports Server (NTRS)

    Kovalevsky, N.; Underwood, J. M.

    1979-01-01

    Analysis, modularization, flowcharting, existing programs and subroutines, compatibility, input and output data, adaptability to checkout, and general-purpose subroutines are summarized. Statement ordering and numbering, specification statements, variable names, arrays, arithmetical expressions and statements, control statements, input/output, and subroutines are outlined. Intermediate results, desk checking, checkout data, dumps, storage maps, diagnostics, and program timing are reviewed.

  17. Testing an Instructional Model in a University Educational Setting from the Student's Perspective

    ERIC Educational Resources Information Center

    Betoret, Fernando Domenech

    2006-01-01

    We tested a theoretical model that hypothesized relationships between several variables from input, process and product in an educational setting, from the university student's perspective, using structural equation modeling. In order to carry out the analysis, we measured in sequential order the input (referring to students' personal…

  18. DEVELOPING NUTRIENT CRITERIA FOR ESTUARIES WITH VARIABLE OCEAN INPUTS: AN EXAMPLE FROM THE PACIFIC NORTHWEST

    EPA Science Inventory

    Estuaries in the Pacific Northwest have major intraannual and within estuary variation in sources and magnitudes of nutrient inputs. To develop an approach for setting nutrient criteria for these systems, we conducted a case study for Yaquina Bay, OR based on a synthesis of resea...

  19. A computational model for the prediction of jet entrainment in the vicinity of nozzle boattails (The BOAT code)

    NASA Technical Reports Server (NTRS)

    Dash, S. M.; Pergament, H. S.

    1978-01-01

    The basic code structure is discussed, including the overall program flow and a brief description of all subroutines. Instructions on the preparation of input data, definitions of key FORTRAN variables, sample input and output, and a complete listing of the code are presented.

  20. Model predictive controller design for boost DC-DC converter using T-S fuzzy cost function

    NASA Astrophysics Data System (ADS)

    Seo, Sang-Wha; Kim, Yong; Choi, Han Ho

    2017-11-01

    This paper proposes a Takagi-Sugeno (T-S) fuzzy method to select cost function weights of finite control set model predictive DC-DC converter control algorithms. The proposed method updates the cost function weights at every sample time by using T-S type fuzzy rules derived from the common optimal control engineering knowledge that a state or input variable with an excessively large magnitude can be penalised by increasing the weight corresponding to the variable. The best control input is determined via the online optimisation of the T-S fuzzy cost function for all the possible control input sequences. This paper implements the proposed model predictive control algorithm in real time on a Texas Instruments TMS320F28335 floating-point Digital Signal Processor (DSP). Some experimental results are given to illustrate the practicality and effectiveness of the proposed control system under several operating conditions. The results verify that our method can yield not only good transient and steady-state responses (fast recovery time, small overshoot, zero steady-state error, etc.) but also insensitivity to abrupt load or input-voltage variations.
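
    The overall control loop can be sketched as follows: a single-step finite-control-set MPC for an ideal boost converter, with a crude error-proportional weight update standing in for the paper's T-S fuzzy rules. The converter parameters, references, and Euler discretization are illustrative assumptions, not values from the paper.

    ```python
    import numpy as np

    # Assumed converter parameters: input voltage, inductance, capacitance,
    # load resistance, and sampling period.
    Vin, L, C, R, Ts = 12.0, 1e-3, 470e-6, 20.0, 20e-6
    v_ref = 24.0
    i_ref = (v_ref ** 2 / R) / Vin          # input current from power balance

    def step(i, v, s):
        """One Euler step of the ideal boost converter, switch state s in {0, 1}."""
        di = (Vin - (1 - s) * v) / L
        dv = ((1 - s) * i - v / R) / C
        return i + Ts * di, v + Ts * dv

    i, v = 0.0, 0.0
    for k in range(5000):                   # 0.1 s of simulated time
        # Stand-in for the T-S fuzzy rules: penalise more heavily the variable
        # whose normalised error is currently larger.
        e_v = abs(v - v_ref) / v_ref
        e_i = abs(i - i_ref) / max(i_ref, 1e-9)
        w_v = 1.0 + e_v / (e_v + e_i + 1e-9)
        w_i = 0.1 * (1.0 + e_i / (e_v + e_i + 1e-9))
        # Finite control set: evaluate the one-step cost for both switch states.
        costs = [w_v * (v1 - v_ref) ** 2 + w_i * (i1 - i_ref) ** 2
                 for i1, v1 in (step(i, v, s) for s in (0, 1))]
        i, v = step(i, v, int(np.argmin(costs)))
    print(f"after 0.1 s: v = {v:.2f} V (reference {v_ref} V)")
    ```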

  1. Geochemical and tectonic uplift controls on rock nitrogen inputs across terrestrial ecosystems

    NASA Astrophysics Data System (ADS)

    Morford, Scott L.; Houlton, Benjamin Z.; Dahlgren, Randy A.

    2016-02-01

    Rock contains > 99% of Earth's reactive nitrogen (N), but questions remain over the direct importance of rock N weathering inputs to terrestrial biogeochemical cycling. Here we investigate the factors that regulate rock N abundance and develop a new model for quantifying rock N mobilization fluxes across desert to temperate rainforest ecosystems in California, USA. We analyzed the N content of 968 rock samples from 531 locations and compiled 178 cosmogenically derived denudation estimates from across the region to identify landscapes and ecosystems where rocks account for a significant fraction of terrestrial N inputs. Strong coherence between rock N content and geophysical factors, such as protolith (i.e., parent rock), grain size, and thermal history, is observed. A spatial model that combines rock geochemistry with lithology and topography demonstrates that average rock N reservoirs range from 0.18 to 1.2 kg N m-3 (80 to 534 mg N kg-1) across the nine geomorphic provinces of California and estimates a rock N denudation flux of 20-92 Gg yr-1 across the entire study area (natural atmospheric inputs ~ 140 Gg yr-1). The model highlights regional differences in rock N mobilization and points to the Coast Ranges, Transverse Ranges, and the Klamath Mountains as regions where rock N could contribute meaningfully to ecosystem N cycling. Contrasting these data with global compilations suggests that our findings are broadly applicable beyond California and that the N abundance and variability in rock are well constrained across most of the Earth system.

  2. Learning from adaptive neural dynamic surface control of strict-feedback systems.

    PubMed

    Wang, Min; Wang, Cong

    2015-06-01

    Learning plays an essential role in autonomous control systems. However, how to achieve learning in a nonstationary environment for nonlinear systems is a challenging problem. In this paper, we present a learning method for a class of nth-order strict-feedback systems by adaptive dynamic surface control (DSC) technology, which achieves the human-like ability of learning by doing and doing with learned knowledge. To achieve the learning, this paper first proposes stable adaptive DSC with auxiliary first-order filters, which ensures the boundedness of all the signals in the closed-loop system and the convergence of tracking errors in a finite time. With the help of DSC, the derivative of the filter output variable is used as the neural network (NN) input instead of traditional intermediate variables. As a result, the proposed adaptive DSC method greatly reduces the dimension of NN inputs, especially for high-order systems. After the stable DSC design, we decompose the stable closed-loop system into a series of linear time-varying perturbed subsystems. Using a recursive design, the recurrent property of NN input variables is easily verified since the complexity is overcome using DSC. Subsequently, the partial persistent excitation condition of the radial basis function NN is satisfied. By combining a state transformation, accurate approximations of the closed-loop system dynamics are recursively achieved in a local region along recurrent orbits. Then, the learning control method using the learned knowledge is proposed to achieve the closed-loop stability and the improved control performance. Simulation studies are performed to demonstrate that the proposed scheme can not only reuse the learned knowledge to achieve better control performance, with a faster tracking convergence rate and a smaller tracking error, but also greatly alleviate the computational burden by reducing the number and complexity of NN input variables.

  3. Assessment of model behavior and acceptable forcing data uncertainty in the context of land surface soil moisture estimation

    NASA Astrophysics Data System (ADS)

    Dumedah, Gift; Walker, Jeffrey P.

    2017-03-01

    The sources of uncertainty in land surface models are numerous and varied, from inaccuracies in forcing data to uncertainties in model structure and parameterizations. The majority of these uncertainties are strongly tied to the overall makeup of the model, but the input forcing data set is independent, with its accuracy usually defined by the monitoring or observation system. The impact of input forcing data on model estimation accuracy has been collectively acknowledged to be significant, yet its quantification and the level of uncertainty that is acceptable in the context of the land surface model to obtain a competitive estimation remain mostly unknown. A better understanding is needed about how models respond to input forcing data and what changes in these forcing variables can be accommodated without deteriorating optimal estimation of the model. As a result, this study determines the level of forcing data uncertainty that is acceptable in the Joint UK Land Environment Simulator (JULES) to competitively estimate soil moisture in the Yanco area in south-eastern Australia. The study employs hydro-genomic mapping to examine the temporal evolution of model decision variables from an archive of values obtained from soil moisture data assimilation. The data assimilation (DA) was undertaken using the advanced Evolutionary Data Assimilation. Our findings show that the input forcing data have a significant impact on model output: 35% in root mean square error (RMSE) for the 5 cm depth of soil moisture and 15% in RMSE for the 15 cm depth of soil moisture. This specific quantification is crucial to illustrate the significance of input forcing data spread. The acceptable uncertainty determined based on the dominant pathway has been validated and shown to be reliable for all forcing variables, so as to provide optimal soil moisture. These findings are crucial for DA in order to account for uncertainties that are meaningful from the model standpoint. Moreover, our results point to a proper treatment of input forcing data in general land surface and hydrological model estimation.

  4. SAMBA: Sparse Approximation of Moment-Based Arbitrary Polynomial Chaos

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ahlfeld, R., E-mail: r.ahlfeld14@imperial.ac.uk; Belkouchi, B.; Montomoli, F.

    2016-09-01

    A new arbitrary Polynomial Chaos (aPC) method is presented for moderately high-dimensional problems characterised by limited input data availability. The proposed methodology improves the algorithm of aPC and extends the method, which was previously introduced only as a tensor product expansion, to moderately high-dimensional stochastic problems. The fundamental idea of aPC is to use the statistical moments of the input random variables to develop the polynomial chaos expansion. This approach provides the possibility to propagate continuous or discrete probability density functions and also histograms (data sets) as long as their moments exist, are finite and the determinant of the moment matrix is strictly positive. For cases with limited data availability, this approach avoids bias and fitting errors caused by wrong assumptions. In this work, an alternative way to calculate the aPC is suggested, which provides the optimal polynomials, Gaussian quadrature collocation points and weights from the moments using only a handful of matrix operations on the Hankel matrix of moments. It can therefore be implemented without requiring prior knowledge about statistical data analysis or a detailed understanding of the mathematics of polynomial chaos expansions. The extension to more input variables suggested in this work, is an anisotropic and adaptive version of Smolyak's algorithm that is solely based on the moments of the input probability distributions. It is referred to as SAMBA (PC), which is short for Sparse Approximation of Moment-Based Arbitrary Polynomial Chaos. It is illustrated that for moderately high-dimensional problems (up to 20 different input variables or histograms) SAMBA can significantly simplify the calculation of sparse Gaussian quadrature rules. SAMBA's efficiency for multivariate functions with regard to data availability is further demonstrated by analysing higher order convergence and accuracy for a set of nonlinear test functions with 2, 5 and 10 different input distributions or histograms.
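
    The central moment-to-quadrature step, building the Hankel matrix of moments, taking its Cholesky factor, and solving the Jacobi-matrix eigenproblem, can be sketched independently of the SAMBA code. The minimal version below is checked only against the moments of the standard normal distribution.

    ```python
    import numpy as np

    def quadrature_from_moments(m):
        """Gauss quadrature nodes/weights from raw moments m[0..2n], via the
        Hankel matrix of moments and Golub-Welsch on its Cholesky factor."""
        n = (len(m) - 1) // 2
        H = np.array([[m[i + j] for j in range(n + 1)] for i in range(n + 1)])
        R = np.linalg.cholesky(H).T        # upper-triangular factor, H = R'R
        # Three-term recurrence coefficients read off the entries of R.
        alpha = np.array([R[k, k + 1] / R[k, k]
                          - (R[k - 1, k] / R[k - 1, k - 1] if k else 0.0)
                          for k in range(n)])
        beta = np.array([R[k + 1, k + 1] / R[k, k] for k in range(n - 1)])
        J = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)  # Jacobi matrix
        nodes, vecs = np.linalg.eigh(J)
        return nodes, m[0] * vecs[0] ** 2

    # Moments 1, 0, 1, 0, 3 of the standard normal give the 2-point rule:
    # nodes [-1, 1], weights [0.5, 0.5] (probabilists' Gauss-Hermite).
    print(quadrature_from_moments([1, 0, 1, 0, 3]))
    ```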

  5. SU-E-T-206: Improving Radiotherapy Toxicity Based On Artificial Neural Network (ANN) for Head and Neck Cancer Patients

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cho, Daniel D; Wernicke, A Gabriella; Nori, Dattatreyudu

    Purpose/Objective(s): The aim of this study is to build an estimator of toxicity using an artificial neural network (ANN) for head and neck cancer patients. Materials/Methods: An ANN can combine variables into a predictive model during training and consider all possible correlations of variables. We constructed an ANN based on the data from 73 patients with advanced head and neck cancer treated with external beam radiotherapy and/or chemotherapy at our institution. For the toxicity estimator we defined input data including age, sex, site, stage, pathology, status of chemo, technique of external beam radiation therapy (EBRT), length of treatment, dose of EBRT, status of post operation, length of follow-up, and the status of local recurrences and distant metastasis. These data were digitized based on their significance and fed to the ANN as input nodes. We used 20 hidden nodes (for the 13 input nodes) to capture the correlations of the input nodes. For training the ANN, we divided the data into three subsets: a training set, a validation set and a test set. Finally, we built the estimator for the toxicity from the ANN output. Results: We used 13 input variables, including the status of local recurrences and distant metastasis, and 20 hidden nodes for correlations. We used 59 patients for the training set, 7 patients for the validation set and 7 patients for the test set, and fed the inputs to the Matlab neural network fitting tool. We trained the network to within 15% error on the outcome. In the end we obtained a toxicity estimator with 74% accuracy. Conclusion: We showed in principle that an ANN can be a very useful tool for predicting RT outcomes for high-risk head and neck patients. Currently we are improving the results using cross validation.
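
    The described setup (13 digitized inputs, 20 hidden nodes, a 59/7/7 split) maps onto any standard neural-network toolkit. A sketch using scikit-learn instead of the Matlab tool used in the study, with synthetic stand-in data because the patient data are not public:

    ```python
    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(2)
    # Synthetic stand-in for the 73-patient data set: 13 digitized inputs
    # (age, sex, site, stage, ...), binary toxicity outcome.
    X = rng.normal(size=(73, 13))
    y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=73) > 0).astype(int)

    # Approximately the study's 59 / 7 / 7 train / validation / test split.
    X_tmp, X_test, y_tmp, y_test = train_test_split(X, y, test_size=7,
                                                    random_state=2)
    X_train, X_val, y_train, y_val = train_test_split(X_tmp, y_tmp, test_size=7,
                                                      random_state=2)

    net = MLPClassifier(hidden_layer_sizes=(20,), max_iter=2000,
                        random_state=2).fit(X_train, y_train)
    print("validation accuracy:", net.score(X_val, y_val))
    print("test accuracy:", net.score(X_test, y_test))
    ```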

  6. SAMBA: Sparse Approximation of Moment-Based Arbitrary Polynomial Chaos

    NASA Astrophysics Data System (ADS)

    Ahlfeld, R.; Belkouchi, B.; Montomoli, F.

    2016-09-01

    A new arbitrary Polynomial Chaos (aPC) method is presented for moderately high-dimensional problems characterised by limited input data availability. The proposed methodology improves the algorithm of aPC and extends the method, which was previously introduced only as a tensor product expansion, to moderately high-dimensional stochastic problems. The fundamental idea of aPC is to use the statistical moments of the input random variables to develop the polynomial chaos expansion. This approach provides the possibility to propagate continuous or discrete probability density functions and also histograms (data sets) as long as their moments exist, are finite and the determinant of the moment matrix is strictly positive. For cases with limited data availability, this approach avoids bias and fitting errors caused by wrong assumptions. In this work, an alternative way to calculate the aPC is suggested, which provides the optimal polynomials, Gaussian quadrature collocation points and weights from the moments using only a handful of matrix operations on the Hankel matrix of moments. It can therefore be implemented without requiring prior knowledge about statistical data analysis or a detailed understanding of the mathematics of polynomial chaos expansions. The extension to more input variables suggested in this work, is an anisotropic and adaptive version of Smolyak's algorithm that is solely based on the moments of the input probability distributions. It is referred to as SAMBA (PC), which is short for Sparse Approximation of Moment-Based Arbitrary Polynomial Chaos. It is illustrated that for moderately high-dimensional problems (up to 20 different input variables or histograms) SAMBA can significantly simplify the calculation of sparse Gaussian quadrature rules. SAMBA's efficiency for multivariate functions with regard to data availability is further demonstrated by analysing higher order convergence and accuracy for a set of nonlinear test functions with 2, 5 and 10 different input distributions or histograms.

  7. Dust Emission Modeling Incorporating Land Cover Parameterizations in the Chihuahuan Desert and Dissemination of Data Suites

    NASA Astrophysics Data System (ADS)

    Olgin, J. G.; Pennington, D. D.; Webb, N.

    2017-12-01

    A variety of models have been developed to better understand dust emissions - from the initial erosive event to entrainment to transport through the atmosphere. Many of these models have been used to analyze dust emissions by representing atmospheric and surface variables, such as wind and soil moisture respectively, in a numerical model to determine the resulting dust emissions. Pertinent to modeling these variables, there are three important factors influencing emissions: 1) the friction velocity threshold based on wind interactions with dust, 2) the horizontal flux (saltation) of dust and 3) the vertical dust flux. While all of the existing models incorporate these processes, additional improvements are needed to yield results reflective of recorded data of dust events. Our investigation focuses on explicitly identifying specific land cover (LC) elements unique to the Chihuahuan Desert that contribute to aerodynamic roughness length (Zo), a main component of dust emission and key to providing results more representative of known dust events in semi-arid regions. These elements will be formulated into computer model inputs by conducting analysis (e.g. geostatistics) on field and satellite data to ascertain core LC characteristics responsible for affecting wind velocities (e.g. wind shadowing effects), which are conducive to dust emissions. These inputs will be used in a modified program based on the Weather Research and Forecasting (WRF) model to replicate previously recorded dust events. Results from this study will be presented here.

  8. Dynamic estuarine plumes and fronts: importance to small fish and plankton in coastal waters of NSW, Australia

    NASA Astrophysics Data System (ADS)

    Kingsford, M. J.; Suthers, I. M.

    1994-05-01

    In 1990, low-density estuarine plumes in the vicinity of Botany Bay, Australia, extended up to 11 km across a narrow continental shelf (ca. 25 km) on ebb tides. The shape and seaward extent of plumes varied according to a combination of state of the tide, freshwater input and the direction and intensity of coastal currents. Offshore plumes dissipated on the flood tide and fronts reformed at the entrance of Botany Bay. Major differences in the abundance and composition of ichthyoplankton and other zooplankton were found over a 400-800 m stretch of water encompassing waters of the plume, front and ocean on seven occasions. For example, highest abundances of the fishes Gobiidae, Sillaginidae, Gerreidae and Sparidae as well as barnacle larvae and fish eggs were found in plumes. Cross-shelf distribution patterns of zooplankton, therefore, are influenced by plumes. Distinct assemblages of plankters accumulated in fronts, e.g. fishes of the Mugilidae and Gonorynchidae and other zooplankters (e.g. Jaxea sp.). Accumulation in fronts was variable and may relate to variable convergence according to the tide. We argue that plumes provide a significant cue to larvae in coastal waters that an estuary is nearby. Moreover, although many larvae may be retained in the turbid waters of plumes associated with riverine input, larvae are potentially exported in surface waters on ebb tides.

  9. Modeling road-cycling performance.

    PubMed

    Olds, T S; Norton, K I; Lowe, E L; Olive, S; Reay, F; Ly, S

    1995-04-01

    This paper presents a complete set of equations for a "first principles" mathematical model of road-cycling performance, including corrections for the effect of winds, tire pressure and wheel radius, altitude, relative humidity, rotational kinetic energy, drafting, and changed drag. The relevant physiological, biophysical, and environmental variables were measured in 41 experienced cyclists completing a 26-km road time trial. The correlation between actual and predicted times was 0.89 (P < or = 0.0001), with a mean difference of 0.74 min (1.73% of mean performance time) and a mean absolute difference of 1.65 min (3.87%). Multiple simulations were performed where model inputs were randomly varied using a normal distribution about the measured values with a SD equivalent to the estimated day-to-day variability or technical error of measurement in each of the inputs. This analysis yielded 95% confidence limits for the predicted times. The model suggests that the main physiological factors contributing to road-cycling performance are maximal O2 consumption, fractional utilization of maximal O2 consumption, mechanical efficiency, and projected frontal area. The model is then applied to some practical problems in road cycling: the effect of drafting, the advantage of using smaller front wheels, the effects of added mass, the importance of rotational kinetic energy, the effect of changes in drag due to changes in bicycle configuration, the normalization of performances under different conditions, and the limits of human performance.
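
    The Monte Carlo step, randomly varying the model inputs about their measured values and reading off 95% limits on the predicted time, is easy to illustrate. The toy predictor below collapses the paper's full physical model to aerodynamic drag alone; every number is an illustrative assumption.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    def ride_time(power_w, cda_m2, distance_m=26000.0, rho=1.2):
        """Toy steady-state predictor: all power spent on aerodynamic drag,
        so v = (P / (0.5 rho CdA))**(1/3).  A stand-in for the full model."""
        v = (power_w / (0.5 * rho * cda_m2)) ** (1.0 / 3.0)
        return distance_m / v / 60.0       # minutes

    # Measured values and assumed day-to-day SDs (illustrative numbers).
    power, sd_power = 280.0, 15.0          # W
    cda, sd_cda = 0.33, 0.02               # m^2

    # Randomly vary the inputs about their measured values, as in the paper.
    times = ride_time(rng.normal(power, sd_power, 10000),
                      rng.normal(cda, sd_cda, 10000))
    lo, hi = np.percentile(times, [2.5, 97.5])
    print(f"predicted time {ride_time(power, cda):.1f} min, "
          f"95% limits [{lo:.1f}, {hi:.1f}] min")
    ```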

  10. Apparatus and method for detecting and measuring changes in linear relationships between a number of high frequency signals

    DOEpatents

    Bittner, J.W.; Biscardi, R.W.

    1991-03-19

    An electronic measurement circuit is disclosed for high speed comparison of the relative amplitudes of a predetermined number of electrical input signals independent of variations in the magnitude of the sum of the signals. The circuit includes a high speed electronic switch that is operably connected to receive on its respective input terminals one of said electrical input signals and to have its common terminal serve as an input for a variable-gain amplifier-detector circuit that is operably connected to feed its output to a common terminal of a second high speed electronic switch. The respective terminals of the second high speed electronic switch are operably connected to a plurality of integrating sample and hold circuits, which in turn have their outputs connected to a summing logic circuit that is operable to develop first, second and third output voltages, the first output voltage being proportional to a predetermined ratio of sums and differences between the compared input signals, the second output voltage being proportional to a second summed ratio of predetermined sums and differences between said input signals, and the third output voltage being proportional to the sum of signals to the summing logic circuit. A servo system that is operably connected to receive said third output signal and compare it with a reference voltage to develop a slowly varying feedback voltage to control the variable-gain amplifier in said common amplifier-detector circuit in order to make said first and second output signals independent of variations in the magnitude of the sum of said input signals. 2 figures.

  11. Apparatus and method for detecting and measuring changes in linear relationships between a number of high frequency signals

    DOEpatents

    Bittner, John W.; Biscardi, Richard W.

    1991-01-01

    An electronic measurement circuit for high speed comparison of the relative amplitudes of a predetermined number of electrical input signals independent of variations in the magnitude of the sum of the signals. The circuit includes a high speed electronic switch that is operably connected to receive on its respective input terminals one of said electrical input signals and to have its common terminal serve as an input for a variable-gain amplifier-detector circuit that is operably connected to feed its output to a common terminal of a second high speed electronic switch. The respective terminals of the second high speed electronic switch are operably connected to a plurality of integrating sample and hold circuits, which in turn have their outputs connected to a summing logic circuit that is operable to develop first, second and third output voltages, the first output voltage being proportional to a predetermined ratio of sums and differences between the compared input signals, the second output voltage being proportional to a second summed ratio of predetermined sums and differences between said input signals, and the third output voltage being proportional to the sum of signals to the summing logic circuit. A servo system that is operably connected to receive said third output signal and compare it with a reference voltage to develop a slowly varying feedback voltage to control the variable-gain amplifier in said common amplifier-detector circuit in order to make said first and second output signals independent of variations in the magnitude of the sum of said input signals.

  12. An exact algebraic solution of the infimum in H-infinity optimization with output feedback

    NASA Technical Reports Server (NTRS)

    Chen, Ben M.; Saberi, Ali; Ly, Uy-Loi

    1991-01-01

    This paper presents a simple and noniterative procedure for the computation of the exact value of the infimum in the standard H-infinity-optimal control with output feedback. The problem formulation is general and does not place any restrictions on the direct feedthrough terms between the control input and the controlled output variables, and between the disturbance input and the measurement output variables. The method is applicable to systems that satisfy (1) the transfer function from the control input to the controlled output is right-invertible and has no invariant zeros on the jω axis, and (2) the transfer function from the disturbance to the measurement output is left-invertible and has no invariant zeros on the jω axis. A set of necessary and sufficient conditions for the solvability of the H-infinity-almost disturbance decoupling problem via measurement feedback with internal stability is also given.

  13. Scenario planning for water resource management in a semi-arid zone

    NASA Astrophysics Data System (ADS)

    Gupta, Rajiv; Kumar, Gaurav

    2018-06-01

    Scenario planning for water resource management in a semi-arid zone is performed using a systems input-output approach of time-domain analysis. This approach derives the future weights of the input variables of the hydrological system from their precedent weights. The input variables considered here are precipitation, evaporation, population and crop irrigation. Ingles & De Souza's method and the Thornthwaite model have been used to estimate runoff and evaporation, respectively. The difference between precipitation inflow and the sum of runoff and evaporation has been approximated as groundwater recharge. Population and crop irrigation determine the total water demand. Compensation of the total water demand by groundwater recharge has been analyzed, and further compensation has been evaluated by proposing efficient methods of water conservation. The best water-conservation measure to adopt is suggested based on cost-benefit analysis. A case study for nine villages in the Chirawa region of district Jhunjhunu, Rajasthan (India) validates the model.
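
    The accounting at the core of the approach reduces to a simple annual balance, as in the sketch below; all figures are hypothetical placeholders, since the actual model derives runoff and evaporation from the cited methods.

    ```python
    # Hypothetical annual figures for one village cluster, in Mm^3 (million m^3).
    precipitation = 45.0
    runoff = 9.0         # the model estimates this with Ingles & De Souza's method
    evaporation = 28.0   # the model estimates this with the Thornthwaite model

    # Recharge approximated as inflow minus (runoff + evaporation), as above.
    recharge = precipitation - (runoff + evaporation)

    domestic = 60000 * 70 * 365 / 1e9    # 60,000 people at 70 L/day, in Mm^3
    irrigation = 12.0                    # assumed crop irrigation demand
    demand = domestic + irrigation

    print(f"recharge {recharge:.1f} Mm^3, demand {demand:.1f} Mm^3, "
          f"shortfall {demand - recharge:.1f} Mm^3")
    ```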

  14. Uncertainty, Sensitivity Analysis, and Causal Identification in the Arctic using a Perturbed Parameter Ensemble of the HiLAT Climate Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hunke, Elizabeth Clare; Urrego Blanco, Jorge Rolando; Urban, Nathan Mark

    Coupled climate models have a large number of input parameters that can affect output uncertainty. We conducted a sensitivity analysis of sea ice properties and Arctic-related climate variables to 5 parameters in the HiLAT climate model: air-ocean turbulent exchange parameter (C), conversion of water vapor to clouds (cldfrc_rhminl) and of ice crystals to snow (micro_mg_dcs), snow thermal conductivity (ksno), and maximum snow grain size (rsnw_mlt). We used an elementary effect (EE) approach to rank their importance for output uncertainty. EE is an extension of one-at-a-time sensitivity analyses, but it is more efficient in sampling multi-dimensional parameter spaces. We looked for emerging relationships among climate variables across the model ensemble, and used causal discovery algorithms to establish potential pathways for those relationships.
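
    A minimal elementary-effects computation can be sketched on a toy function. This is the simple radial one-at-a-time variant rather than the full Morris trajectory design, and the "model" is a stand-in, not HiLAT.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    def model(x):
        """Toy stand-in for the climate model (5 normalized parameters)."""
        return x[0] + 2.0 * x[1] ** 2 + 0.5 * x[2] * x[3] + 0.01 * x[4]

    k, r, delta = 5, 50, 0.1             # parameters, base points, step size
    ee = np.zeros((r, k))
    for t in range(r):
        x = rng.uniform(0, 1 - delta, size=k)    # random base point in [0,1]^k
        fx = model(x)
        for i in range(k):                       # move one parameter at a time
            xp = x.copy()
            xp[i] += delta
            ee[t, i] = (model(xp) - fx) / delta  # elementary effect of input i

    mu_star = np.abs(ee).mean(axis=0)    # overall importance
    sigma = ee.std(axis=0)               # non-linearity / interaction signal
    for i in range(k):
        print(f"param {i}: mu* = {mu_star[i]:.3f}, sigma = {sigma[i]:.3f}")
    ```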

  15. Prediction of near-surface soil moisture at large scale by digital terrain modeling and neural networks.

    PubMed

    Lavado Contador, J F; Maneta, M; Schnabel, S

    2006-10-01

    The capability of Artificial Neural Network models to forecast near-surface soil moisture at fine spatial resolution has been tested for a 99.5 ha watershed located in SW Spain, using several easily acquired digital models of topographic and land cover variables as inputs and a series of soil moisture measurements as the training data set. The study methods were designed to determine the potential of the neural network model as a tool to gain insight into soil moisture distribution factors, and to optimize the data sampling scheme by finding the optimum size of the training data set. Results suggest the efficiency of the methods in forecasting soil moisture, their usefulness as a tool to assess the optimum number of field samples, and the importance of the variables selected in explaining the final map obtained.

  16. Artificial neural networks for the performance prediction of heat pump hot water heaters

    NASA Astrophysics Data System (ADS)

    Mathioulakis, E.; Panaras, G.; Belessiotis, V.

    2018-02-01

    The rapid progression in the use of heat pumps, due to the decrease in equipment cost together with the favourable economics of the consumed electrical energy, has been accompanied by the wide dissemination of air-to-water heat pumps (AWHPs) in the residential sector. The entrance of these systems into the commercial sector has made modelling of the relevant processes important. In this work, the suitability of artificial neural networks (ANNs) for the modelling of AWHPs is investigated. The ambient air temperature at the evaporator inlet and the water temperature at the condenser inlet have been selected as the input variables; energy performance indices and quantities characterising the operation of the system have been selected as output variables. The results verify that the easy-to-implement, trained ANN can represent an effective tool for the prediction of AWHP performance in various operating conditions and for the parametrical investigation of their behaviour.

  17. Frequency Preference Response to Oscillatory Inputs in Two-dimensional Neural Models: A Geometric Approach to Subthreshold Amplitude and Phase Resonance.

    PubMed

    Rotstein, Horacio G

    2014-01-01

    We investigate the dynamic mechanisms of generation of subthreshold and phase resonance in two-dimensional linear and linearized biophysical (conductance-based) models, and we extend our analysis to account for the effect of simple, but not necessarily weak, types of nonlinearities. Subthreshold resonance refers to the ability of neurons to exhibit a peak in their voltage amplitude response to oscillatory input currents at a preferred non-zero (resonant) frequency. Phase-resonance refers to the ability of neurons to exhibit a zero-phase (or zero-phase-shift) response to oscillatory input currents at a non-zero (phase-resonant) frequency. We adapt the classical phase-plane analysis approach to account for the dynamic effects of oscillatory inputs and develop a tool, the envelope-plane diagrams, that captures the role that conductances and time scales play in amplifying the voltage response at the resonant frequency band as compared to smaller and larger frequencies. We use envelope-plane diagrams in our analysis. We explain why the resonance phenomena do not necessarily arise from the presence of imaginary eigenvalues at rest, but rather they emerge from the interplay of the intrinsic and input time scales. We further explain why an increase in the time-scale separation causes an amplification of the voltage response in addition to shifting the resonant and phase-resonant frequencies. This is of fundamental importance for neural models since neurons typically exhibit a strong separation of time scales. We extend this approach to explain the effects of nonlinearities on both resonance and phase-resonance. We demonstrate that nonlinearities in the voltage equation cause amplifications of the voltage response and shifts in the resonant and phase-resonant frequencies that are not predicted by the corresponding linearized model. The differences between the nonlinear response and the linear prediction increase with increasing levels of the time scale separation between the voltage and the gating variable, and they almost disappear when both equations evolve at comparable rates. In contrast, voltage responses are almost insensitive to nonlinearities located in the gating variable equation. The method we develop provides a framework for the investigation of the preferred frequency responses in three-dimensional and nonlinear neuronal models as well as simple models of coupled neurons.
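
    For a two-dimensional linear model the voltage response to oscillatory input can be computed in closed form as an impedance, which makes the two resonances concrete. The parameter values below are assumed for illustration only.

    ```python
    import numpy as np

    # Assumed 2D linear (or linearized conductance-based) model, time in ms:
    #   C dv/dt = -gL*v - g1*w + I(t),   tau dw/dt = v - w
    C, gL, g1, tau = 1.0, 0.1, 0.5, 100.0

    f = np.linspace(0.1, 50.0, 2000)         # input frequency (Hz)
    w = 2.0 * np.pi * f / 1000.0             # rad/ms, matching the ms time units
    Z = 1.0 / (1j * w * C + gL + g1 / (1.0 + 1j * w * tau))

    amp, phase = np.abs(Z), np.angle(Z)
    f_res = f[np.argmax(amp)]                # amplitude (subthreshold) resonance
    f_phase = f[np.argmin(np.abs(phase))]    # zero-phase-shift frequency
    print(f"resonant ~ {f_res:.1f} Hz, phase-resonant ~ {f_phase:.1f} Hz")
    ```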

  18. Has solar variability caused climate change that affected human culture?

    NASA Astrophysics Data System (ADS)

    Feynman, Joan

    If solar variability affects human culture it most likely does so by changing the climate in which the culture operates. Variations in the solar radiative input to the Earth's atmosphere have often been suggested as a cause of such climate change on time scales from decades to tens of millennia. In the last 20 years there has been enormous progress in our knowledge of the many fields of research that impinge on this problem: the history of the solar output, the effect of solar variability on the Earth's mean climate and its regional patterns, the history of the Earth's climate and the history of mankind and human culture. This new knowledge encourages revisiting the question asked in the title of this talk. Several important historical events have been reliably related to climate change, including the Little Ice Age in northern Europe and the collapse of the Classical Mayan civilization in the 9th century AD. In the first section of this paper we discuss these historical events and review the evidence that they were caused by changes in the solar output. Perhaps the most important event in the history of mankind was the development of agricultural societies. This began to occur almost 12,000 years ago when the climate changed from the Pleistocene to the modern climate of the Holocene. In the second section of the paper we will discuss the suggestion (Feynman and Ruzmaikin, 2007) that climate variability was the reason agriculture developed when it did and not before.

  19. Automatic insulation resistance testing apparatus

    DOEpatents

    Wyant, Francis J.; Nowlen, Steven P.; Luker, Spencer M.

    2005-06-14

    An apparatus and method for automatic measurement of insulation resistances of a multi-conductor cable. In one embodiment of the invention, the apparatus comprises a power supply source, an input measuring means, an output measuring means, a plurality of input relay controlled contacts, a plurality of output relay controlled contacts, a relay controller and a computer. In another embodiment of the invention the apparatus comprises a power supply source, an input measuring means, an output measuring means, an input switching unit, an output switching unit and a control unit/data logger. Embodiments of the apparatus of the invention may also incorporate cable fire testing means. The apparatus and methods of the present invention use either voltage or current for input and output measured variables.

  20. PM(10) emission forecasting using artificial neural networks and genetic algorithm input variable optimization.

    PubMed

    Antanasijević, Davor Z; Pocajt, Viktor V; Povrenović, Dragan S; Ristić, Mirjana Đ; Perić-Grujić, Aleksandra A

    2013-01-15

    This paper describes the development of an artificial neural network (ANN) model for the forecasting of annual PM(10) emissions at the national level, using widely available sustainability and economic/industrial parameters as inputs. The inputs for the model were selected and optimized using a genetic algorithm and the ANN was trained using the following variables: gross domestic product, gross inland energy consumption, incineration of wood, motorization rate, production of paper and paperboard, sawn wood production, production of refined copper, production of aluminum, production of pig iron and production of crude steel. The wide availability of the input parameters used in this model can overcome a lack of data and basic environmental indicators in many countries, which can prevent or seriously impede PM emission forecasting. The model was trained and validated with the data for 26 EU countries for the period from 1999 to 2006. PM(10) emission data, collected through the Convention on Long-range Transboundary Air Pollution - CLRTAP and the EMEP Programme or as emission estimations by the Regional Air Pollution Information and Simulation (RAINS) model, were obtained from Eurostat. The ANN model has shown very good performance and demonstrated that PM(10) emissions can be forecast successfully and accurately up to two years ahead. The mean absolute error for two-year PM(10) emission prediction was only 10%, which is more than three times better than the predictions obtained from the conventional multi-linear regression and principal component regression models that were trained and tested using the same datasets and input variables. Copyright © 2012 Elsevier B.V. All rights reserved.
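
    A genetic-algorithm input-selection loop of the kind described can be sketched compactly. Here the fitness of a candidate input subset is the cross-validated score of a linear model rather than the study's ANN (kept linear only to make the sketch fast), and the data are synthetic.

    ```python
    import numpy as np
    from sklearn.datasets import make_regression
    from sklearn.linear_model import LinearRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(5)
    X, y = make_regression(n_samples=150, n_features=12, n_informative=4,
                           noise=10.0, random_state=5)

    def fitness(mask):
        """Cross-validated R^2 of a model restricted to the selected inputs."""
        if not mask.any():
            return -np.inf
        return cross_val_score(LinearRegression(), X[:, mask], y, cv=5).mean()

    pop = rng.integers(0, 2, size=(20, 12)).astype(bool)   # random subsets
    for gen in range(30):
        scores = np.array([fitness(m) for m in pop])
        parents = pop[np.argsort(scores)[::-1][:10]]       # truncation selection
        children = []
        for _ in range(10):
            a, b = parents[rng.integers(0, 10, size=2)]
            child = np.where(rng.random(12) < 0.5, a, b)   # uniform crossover
            child ^= rng.random(12) < 0.05                 # bit-flip mutation
            children.append(child)
        pop = np.vstack([parents, children])

    best = pop[np.argmax([fitness(m) for m in pop])]
    print("selected inputs:", np.flatnonzero(best))
    ```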

  1. Computer-program documentation of an interactive-accounting model to simulate streamflow, water quality, and water-supply operations in a river basin

    USGS Publications Warehouse

    Burns, A.W.

    1988-01-01

    This report describes an interactive-accounting model used to simulate streamflow, chemical-constituent concentrations and loads, and water-supply operations in a river basin. The model uses regression equations to compute flow from incremental (internode) drainage areas. Conservative chemical constituents (typically dissolved solids) also are computed from regression equations. Both flow and water-quality loads are accumulated downstream. Optionally, the model simulates the water use and the simplified groundwater systems of a basin. Water users include agricultural, municipal, industrial, and in-stream users, and reservoir operators. Water users list their potential water sources, including direct diversions, groundwater pumpage, interbasin imports, or reservoir releases, in the order in which they will be used. Direct diversions conform to basinwide water-law priorities. The model is interactive, and although the input data exist in files, the user can modify them interactively. A major feature of the model is its color-graphic-output options. This report includes a description of the model, organizational charts of subroutines, and examples of the graphics. Detailed format instructions for the input data, example files of input data, definitions of program variables, and a listing of the FORTRAN source code are attachments to the report. (USGS)

  2. Muscle synergies in neuroscience and robotics: from input-space to task-space perspectives.

    PubMed

    Alessandro, Cristiano; Delis, Ioannis; Nori, Francesco; Panzeri, Stefano; Berret, Bastien

    2013-01-01

    In this paper we review the works related to muscle synergies that have been carried out in neuroscience and control engineering. In particular, we refer to the hypothesis that the central nervous system (CNS) generates desired muscle contractions by combining a small number of predefined modules, called muscle synergies. We provide an overview of the methods that have been employed to test the validity of this scheme, and we show how the concept of muscle synergy has been generalized for the control of artificial agents. The comparison between these two lines of research, in particular their different goals and approaches, is instrumental to explain the computational implications of the hypothesized modular organization. Moreover, it clarifies the importance of assessing the functional role of muscle synergies: although these basic modules are defined at the level of muscle activations (input-space), they should result in the effective accomplishment of the desired task. This requirement is not always explicitly considered in experimental neuroscience, as muscle synergies are often estimated solely by analyzing recorded muscle activities. We suggest that synergy extraction methods should explicitly take into account task execution variables, thus moving from a perspective purely based on input-space to one grounded on task-space as well.

  3. Dynamic Target Match Signals in Perirhinal Cortex Can Be Explained by Instantaneous Computations That Act on Dynamic Input from Inferotemporal Cortex

    PubMed Central

    Pagan, Marino

    2014-01-01

    Finding sought objects requires the brain to combine visual and target signals to determine when a target is in view. To investigate how the brain implements these computations, we recorded neural responses in inferotemporal cortex (IT) and perirhinal cortex (PRH) as macaque monkeys performed a delayed-match-to-sample target search task. Our data suggest that visual and target signals were combined within or before IT in the ventral visual pathway and then passed onto PRH, where they were reformatted into a more explicit target match signal over ∼10–15 ms. Accounting for these dynamics in PRH did not require proposing dynamic computations within PRH itself but, rather, could be attributed to instantaneous PRH computations performed upon an input representation from IT that changed with time. We found that the dynamics of the IT representation arose from two commonly observed features: individual IT neurons whose response preferences were not simply rescaled with time and variable response latencies across the population. Our results demonstrate that these types of time-varying responses have important consequences for downstream computation and suggest that dynamic representations can arise within a feedforward framework as a consequence of instantaneous computations performed upon time-varying inputs. PMID:25122904

  4. NSR&D Program Fiscal Year 2015 Funded Research Stochastic Modeling of Radioactive Material Releases Final Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Andrus, Jason P.; Pope, Chad; Toston, Mary

    2016-12-01

    Nonreactor nuclear facilities operating under the approval authority of the U.S. Department of Energy use unmitigated hazard evaluations to determine if potential radiological doses associated with design basis events challenge or exceed dose evaluation guidelines. Unmitigated design basis events that sufficiently challenge dose evaluation guidelines or exceed the guidelines for members of the public or workers merit selection of safety structures, systems, or components or other controls to prevent or mitigate the hazard. Idaho State University, in collaboration with Idaho National Laboratory, has developed a portable and simple-to-use software application called SODA (Stochastic Objective Decision-Aide) that stochastically calculates the radiation dose distribution associated with hypothetical radiological material release scenarios. Rather than producing a point estimate of the dose, SODA produces a dose distribution result to allow a deeper understanding of the dose potential. SODA allows users to select the distribution type and parameter values for all of the input variables used to perform the dose calculation. Users can also specify custom distributions through a user-defined distribution option. SODA then randomly samples each distribution input variable and calculates the overall resulting dose distribution. In cases where an input variable distribution is unknown, a traditional single point value can be used. SODA, developed using the MATLAB coding framework, has a graphical user interface and can be installed on both Windows and Mac computers. SODA is a standalone software application and does not require MATLAB to function. SODA provides improved risk understanding leading to better informed decision making associated with establishing nuclear facility material-at-risk limits and safety structure, system, or component selection. It is important to note that SODA does not replace or compete with codes such as MACCS or RSAC; rather it is viewed as an easy to use supplemental tool to help improve risk understanding and support better informed decisions. The SODA development project was funded through a grant from the DOE Nuclear Safety Research and Development Program.
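
    The generic Monte Carlo structure that SODA automates (one user-chosen distribution per input variable, random sampling, a resulting dose distribution) can be sketched as follows; the dose equation and every distribution here are illustrative assumptions, not SODA's actual model.

    ```python
    import numpy as np

    rng = np.random.default_rng(6)
    N = 100_000

    # One distribution per input variable (illustrative choices only).
    mar = rng.lognormal(np.log(2.0), 0.3, N)      # material at risk (g)
    arf = rng.uniform(1e-4, 1e-3, N)              # airborne release fraction
    chi_q = rng.lognormal(np.log(1e-4), 0.5, N)   # dispersion factor (s/m^3)
    br = 3.3e-4                                   # breathing rate, point value
    dcf = 5.0e-2                                  # dose factor, point value

    # Generic source-term x dispersion x dose-factor form (not SODA's model).
    dose = mar * arf * chi_q * br * dcf * 1e3     # arbitrary mSv scaling

    print(f"mean {dose.mean():.3g} mSv, "
          f"95th percentile {np.percentile(dose, 95):.3g} mSv")
    ```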

  5. Music preferences with hearing aids: effects of signal properties, compression settings, and listener characteristics.

    PubMed

    Croghan, Naomi B H; Arehart, Kathryn H; Kates, James M

    2014-01-01

    Current knowledge of how to design and fit hearing aids to optimize music listening is limited. Many hearing-aid users listen to recorded music, which often undergoes compression limiting (CL) in the music industry. Therefore, hearing-aid users may experience twofold effects of compression when listening to recorded music: music-industry CL and hearing-aid wide dynamic-range compression (WDRC). The goal of this study was to examine the roles of input-signal properties, hearing-aid processing, and individual variability in the perception of recorded music, with a focus on the effects of dynamic-range compression. A group of 18 experienced hearing-aid users made paired-comparison preference judgments for classical and rock music samples using simulated hearing aids. Music samples were either unprocessed before hearing-aid input or had different levels of music-industry CL. Hearing-aid conditions included linear gain and individually fitted WDRC. Combinations of four WDRC parameters were included: fast release time (50 msec), slow release time (1,000 msec), three channels, and 18 channels. Listeners also completed several psychophysical tasks. Acoustic analyses showed that CL and WDRC reduced temporal envelope contrasts, changed amplitude distributions across the acoustic spectrum, and smoothed the peaks of the modulation spectrum. Listener judgments revealed that fast WDRC was least preferred for both genres of music. For classical music, linear processing and slow WDRC were equally preferred, and the main effect of number of channels was not significant. For rock music, linear processing was preferred over slow WDRC, and three channels were preferred to 18 channels. Heavy CL was least preferred for classical music, but the amount of CL did not change the patterns of WDRC preferences for either genre. Auditory filter bandwidth as estimated from psychophysical tuning curves was associated with variability in listeners' preferences for classical music. Fast, multichannel WDRC often leads to poor music quality, whereas linear processing or slow WDRC are generally preferred. Furthermore, the effect of WDRC is more important for music preferences than music-industry CL applied to signals before the hearing-aid input stage. Variability in hearing-aid users' perceptions of music quality may be partially explained by frequency resolution abilities.

  6. NSR&D Program Fiscal Year 2015 Funded Research Stochastic Modeling of Radioactive Material Releases Final Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Andrus, Jason P.; Pope, Chad; Toston, Mary

    Nonreactor nuclear facilities operating under the approval authority of the U.S. Department of Energy use unmitigated hazard evaluations to determine if potential radiological doses associated with design basis events challenge or exceed dose evaluation guidelines. Unmitigated design basis events that sufficiently challenge dose evaluation guidelines or exceed the guidelines for members of the public or workers merit selection of safety structures, systems, or components or other controls to prevent or mitigate the hazard. Idaho State University, in collaboration with Idaho National Laboratory, has developed a portable and simple-to-use software application called SODA (Stochastic Objective Decision-Aide) that stochastically calculates the radiation dose distribution associated with hypothetical radiological material release scenarios. Rather than producing a point estimate of the dose, SODA produces a dose distribution result to allow a deeper understanding of the dose potential. SODA allows users to select the distribution type and parameter values for all of the input variables used to perform the dose calculation. Users can also specify custom distributions through a user-defined distribution option. SODA then randomly samples each distribution input variable and calculates the overall resulting dose distribution. In cases where an input variable distribution is unknown, a traditional single point value can be used. SODA, developed using the MATLAB coding framework, has a graphical user interface and can be installed on both Windows and Mac computers. SODA is a standalone software application and does not require MATLAB to function. SODA provides improved risk understanding leading to better informed decision making associated with establishing nuclear facility material-at-risk limits and safety structure, system, or component selection. It is important to note that SODA does not replace or compete with codes such as MACCS or RSAC; rather it is viewed as an easy to use supplemental tool to help improve risk understanding and support better informed decisions. The SODA development project was funded through a grant from the DOE Nuclear Safety Research and Development Program.

  7. Regenerative braking device

    DOEpatents

    Hoppie, Lyle O.

    1982-01-12

    Disclosed are several embodiments of a regenerative braking device for an automotive vehicle. The device includes a plurality of rubber rollers (24, 26) mounted for rotation between an input shaft (14) connectable to the vehicle drivetrain and an output shaft (16) which is drivingly connected to the input shaft by a variable ratio transmission (20). When the transmission ratio is such that the input shaft rotates faster than the output shaft, the rubber rollers are torsionally stressed to accumulate energy, thereby slowing the vehicle. When the transmission ratio is such that the output shaft rotates faster than the input shaft, the rubber rollers are torsionally relaxed to deliver accumulated energy, thereby accelerating or driving the vehicle.

  8. RATIO COMPUTER

    DOEpatents

    Post, R.F.

    1958-11-11

    An electronic computer circuit is described for producing an output voltage proportional to the product or quotient of the voltages of a pair of input signals. In essence, the disclosed invention provides a computer having two channels adapted to receive separate input signals and each having amplifiers with like fixed amplification factors and like negative feedback amplifiers. One of the channels receives a constant signal for comparison purposes, whereby a difference signal is produced to control the amplification factors of the variable feedback amplifiers. The output of the other channel is thereby proportional to the product or quotient of input signals depending upon the relation of input to fixed signals in the first mentioned channel.

  9. Expanding the occupational health methodology: A concatenated artificial neural network approach to model the burnout process in Chinese nurses.

    PubMed

    Ladstätter, Felix; Garrosa, Eva; Moreno-Jiménez, Bernardo; Ponsoda, Vicente; Reales Aviles, José Manuel; Dai, Junming

    2016-01-01

    Artificial neural networks are sophisticated modelling and prediction tools capable of extracting complex, non-linear relationships between predictor (input) and predicted (output) variables. This study explores this capacity by modelling non-linearities in the hardiness-modulated burnout process with a neural network. Specifically, two multi-layer feed-forward artificial neural networks are concatenated in an attempt to model the composite non-linear burnout process. Sensitivity analysis, a Monte Carlo-based global simulation technique, is then utilised to examine the first-order effects of the predictor variables on the burnout sub-dimensions and consequences. Results show that (1) this concatenated artificial neural network approach is feasible to model the burnout process, (2) sensitivity analysis is a prolific method to study the relative importance of predictor variables and (3) the relationships among variables involved in the development of burnout and its consequences are to different degrees non-linear. Many relationships among variables (e.g., stressors and strains) are not linear, yet researchers use linear methods such as Pearson correlation or linear regression to analyse these relationships. Artificial neural network analysis is an innovative method to analyse non-linear relationships and, in combination with sensitivity analysis, is superior to linear methods.
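
    A compact sketch of the concatenation idea using scikit-learn's MLPRegressor on synthetic data (the study's predictors, architectures, and Monte Carlo sensitivity method differ; everything named below is illustrative):

        import numpy as np
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(0)
        X = rng.normal(size=(300, 6))   # predictors, e.g. stressors and hardiness (illustrative)
        burnout = np.tanh(1.5 * X[:, :3]) + 0.1 * rng.normal(size=(300, 3))  # synthetic sub-dimensions
        outcome = (burnout ** 2).sum(axis=1) + 0.1 * rng.normal(size=300)    # synthetic consequence

        # Network 1: predictors -> burnout sub-dimensions
        net1 = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0).fit(X, burnout)
        # Network 2: predicted sub-dimensions -> consequence (the concatenation step)
        net2 = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0).fit(net1.predict(X), outcome)

        # Crude first-order sensitivity: perturb one input at a time
        base = net2.predict(net1.predict(X)).mean()
        for j in range(X.shape[1]):
            Xp = X.copy()
            Xp[:, j] += X[:, j].std()
            print(f"input {j}: mean shift {net2.predict(net1.predict(Xp)).mean() - base:+.3f}")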

  10. Tropical rainforests dominate multi-decadal variability of the global carbon cycle

    NASA Astrophysics Data System (ADS)

    Zhang, X.; Wang, Y. P.; Peng, S.; Rayner, P. J.; Silver, J.; Ciais, P.; Piao, S.; Zhu, Z.; Lu, X.; Zheng, X.

    2017-12-01

    Recent studies find that the inter-annual variability of global atmosphere-to-land CO2 uptake (NBP) is dominated by semi-arid ecosystems. However, NBP variations at decadal to multi-decadal timescales are still not known. By developing a basic theory for the role of net primary production (NPP) and heterotrophic respiration (Rh) in NBP and applying it to 100-year simulations of terrestrial ecosystem models forced by observational climate, we find that tropical rainforests dominate the multi-decadal variability of global NBP (48%) rather than the semi-arid lands (35%). NBP variation at inter-annual timescales is almost 90% contributed by NPP, but across longer timescales it is progressively controlled by Rh, which comprises the response from the NPP-derived soil carbon input (40%) and the response of soil carbon turnover rates to climate variability (60%). The NBP variations of tropical rainforests are modulated by the ENSO and the PDO through their significant influences on temperature and precipitation at timescales of 2.5-7 and 25-50 years, respectively. This study highlights the importance of tropical rainforests in the multi-decadal variability of the global carbon cycle, suggesting that long-term NBP fluctuations associated with ocean-related climate modes need to be carefully differentiated from the long-term trend in the land sink.

  11. Modern deposition rates and patterns of organic carbon burial in Fiordland, New Zealand

    NASA Astrophysics Data System (ADS)

    Ramirez, Michael T.; Allison, Mead A.; Bianchi, Thomas S.; Cui, Xingqian; Savage, Candida; Schüller, Susanne E.; Smith, Richard W.; Vetter, Lael

    2016-11-01

    Fjords are disproportionately important for global organic carbon (OC) burial relative to their spatial extent and may be important in sequestering atmospheric CO2, providing a negative climate feedback. Within fjords, multiple locally variable delivery mechanisms control mineral sediment deposition, which in turn modulates OC burial. Sediment and OC sources in Fiordland, New Zealand, include terrigenous input at fjord heads, sediment reworking over fjord-mouth sills, and landslide events from steep fjord walls. Box cores were analyzed for sedimentary texture, sediment accumulation rate, and OC content to evaluate the relative importance of each delivery mechanism. Sediment accumulation was up to 3.4 mm/yr in proximal and distal fjord areas, with lower rates in medial reaches. X-radiograph and 210Pb stratigraphy indicate mass wasting and surface-sediment bioturbation throughout the fjords. Sediment accumulation rates are inversely correlated with %OC. Spatial heterogeneity in sediment depositional processes and rates is important when evaluating OC burial within fjords.

  12. The Impact of Early Social Interactions on Later Language Development in Spanish-English Bilingual Infants

    ERIC Educational Resources Information Center

    Ramírez-Esparza, Nairán; García-Sierra, Adrián; Kuhl, Patricia K.

    2017-01-01

    This study tested the impact of child-directed language input on language development in Spanish-English bilingual infants (N = 25, 11- and 14-month-olds from the Seattle metropolitan area), across languages and independently for each language, controlling for socioeconomic status. Language input was characterized by social interaction variables,…

  13. TRANDESNF: A computer program for transonic airfoil design and analysis in nonuniform flow

    NASA Technical Reports Server (NTRS)

    Chang, J. F.; Lan, C. Edward

    1987-01-01

    The use of a transonic airfoil code for analysis, inverse design, and direct optimization of an airfoil immersed in a propfan slipstream is described. A summary of the theoretical method, program capabilities, input format, output variables, and program execution is given. Input data for sample test cases and the corresponding output are given.

  14. MURI: Impact of Oceanographic Variability on Acoustic Communications

    DTIC Science & Technology

    2011-09-01

    multiplexing (OFDM), multiple-input/multiple-output (MIMO) transmissions, and multi-user single-input/multiple-output (SIMO) communications. Lastly... "MIMO-OFDM communications: Receiver design for Doppler-distorted underwater acoustic channels," Proc. Asilomar Conf. on Signals, Systems, and... (MIMO) will be of particular interest. Validating experimental data will be obtained during the ONR acoustic communications experiment in summer 2008

  15. Simple Sensitivity Analysis for Orion Guidance Navigation and Control

    NASA Technical Reports Server (NTRS)

    Pressburger, Tom; Hoelscher, Brian; Martin, Rodney; Sricharan, Kumar

    2013-01-01

    The performance of Orion flight software, especially its GNC software, is being analyzed by running Monte Carlo simulations of Orion spacecraft flights. The simulated performance is analyzed for conformance with flight requirements, expressed as performance constraints. Flight requirements include guidance (e.g., touchdown distance from target) and control (e.g., control saturation) as well as performance (e.g., heat load constraints). The Monte Carlo simulations disperse hundreds of simulation input variables, for everything from mass properties to date of launch. We describe in this paper a sensitivity analysis tool ("Critical Factors Tool" or CFT) developed to find the input variables or pairs of variables which by themselves significantly influence satisfaction of requirements or significantly affect key performance metrics (e.g., touchdown distance from target). Knowing these factors can inform robustness analysis, can inform where engineering resources are most needed, and could even affect operations. The contributions of this paper include the introduction of novel sensitivity measures, such as estimating success probability, and a technique for determining whether pairs of factors are interacting dependently or independently. Input variables such as moments, mass, thrust dispersions, and date of launch were found to be significant factors for the success of various requirements. Examples are shown in this paper, as well as a summary and physics discussion of the EFT-1 driving factors that the tool found.
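
    The success-probability sensitivity measure might be sketched as follows; the binning scheme and variable names are assumptions, not the Critical Factors Tool itself:

        import numpy as np

        def success_prob_sensitivity(inputs, success, n_bins=5):
            """For each dispersed input, estimate P(requirement satisfied) within
            bins of that input; a large spread across bins marks a critical factor.
            (A sketch of the idea, not the actual Critical Factors Tool.)"""
            spread = {}
            for name, x in inputs.items():
                edges = np.quantile(x, np.linspace(0, 1, n_bins + 1))
                idx = np.clip(np.searchsorted(edges, x, side="right") - 1, 0, n_bins - 1)
                probs = [success[idx == b].mean() for b in range(n_bins) if (idx == b).any()]
                spread[name] = max(probs) - min(probs)
            return spread

        # Synthetic demo: 'mass' drives failure, 'launch_day' does not.
        rng = np.random.default_rng(1)
        inputs = {"mass": rng.normal(size=10_000), "launch_day": rng.uniform(size=10_000)}
        success = inputs["mass"] < 1.0
        print(success_prob_sensitivity(inputs, success))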

  16. Statistical learning from nonrecurrent experience with discrete input variables and recursive-error-minimization equations

    NASA Astrophysics Data System (ADS)

    Carter, Jeffrey R.; Simon, Wayne E.

    1990-08-01

    Neural networks are trained using Recursive Error Minimization (REM) equations to perform statistical classification. Using REM equations with continuous input variables reduces the required number of training experiences by factors of one to two orders of magnitude over standard back propagation. Replacing the continuous input variables with discrete binary representations reduces the number of connections by a factor proportional to the number of variables, reducing the required number of experiences by another order of magnitude. Undesirable effects of using recurrent experience to train neural networks for statistical classification problems are demonstrated, and nonrecurrent experience is used to avoid these undesirable effects. 1. THE 1-4I PROBLEM: The statistical classification problem which we address is that of assigning points in d-dimensional space to one of two classes. The first class has a covariance matrix of I (the identity matrix); the covariance matrix of the second class is 4I. For this reason the problem is known as the 1-4I problem. Both classes have equal probability of occurrence, and samples from both classes may appear anywhere throughout the d-dimensional space. Most samples near the origin of the coordinate system will be from the first class, while most samples away from the origin will be from the second class. Since the two classes completely overlap, it is impossible to have a classifier with zero error. The minimum possible error is known as the Bayes error and
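
    Because the optimal rule for two equiprobable zero-mean Gaussian classes with covariances I and 4I depends on a sample only through its squared radius, the Bayes error is easy to estimate by Monte Carlo; a short sketch (not the paper's code):

        import numpy as np

        def bayes_error_1_4I(d, n=200_000, seed=0):
            """Monte Carlo estimate of the Bayes error for the 1-4I problem:
            two equiprobable zero-mean Gaussian classes with covariances I and 4I
            in d dimensions. The optimal rule thresholds the log-likelihood ratio,
            which depends on x only through its squared radius r2 = ||x||^2."""
            rng = np.random.default_rng(seed)
            errors = 0
            for cls, scale in ((0, 1.0), (1, 2.0)):   # std 1 and std 2 (cov I and 4I)
                x = rng.normal(scale=scale, size=(n, d))
                r2 = (x ** 2).sum(axis=1)
                # log p0 - log p1 = -r2/2 + r2/8 + d*log(2); decide class 0 if positive
                decide0 = (-r2 / 2 + r2 / 8 + d * np.log(2)) > 0
                errors += (decide0 != (cls == 0)).sum()
            return errors / (2 * n)

        print(bayes_error_1_4I(d=2))   # about 0.26 in two dimensions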

  17. Fuzzy parametric uncertainty analysis of linear dynamical systems: A surrogate modeling approach

    NASA Astrophysics Data System (ADS)

    Chowdhury, R.; Adhikari, S.

    2012-10-01

    Uncertainty propagation in engineering systems poses significant computational challenges. This paper explores the possibility of using a correlated-function-expansion-based metamodelling approach when uncertain system parameters are modeled using fuzzy variables. In particular, the application of High-Dimensional Model Representation (HDMR) is proposed for fuzzy finite element analysis of dynamical systems. The HDMR expansion is a set of quantitative model assessment and analysis tools for capturing high-dimensional input-output system behavior based on a hierarchy of functions of increasing dimensions. The input variables may be either finite-dimensional (i.e., a vector of parameters chosen from the Euclidean space R^M) or infinite-dimensional, as in the function space C^M[0,1]. The computational effort to determine the expansion functions using the alpha-cut method scales polynomially, rather than exponentially, with the number of variables. This is based on the fundamental assumption underlying the HDMR representation that only low-order correlations among the input variables are likely to have significant impacts upon the outputs of most high-dimensional complex systems. The proposed method is integrated with commercial finite element software. Modal analysis of a simplified aircraft wing with fuzzy parameters is used to illustrate the generality of the proposed approach. In the numerical examples, triangular membership functions have been used and the results have been validated against direct Monte Carlo simulations.
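
    A first-order cut-HDMR surrogate is straightforward to sketch. The toy response below is an assumption standing in for the finite element model, and the alpha-cut step (evaluating the surrogate at each cut's interval endpoints) is omitted:

        import numpy as np

        def cut_hdmr_first_order(f, x_ref, grids):
            """First-order cut-HDMR: approximate f(x) ~ f0 + sum_i f_i(x_i), where
            f_i(x_i) = f(x_ref with component i set to x_i) - f0. The number of
            model evaluations grows linearly with the number of variables
            (polynomially for higher orders) instead of exponentially."""
            f0 = f(x_ref)
            components = []
            for i, grid in enumerate(grids):
                vals = []
                for xi in grid:
                    x = x_ref.copy()
                    x[i] = xi
                    vals.append(f(x) - f0)
                components.append(np.array(vals))
            return f0, components

        # Demo on a cheap stand-in for an expensive FE model (illustrative).
        f = lambda x: x[0] ** 2 + 0.5 * x[0] * x[1] + np.sin(x[2])
        x_ref = np.zeros(3)
        grids = [np.linspace(-1, 1, 5)] * 3
        f0, comps = cut_hdmr_first_order(f, x_ref, grids)
        print(f0, [c.round(3) for c in comps])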

  18. The method of belief scales as a means for dealing with uncertainty in tough regulatory decisions.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pilch, Martin M.

    Modeling and simulation is playing an increasing role in supporting tough regulatory decisions, which are typically characterized by variabilities and uncertainties in the scenarios, input conditions, failure criteria, model parameters, and even model form. Variability exists when there is a statistically significant database that is fully relevant to the application. Uncertainty, on the other hand, is characterized by some degree of ignorance. A simple algebraic problem was used to illustrate how various risk methodologies address variability and uncertainty in a regulatory context. These traditional risk methodologies include probabilistic methods (including frequentist and Bayesian perspectives) and second-order methods where variabilities and uncertainties are treated separately. Representing uncertainties with (subjective) probability distributions and using probabilistic methods to propagate subjective distributions can lead to results that are not logically consistent with available knowledge and that may not be conservative. The Method of Belief Scales (MBS) is developed as a means to logically aggregate uncertain input information and to propagate that information through the model to a set of results that are scrutable, easily interpretable by the nonexpert, and logically consistent with the available input information. The MBS, particularly in conjunction with sensitivity analyses, has the potential to be more computationally efficient than other risk methodologies. The regulatory language must be tailored to the specific risk methodology if ambiguity and conflict are to be avoided.

  19. A latent low-dimensional common input drives a pool of motor neurons: a probabilistic latent state-space model.

    PubMed

    Feeney, Daniel F; Meyer, François G; Noone, Nicholas; Enoka, Roger M

    2017-10-01

    Motor neurons appear to be activated with a common input signal that modulates the discharge activity of all neurons in the motor nucleus. It has proven difficult for neurophysiologists to quantify the variability in a common input signal, but characterization of such a signal may improve our understanding of how the activation signal varies across motor tasks. Contemporary methods of quantifying the common input to motor neurons rely on compiling discrete action potentials into continuous time series, assuming the motor pool acts as a linear filter, and requiring signals to be of sufficient duration for frequency analysis. We introduce a state-space model in which the discharge activity of motor neurons is modeled as inhomogeneous Poisson processes and propose a method to quantify an abstract latent trajectory that represents the common input received by motor neurons. The approach also approximates the variation in synaptic noise in the common input signal. The model is validated with four data sets: a simulation of 120 motor units, a pair of integrate-and-fire neurons with a Renshaw cell providing inhibitory feedback, the discharge activity of 10 integrate-and-fire neurons, and the discharge times of concurrently active motor units during an isometric voluntary contraction. The simulations revealed that a latent state-space model is able to quantify the trajectory and variability of the common input signal across all four conditions. When compared with the cumulative spike train method of characterizing common input, the state-space approach was more sensitive to the details of the common input current and was less influenced by the duration of the signal. The state-space approach appears to be capable of detecting rather modest changes in common input signals across conditions. NEW & NOTEWORTHY We propose a state-space model that explicitly delineates a common input signal sent to motor neurons and the physiological noise inherent in synaptic signal transmission. This is the first application of a deterministic state-space model to represent the discharge characteristics of motor units during voluntary contractions. Copyright © 2017 the American Physiological Society.
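
    The generative side of such a model can be sketched as motor units discharging as inhomogeneous Poisson processes driven by a shared latent input; the paper's actual contribution, inferring the latent trajectory from spikes, is not shown, and all waveforms and gains below are illustrative:

        import numpy as np

        rng = np.random.default_rng(3)
        dt, T, n_units = 0.001, 10.0, 10   # 1 ms bins, 10 s, 10 motor units
        t = np.arange(0, T, dt)

        # Latent common input: slow drive plus synaptic noise (illustrative forms)
        common = 8 + 3 * np.sin(2 * np.pi * 0.2 * t) + rng.normal(0, 0.5, t.size)

        # Each unit discharges as an inhomogeneous Poisson process whose rate is
        # a unit-specific gain on the shared latent input (rates in spikes/s).
        gains = rng.uniform(0.8, 1.2, n_units)
        rates = np.clip(np.outer(gains, common), 0, None)
        spikes = rng.random((n_units, t.size)) < rates * dt   # Bernoulli thinning on a fine grid

        print("mean rates (spikes/s):", spikes.sum(axis=1) / T)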

  20. Smart Frameworks and Self-Describing Models: Model Metadata for Automated Coupling of Hydrologic Process Components (Invited)

    NASA Astrophysics Data System (ADS)

    Peckham, S. D.

    2013-12-01

    Model coupling frameworks like CSDMS (Community Surface Dynamics Modeling System) and ESMF (Earth System Modeling Framework) have developed mechanisms that allow heterogeneous sets of process models to be assembled in a plug-and-play manner to create composite "system models". These mechanisms facilitate code reuse, but must simultaneously satisfy many different design criteria. They must be able to mediate or compensate for differences between the process models, such as their different programming languages, computational grids, time-stepping schemes, variable names and variable units. However, they must achieve this interoperability in a way that: (1) is noninvasive, requiring only relatively small and isolated changes to the original source code, (2) does not significantly reduce performance, (3) is not time-consuming or confusing for a model developer to implement, (4) can very easily be updated to accommodate new versions of a given process model and (5) does not shift the burden of providing model interoperability to the model developers, e.g. by requiring them to provide their output in specific forms that meet the input requirements of other models. In tackling these design challenges, model framework developers have learned that the best solution is to provide each model with a simple, standardized interface, i.e. a set of standardized functions that make the model: (1) fully-controllable by a caller (e.g. a model framework) and (2) self-describing. Model control functions are separate functions that allow a caller to initialize the model, advance the model's state variables in time and finalize the model. Model description functions allow a caller to retrieve detailed information on the model's input and output variables, its computational grid and its timestepping scheme. If the caller is a modeling framework, it can compare the answers to these queries with similar answers from other process models in a collection and then automatically call framework service components as necessary to mediate the differences between the coupled models. This talk will first review two key products of the CSDMS project, namely a standardized model interface called the Basic Model Interface (BMI) and the CSDMS Standard Names. The standard names are used in conjunction with BMI to provide a semantic matching mechanism that allows output variables from one process model to be reliably used as input variables to other process models in a collection. They include not just a standardized naming scheme for model variables, but also a standardized set of terms for describing the attributes and assumptions of a given model. To illustrate the power of standardized model interfaces and metadata, a smart, light-weight modeling framework written in Python will be introduced that can automatically (without user intervention) couple a set of BMI-enabled hydrologic process components together to create a spatial hydrologic model. The same mechanisms could also be used to provide seamless integration (import/export) of data and models.
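
    A toy component with a BMI-flavored interface illustrates the control/description split described above; the method names follow the BMI convention, but the model, variable name, and configuration below are invented for illustration:

        import numpy as np

        class LinearReservoir:
            """A toy hydrologic component exposing a BMI-style interface
            (a small subset of the real Basic Model Interface)."""

            # --- control functions: the caller drives the model ---
            def initialize(self, config=None):
                self.k = 0.1 if config is None else config["k"]   # outflow coefficient, 1/day
                self.storage = np.array([100.0])                  # state, mm
                self.time, self.dt = 0.0, 1.0                     # days

            def update(self):
                self.storage -= self.k * self.storage * self.dt
                self.time += self.dt

            def finalize(self):
                self.storage = None

            # --- description functions: the framework queries, not hard-codes ---
            def get_output_var_names(self):
                return ("soil_water__depth",)   # would follow the CSDMS Standard Names

            def get_value(self, name):
                assert name == "soil_water__depth"
                return self.storage.copy()

            def get_current_time(self):
                return self.time

        m = LinearReservoir()
        m.initialize()
        while m.get_current_time() < 5:
            m.update()
        print(m.get_value("soil_water__depth"))
        m.finalize()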

  1. The use of singular value gradients and optimization techniques to design robust controllers for multiloop systems

    NASA Technical Reports Server (NTRS)

    Newsom, J. R.; Mukhopadhyay, V.

    1983-01-01

    A method for designing robust feedback controllers for multiloop systems is presented. Robustness is characterized in terms of the minimum singular value of the system return difference matrix at the plant input. Analytical gradients of the singular values with respect to design variables in the controller are derived. A cumulative measure of the singular values and their gradients with respect to the design variables is used with a numerical optimization technique to increase the system's robustness. Both unconstrained and constrained optimization techniques are evaluated. Numerical results are presented for a two-input/two-output drone flight control system.
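
    The robustness measure itself is simple to compute; a sketch with an assumed two-input/two-output plant and gain matrix (not the paper's drone flight control system):

        import numpy as np

        def min_sv_return_difference(A, B, K, omegas):
            """Minimum singular value of the return difference matrix I + K*G(jw)
            at the plant input, with G(jw) = (jwI - A)^-1 B (full-state output
            assumed here); a common multiloop robustness measure."""
            n, m = B.shape
            sv_min = []
            for w in omegas:
                G = np.linalg.solve(1j * w * np.eye(n) - A, B)   # (jwI - A)^-1 B
                F = np.eye(m) + K @ G                            # return difference at input
                sv_min.append(np.linalg.svd(F, compute_uv=False).min())
            return np.array(sv_min)

        A = np.array([[0.0, 1.0], [-2.0, -0.5]])   # illustrative plant
        B = np.eye(2)
        K = 0.5 * np.eye(2)                        # feedback gains (the design variables)
        omegas = np.logspace(-1, 2, 50)
        print("worst-case min singular value:", min_sv_return_difference(A, B, K, omegas).min())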

  2. FASTER 3: A generalized-geometry Monte Carlo computer program for the transport of neutrons and gamma rays. Volume 2: Users manual

    NASA Technical Reports Server (NTRS)

    Jordan, T. M.

    1970-01-01

    A description of the FASTER-III program for Monte Carlo calculation of photon and neutron transport in complex geometries is presented. Major revisions include the capability of calculating minimum weight shield configurations for primary and secondary radiation and optimal importance sampling parameters. The program description includes a users manual describing the preparation of input data cards, the printout from a sample problem including the data card images, definitions of Fortran variables, the program logic, and the control cards required to run on the IBM 7094, IBM 360, UNIVAC 1108, and CDC 6600 computers.

  3. Reactor pressure vessel embrittlement: Insights from neural network modelling

    NASA Astrophysics Data System (ADS)

    Mathew, J.; Parfitt, D.; Wilford, K.; Riddle, N.; Alamaniotis, M.; Chroneos, A.; Fitzpatrick, M. E.

    2018-04-01

    Irradiation embrittlement of steel pressure vessels is an important consideration for the operation of current and future light water nuclear reactors. In this study we employ an ensemble of artificial neural networks in order to provide predictions of the embrittlement using two literature datasets, one based on US surveillance data and the second from the IVAR experiment. We use these networks to examine trends with input variables and to assess various literature models including compositional effects and the role of flux and temperature. Overall, the networks agree with the existing literature models and we comment on their more general use in predicting irradiation embrittlement.

  4. Optimisation of Ferrochrome Addition Using Multi-Objective Evolutionary and Genetic Algorithms for Stainless Steel Making via AOD Converter

    NASA Astrophysics Data System (ADS)

    Behera, Kishore Kumar; Pal, Snehanshu

    2018-03-01

    This paper describes a new approach towards optimum utilisation of ferrochrome added during stainless steel making in an AOD converter. The objective of optimisation is to enhance the end-blow chromium content of the steel and reduce the ferrochrome addition during refining. By developing a thermodynamics-based mathematical model, a study has been conducted to compute the optimum trade-off between ferrochrome addition and end-blow chromium content of stainless steel using a predator-prey genetic algorithm, trained on 100 datasets covering different input and output variables such as oxygen, argon, and nitrogen blowing rates, duration of blowing, initial bath temperature, chromium and carbon content, and the weight of ferrochrome added during refining. Optimisation is performed within constraints imposed on the input parameters, whose values fall within certain ranges. The analysis of the Pareto fronts is observed to generate a set of feasible optimal solutions between the two conflicting objectives, which provides an effective guideline for better ferrochrome utilisation. It is found that, beyond a certain critical range, further addition of ferrochrome does not affect the chromium percentage of the steel. Single-variable response analysis is performed to study the variation and interaction of all individual input parameters on the output variables.

  5. Variability of perceptual multistability: from brain state to individual trait

    PubMed Central

    Kleinschmidt, Andreas; Sterzer, Philipp; Rees, Geraint

    2012-01-01

    Few phenomena are as suitable as perceptual multistability to demonstrate that the brain constructively interprets sensory input. Several studies have outlined the neural circuitry involved in generating perceptual inference but only more recently has the individual variability of this inferential process been appreciated. Studies of the interaction of evoked and ongoing neural activity show that inference itself is not merely a stimulus-triggered process but is related to the context of the current brain state into which the processing of external stimulation is embedded. As brain states fluctuate, so does perception of a given sensory input. In multistability, perceptual fluctuation rates are consistent for a given individual but vary considerably between individuals. There has been some evidence for a genetic basis for these individual differences and recent morphometric studies of parietal lobe regions have identified neuroanatomical substrates for individual variability in spontaneous switching behaviour. Moreover, disrupting the function of these latter regions by transcranial magnetic stimulation yields systematic interference effects on switching behaviour, further arguing for a causal role of these regions in perceptual inference. Together, these studies have advanced our understanding of the biological mechanisms by which the brain constructs the contents of consciousness from sensory input. PMID:22371620

  6. Sequential Modular Position and Momentum Measurements of a Trapped Ion Mechanical Oscillator

    NASA Astrophysics Data System (ADS)

    Flühmann, C.; Negnevitsky, V.; Marinelli, M.; Home, J. P.

    2018-04-01

    The noncommutativity of position and momentum observables is a hallmark feature of quantum physics. However, this incompatibility does not extend to observables that are periodic in these base variables. Such modular-variable observables have been suggested as tools for fault-tolerant quantum computing and enhanced quantum sensing. Here, we implement sequential measurements of modular variables in the oscillatory motion of a single trapped ion, using state-dependent displacements and a heralded nondestructive readout. We investigate the commutative nature of modular variable observables by demonstrating no-signaling in time between successive measurements, using a variety of input states. Employing a different periodicity, we observe signaling in time. This also requires wave-packet overlap, resulting in quantum interference that we enhance using squeezed input states. The sequential measurements allow us to extract two-time correlators for modular variables, which we use to violate a Leggett-Garg inequality. Signaling in time and Leggett-Garg inequalities serve as efficient quantum witnesses, which we probe here with a mechanical oscillator, a system that has a natural crossover from the quantum to the classical regime.
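
    A short worked note, standard quantum mechanics rather than anything specific to this experiment, on why periodic (modular) observables can commute:

        % With [x, p] = i\hbar, the Weyl relation for displacement operators gives
        e^{iap/\hbar}\, e^{ibx/\hbar} = e^{iab/\hbar}\, e^{ibx/\hbar}\, e^{iap/\hbar},
        % i.e. the two unitaries commute up to the c-number phase e^{iab/\hbar},
        % so they commute exactly when that phase is trivial:
        ab = 2\pi\hbar n, \qquad n \in \mathbb{Z}.
        % Modular-variable observables are built from such periodic unitaries;
        % matched periodicities make sequential measurements non-disturbing
        % (no signaling in time), while other periodicities reintroduce
        % noncommutativity and hence signaling in time.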

  7. Small-scale temporal and spatial variability in the abundance of plastic pellets on sandy beaches: Methodological considerations for estimating the input of microplastics.

    PubMed

    Moreira, Fabiana Tavares; Prantoni, Alessandro Lívio; Martini, Bruno; de Abreu, Michelle Alves; Stoiev, Sérgio Biato; Turra, Alexander

    2016-01-15

    Microplastics such as pellets have been reported for many years on sandy beaches around the globe. Nevertheless, high variability is observed in their estimates and distribution patterns across the beach environment are still to be unravelled. Here, we investigate the small-scale temporal and spatial variability in the abundance of pellets in the intertidal zone of a sandy beach and evaluate factors that can increase the variability in data sets. The abundance of pellets was estimated during twelve consecutive tidal cycles, identifying the position of the high tide between cycles and sampling drift-lines across the intertidal zone. We demonstrate that beach dynamic processes such as the overlap of strandlines and artefacts of the methods can increase the small-scale variability. The results obtained are discussed in terms of the methodological considerations needed to understand the distribution of pellets in the beach environment, with special implications for studies focused on patterns of input. Copyright © 2015 Elsevier Ltd. All rights reserved.

  8. Data analytics using canonical correlation analysis and Monte Carlo simulation

    NASA Astrophysics Data System (ADS)

    Rickman, Jeffrey M.; Wang, Yan; Rollett, Anthony D.; Harmer, Martin P.; Compson, Charles

    2017-07-01

    A canonical correlation analysis is a generic parametric model used in the statistical analysis of data involving interrelated or interdependent input and output variables. It is especially useful in data analytics as a dimensional reduction strategy that simplifies a complex, multidimensional parameter space by identifying a relatively few combinations of variables that are maximally correlated. One shortcoming of the canonical correlation analysis, however, is that it provides only a linear combination of variables that maximizes these correlations. With this in mind, we describe here a versatile, Monte Carlo-based methodology that is useful in identifying non-linear functions of the variables that lead to strong input/output correlations. We demonstrate that our approach leads to a substantial enhancement of correlations, as illustrated by two experimental applications of substantial interest to the materials science community, namely: (1) determining the interdependence of processing and microstructural variables associated with doped polycrystalline aluminas, and (2) relating microstructural descriptors to the electrical and optoelectronic properties of thin-film solar cells based on CuInSe2 absorbers. Finally, we describe how this approach facilitates experimental planning and process control.
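
    Linear CCA reduces to a singular value decomposition of the whitened cross-covariance, and the Monte Carlo idea then amounts to a search over candidate non-linear transforms of the variables; a sketch with invented data and an assumed, small transform family:

        import numpy as np

        def first_canonical_corr(X, Y):
            """Largest canonical correlation between column sets X and Y, via the
            SVD of the whitened cross-covariance (standard linear CCA)."""
            Xc, Yc = X - X.mean(0), Y - Y.mean(0)
            Sxx = Xc.T @ Xc / len(X)
            Syy = Yc.T @ Yc / len(Y)
            Sxy = Xc.T @ Yc / len(X)
            Wx = np.linalg.inv(np.linalg.cholesky(Sxx))
            Wy = np.linalg.inv(np.linalg.cholesky(Syy))
            return np.linalg.svd(Wx @ Sxy @ Wy.T, compute_uv=False)[0]

        # Search over simple non-linear transforms of the inputs, keeping the one
        # that most improves the leading canonical correlation (sketch only; the
        # paper's transform family and search may differ).
        rng = np.random.default_rng(0)
        X = rng.uniform(0.5, 2.0, size=(500, 3))
        Y = np.column_stack([np.log(X[:, 0]) + 0.1 * rng.normal(size=500),
                             X[:, 1] ** 2 + 0.1 * rng.normal(size=500)])
        transforms = {"identity": lambda x: x, "log": np.log, "square": np.square, "sqrt": np.sqrt}
        best = max(((name, first_canonical_corr(f(X), Y)) for name, f in transforms.items()),
                   key=lambda kv: kv[1])
        print("best transform:", best)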

  9. From Heuristic to Mathematical Modeling of Drugs Dissolution Profiles: Application of Artificial Neural Networks and Genetic Programming

    PubMed Central

    Mendyk, Aleksander; Güres, Sinan; Szlęk, Jakub; Wiśniowska, Barbara; Kleinebudde, Peter

    2015-01-01

    The purpose of this work was to develop a mathematical model of the drug dissolution (Q) from the solid lipid extrudates based on the empirical approach. Artificial neural networks (ANNs) and genetic programming (GP) tools were used. Sensitivity analysis of ANNs provided reduction of the original input vector. GP allowed creation of the mathematical equation in two major approaches: (1) direct modeling of Q versus extrudate diameter (d) and the time variable (t) and (2) indirect modeling through Weibull equation. ANNs provided also information about minimum achievable generalization error and the way to enhance the original dataset used for adjustment of the equations' parameters. Two inputs were found important for the drug dissolution: d and t. The extrudates length (L) was found not important. Both GP modeling approaches allowed creation of relatively simple equations with their predictive performance comparable to the ANNs (root mean squared error (RMSE) from 2.19 to 2.33). The direct mode of GP modeling of Q versus d and t resulted in the most robust model. The idea of how to combine ANNs and GP in order to escape ANNs' black-box drawback without losing their superior predictive performance was demonstrated. Open Source software was used to deliver the state-of-the-art models and modeling strategies. PMID:26101544

  10. From Heuristic to Mathematical Modeling of Drugs Dissolution Profiles: Application of Artificial Neural Networks and Genetic Programming.

    PubMed

    Mendyk, Aleksander; Güres, Sinan; Jachowicz, Renata; Szlęk, Jakub; Polak, Sebastian; Wiśniowska, Barbara; Kleinebudde, Peter

    2015-01-01

    The purpose of this work was to develop a mathematical model of the drug dissolution (Q) from the solid lipid extrudates based on the empirical approach. Artificial neural networks (ANNs) and genetic programming (GP) tools were used. Sensitivity analysis of ANNs provided reduction of the original input vector. GP allowed creation of the mathematical equation in two major approaches: (1) direct modeling of Q versus extrudate diameter (d) and the time variable (t) and (2) indirect modeling through Weibull equation. ANNs provided also information about minimum achievable generalization error and the way to enhance the original dataset used for adjustment of the equations' parameters. Two inputs were found important for the drug dissolution: d and t. The extrudates length (L) was found not important. Both GP modeling approaches allowed creation of relatively simple equations with their predictive performance comparable to the ANNs (root mean squared error (RMSE) from 2.19 to 2.33). The direct mode of GP modeling of Q versus d and t resulted in the most robust model. The idea of how to combine ANNs and GP in order to escape ANNs' black-box drawback without losing their superior predictive performance was demonstrated. Open Source software was used to deliver the state-of-the-art models and modeling strategies.
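
    The indirect modelling route fits the Weibull dissolution equation first and then models its parameters as functions of the formulation variables; the fitting step can be sketched with SciPy (the time points and release values below are invented, not the study's data):

        import numpy as np
        from scipy.optimize import curve_fit

        def weibull_release(t, q_max, beta, alpha):
            """Weibull dissolution profile Q(t) = Qmax * (1 - exp(-(t/beta)^alpha))."""
            return q_max * (1.0 - np.exp(-(t / beta) ** alpha))

        # Illustrative data: % drug dissolved vs time in hours.
        t = np.array([1.0, 2, 4, 8, 12, 24, 48])
        q = np.array([8.0, 15, 27, 45, 57, 78, 93])

        params, _ = curve_fit(weibull_release, t, q, p0=[100.0, 10.0, 1.0])
        print("Qmax=%.1f  beta=%.1f  alpha=%.2f" % tuple(params))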

  11. Projecting the potential evapotranspiration by coupling different formulations and input data reliabilities: The possible uncertainty source for climate change impacts on hydrological regime

    NASA Astrophysics Data System (ADS)

    Wang, Weiguang; Li, Changni; Xing, Wanqiu; Fu, Jianyu

    2017-12-01

    Representing the atmospheric evaporating capability of a hypothetical reference surface, potential evapotranspiration (PET) determines the upper limit of actual evapotranspiration and is an important input to hydrological models. Because present climate models do not give direct estimates of PET when simulating the hydrological response to future climate change, PET must be estimated first and is subject to uncertainty arising from the many existing formulae and differing input data reliabilities. Using four different PET estimation approaches, i.e., the more physically based Penman (PN) equation with less reliable input variables, the more empirical radiation-based Priestley-Taylor (PT) equation with relatively dependable downscaled data, the simplest temperature-based Hamon (HM) equation with the most reliable downscaled variable, and downscaling PET directly with the statistical downscaling model, this paper investigated the differences in runoff projection caused by the alternative PET methods using a well-calibrated abcd monthly hydrological model. Three catchments, i.e., the Luanhe River Basin, the Source Region of the Yellow River and the Ganjiang River Basin, representing a large climatic diversity, were chosen as examples to illustrate this issue. The results indicated that although the four methods provided similar monthly patterns of PET over the period 2021-2050 for each catchment, the magnitudes of PET were still slightly different, especially for spring and summer months in the Luanhe River Basin and the Source Region of the Yellow River, which have relatively dry climates. The apparent discrepancy in the magnitude of change in future runoff, and even the diverse change directions for summer months in the Luanhe River Basin and spring months in the Source Region of the Yellow River, indicated that PET-method-related uncertainty occurred, especially in the Luanhe River Basin and the Source Region of the Yellow River with smaller aridity indices. Moreover, the possible reason for the discrepancies in uncertainty between the three catchments was quantitatively discussed through a contribution analysis based on the climatic elasticity method. This study can provide a beneficial reference for comprehensively understanding the impacts of climate change on hydrological regime and thus improve regional strategies for future water resource management.
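
    Two of the four formulations are compact enough to sketch. The Priestley-Taylor form and one common Hamon form are shown below with assumed constants, illustrating how differently parameterized PET methods consume inputs of differing reliability:

        import numpy as np

        def priestley_taylor_pet(rn_mj, t_c, alpha=1.26, gamma=0.066):
            """Priestley-Taylor PET (mm/day) from net radiation Rn (MJ m-2 day-1)
            and air temperature (deg C); ground heat flux neglected here."""
            delta = 4098 * 0.6108 * np.exp(17.27 * t_c / (t_c + 237.3)) / (t_c + 237.3) ** 2
            lam = 2.45   # latent heat of vaporization, MJ/kg
            return alpha * (delta / (delta + gamma)) * rn_mj / lam

        def hamon_pet(t_c, daylength_h):
            """Hamon PET (mm/day), one common formulation: driven only by
            temperature and daylength, hence usable with the most reliable
            downscaled variable."""
            e_sat = 0.6108 * np.exp(17.27 * t_c / (t_c + 237.3))   # kPa
            return 29.8 * daylength_h * e_sat / (t_c + 273.2)

        print(priestley_taylor_pet(rn_mj=12.0, t_c=20.0))   # ~4.2 mm/day
        print(hamon_pet(t_c=20.0, daylength_h=12.0))        # ~2.9 mm/day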

  12. Pollen-Based Inverse Modelling versus Data Assimilation, two Different Ways to Consider Priors in Paleoclimate Reconstruction: Application to the Mediterranean Holocene

    NASA Astrophysics Data System (ADS)

    Guiot, J.

    2017-12-01

    In recent decades, climate reconstruction has evolved considerably. An important step was taken with the inverse modelling approach proposed by Guiot et al (2000). It is based on appropriate algorithms, in the frame of Bayesian statistical theory, to estimate the inputs of a vegetation model when the outputs are known. The inputs are the climate variables that we want to reconstruct, and the outputs are vegetation characteristics that can be compared to pollen data. The Bayesian framework consists in defining a prior distribution of the wanted climate variables and in using data and a model to estimate the posterior probability distribution. The main interest of the method is the possibility of setting different values of exogenous variables such as the atmospheric CO2 concentration. Because the CO2 concentration influences photosynthesis and its level differs between the calibration period (the 20th century) and the past, there is an important risk of bias in the reconstructions. After that initial paper, numerous papers have been published showing the interest of the method. In that approach, the prior distribution is fixed by educated guess or by using complementary information on the expected climate (other proxies or other records). In the data assimilation approach, the prior distribution is provided by a climate model. The use of a vegetation model together with proxy data enables the calculation of posterior distributions. Data assimilation consists in constraining a climate model to reproduce estimates relatively close to the data, taking into account the respective errors of the data and of the climate model (Dubinkina et al, 2011). We compare both approaches using pollen data for the Holocene from the Mediterranean. Pollen data have been extracted from the European Pollen Database. The earth system model, LOVECLIM, is run to simulate Holocene climate with appropriate boundary conditions and realistic forcing. Simulated climate variables (temperature, precipitation and sunshine) are used as the forcing parameters for a vegetation model, BIOME4, which calculates the equilibrium distribution of vegetation types and associated phenological, hydrological and biogeochemical properties. The BIOME4 output, constrained with the pollen observations, is off-line coupled using a particle filter technique.

  13. Mathematical Model of Solidification During Electroslag Casting of Pilger Roll

    NASA Astrophysics Data System (ADS)

    Liu, Fubin; Li, Huabing; Jiang, Zhouhua; Dong, Yanwu; Chen, Xu; Geng, Xin; Zang, Ximin

    A mathematical model was developed to describe the interaction of multiple physical fields in the slag bath and the solidification process in the ingot during the casting of a pilger roll with variable cross-section by the electroslag casting (ESC) process. The commercial software ANSYS was applied to calculate the electromagnetic field, magnetically driven fluid flow, buoyancy-driven flow, and heat transfer. Transport phenomena in the slag bath and the solidification characteristics of the ingot are analyzed for the variable cross-section with variable input power, under the conditions of 9Cr3NiMo steel and a 70%CaF2 - 30%Al2O3 slag system. The calculated results show that the current density distribution, velocity patterns, and temperature profiles in the slag bath, and the metal pool profiles in the ingot, differ distinctly between cross-sections owing to differences in input power and cooling conditions. The pool shape and the local solidification time (LST) during the pilger-roll ESC process are analyzed.

  14. Prediction of dissolved oxygen concentration in hypoxic river systems using support vector machine: a case study of Wen-Rui Tang River, China.

    PubMed

    Ji, Xiaoliang; Shang, Xu; Dahlgren, Randy A; Zhang, Minghua

    2017-07-01

    Accurate quantification of dissolved oxygen (DO) is critically important for managing water resources and controlling pollution. Artificial intelligence (AI) models have been successfully applied for modeling DO content in aquatic ecosystems with limited data. However, the efficacy of these AI models in predicting DO levels in hypoxic river systems with multiple pollution sources and complicated pollutant behaviors is unclear. Given this dilemma, we developed a promising AI model, known as support vector machine (SVM), to predict the DO concentration in a hypoxic river in southeastern China. Four different calibration models, specifically, multiple linear regression, back propagation neural network, general regression neural network, and SVM, were established, and their prediction accuracy was systemically investigated and compared. A total of 11 hydro-chemical variables were used as model inputs. These variables were measured bimonthly at eight sampling sites along the rural-suburban-urban portion of Wen-Rui Tang River from 2004 to 2008. The performances of the established models were assessed through the mean square error (MSE), determination coefficient (R2), and Nash-Sutcliffe (NS) model efficiency. The results indicated that the SVM model was superior to the other models in predicting DO concentration in Wen-Rui Tang River. For SVM, the MSE, R2, and NS values for the testing subset were 0.9416 mg/L, 0.8646, and 0.8763, respectively. Sensitivity analysis showed that ammonium-nitrogen was the most significant input variable of the proposed SVM model. Overall, these results demonstrated that the proposed SVM model can efficiently predict water quality, especially for highly impaired and hypoxic river systems.
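
    A minimal SVM regression setup of this kind, using scikit-learn as an assumed implementation and synthetic stand-ins for the 11 hydro-chemical inputs (hyperparameters are illustrative):

        import numpy as np
        from sklearn.svm import SVR
        from sklearn.preprocessing import StandardScaler
        from sklearn.pipeline import make_pipeline
        from sklearn.model_selection import train_test_split
        from sklearn.metrics import r2_score

        # Synthetic stand-in data; column 0 plays the role of the dominant
        # predictor (ammonium-N in the paper), the rest are weaker inputs.
        rng = np.random.default_rng(7)
        X = rng.normal(size=(200, 11))
        do = 8.0 - 2.0 * X[:, 0] - 0.5 * X[:, 1] + 0.3 * rng.normal(size=200)

        X_tr, X_te, y_tr, y_te = train_test_split(X, do, random_state=0)
        model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.1))
        model.fit(X_tr, y_tr)
        print("test R2:", round(r2_score(y_te, model.predict(X_te)), 3))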

  15. Export and losses of blue carbon-derived particulate and dissolved organic carbon (POC and DOC) in blackwater river-dominated and particle-dominated estuaries

    NASA Astrophysics Data System (ADS)

    Arellano, A. R.; Bianchi, T. S.; Osburn, C. L.; D'Sa, E. J.; Oviedo Vargas, D.; Ward, N. D.; Joshi, I.; Ko, D. S.

    2016-12-01

    Globally, coastal blue carbon environments (wetlands, seagrass beds and mangroves) sequester an estimated 67-215 Tg C yr-1. While most blue carbon research has focused on carbon burial/stocks and habitat fragmentation of these communities, few studies have examined the export and loss of blue carbon sources of particulate organic matter (POM) and dissolved organic matter (DOM) to adjacent coastal waters. These shifts in losses of DOM and POM are also partly due to large-scale changes in land-use and climate change. Due to the complexity of vascular plant inputs to estuarine systems (e.g. terrestrial vs. blue carbon), being able to separate blue carbon sources of POM and DOM are critical. Here, we investigate the temporal variability of the abundance, sources and breakdown of particulate and dissolved organic carbon (POC and DOC) in particle-dominated (Barataria Bay) and blackwater river-dominated (Apalachicola Bay) estuaries in the northern Gulf of Mexico, using bulk carbon, dissolved lignin phenols, δ13C and dissolved CO2. The range of DOC:POC ratios for Barataria and Apalachicola bays were 0.5-3.1 and 2.3-57.0, respectively. δ13C-POC values were more depleted in Apalachicola (x̅=-27.3‰) compared to those in Barataria (x̅=-24.8‰), and C:N ratios were higher in Apalachicola (x̅=10.8) than in Barataria (x̅=9.3). Although there was no significant temporal variability with δ13C-POC in both systems, Barataria Bay had the highest POC (0.08-0.23 mM) and C:N (7.0-13.4) values during spring, when enhanced southerly winds likely resulted in higher resuspension and marsh erosion rates. Additionally, in Apalachicola, the lowest C:N values (6.2-16.1) were observed during the dry season when fluvial DOM inputs were minimal. The highest dissolved lignin phenol and DOC (0.10-2.98 mM) concentrations in Apalachicola occurred during the wet season, reflecting the importance of riverine inputs to this system. In particular, the Carabelle River plume region had C:V and S:V values that indicated woody inputs (long-leaf pine communities), while the bay proper/East Bay were more indicative of blue carbon sources. Spatial and temporal variability of dissolved CO2 concentrations will be discussed as it relates to possible linkages with the export and losses of blue carbon-derived DOC and POC.

  16. Collective feature selection to identify crucial epistatic variants.

    PubMed

    Verma, Shefali S; Lucas, Anastasia; Zhang, Xinyuan; Veturi, Yogasudha; Dudek, Scott; Li, Binglan; Li, Ruowang; Urbanowicz, Ryan; Moore, Jason H; Kim, Dokyoon; Ritchie, Marylyn D

    2018-01-01

    Machine learning methods have gained popularity and practicality in identifying linear and non-linear effects of variants associated with complex diseases/traits. Detection of epistatic interactions still remains a challenge due to the large number of features and relatively small sample size as input, thus leading to the so-called "short fat data" problem. The efficiency of machine learning methods can be increased by limiting the number of input features. Thus, it is very important to perform variable selection before searching for epistasis. Many methods have been evaluated and proposed to perform feature selection, but no single method works best in all scenarios. We demonstrate this by conducting two separate simulation analyses to evaluate the proposed collective feature selection approach. Through our simulation study, we propose a collective feature selection approach to select features that are in the "union" of the best-performing methods. We explored various parametric, non-parametric, and data mining approaches to perform feature selection. We chose our top-performing methods and selected the union of the resulting variables, based on a user-defined percentage of variants selected from each method, to take to downstream analysis. Our simulation analysis shows that non-parametric data mining approaches, such as MDR, may work best under one simulation criterion for the high effect size (penetrance) datasets, while non-parametric methods designed for feature selection, such as Ranger and gradient boosting, work best under other simulation criteria. Thus, using a collective approach proves to be more beneficial for selecting variables with epistatic effects, also in low effect size datasets and different genetic architectures. Following this, we applied our proposed collective feature selection approach to select the top 1% of variables to identify potential interacting variables associated with Body Mass Index (BMI) in ~44,000 samples obtained from Geisinger's MyCode Community Health Initiative (on behalf of the DiscovEHR collaboration). In this study, we were able to show, via simulation studies, that selecting variables using a collective feature selection approach could help in selecting true positive epistatic variables more frequently than applying any single method for feature selection. We were able to demonstrate the effectiveness of collective feature selection along with a comparison of many methods in our simulation analysis. We also applied our method to identify non-linear networks associated with obesity.
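
    The union step can be sketched directly; the particular selectors and top-k rule below are illustrative choices, not the paper's exact method set:

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
        from sklearn.feature_selection import chi2

        # Collective selection: take the union of the top-k features from several
        # rankers rather than trusting any single one (a sketch of the idea).
        rng = np.random.default_rng(0)
        X = rng.integers(0, 3, size=(500, 50)).astype(float)   # SNP-like 0/1/2 genotypes
        y = ((X[:, 3] > 1) ^ (X[:, 7] > 1)).astype(int)        # epistatic pair 3 x 7

        k = 5
        rankers = {
            "chi2": chi2(X, y)[0],
            "random_forest": RandomForestClassifier(random_state=0).fit(X, y).feature_importances_,
            "grad_boost": GradientBoostingClassifier(random_state=0).fit(X, y).feature_importances_,
        }
        union = set()
        for name, scores in rankers.items():
            union |= set(np.argsort(scores)[-k:])
        print("selected for downstream epistasis search:", sorted(union))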

  17. Impacts and managerial implications for sewer systems due to recent changes to inputs in domestic wastewater - A review.

    PubMed

    Mattsson, Jonathan; Hedström, Annelie; Ashley, Richard M; Viklander, Maria

    2015-09-15

    Ever since the advent of major sewer construction in the 1850s, the issue of increased solids deposition in sewers due to changes in domestic wastewater inputs has been frequently debated. Three recent changes considered here are the introduction of kitchen sink food waste disposers (FWDs); rising levels of inputs of fat, oil and grease (FOG); and the installation of low-flush toilets (LFTs). In this review these changes have been examined with regard to potential solids depositional impacts on sewer systems and the managerial implications. The review indicates that each of the changes has the potential to cause an increase in solids deposition in sewers and this is likely to be more pronounced for the upstream reaches of networks that serve fewer households than the downstream parts and for specific sewer features such as sags. The review has highlighted the importance of educational campaigns directed to the public to mitigate deposition as many of the observed problems have been linked to domestic behaviour in regard to FOGs, FWDs and toilet flushing. A standardized monitoring procedure of repeat sewer blockage locations can also be a means to identify depositional hot-spots. Interactions between the various changes in inputs in the studies reviewed here indicated an increased potential for blockage formation, but this would need to be further substantiated. As the precise nature of these changes in inputs have been found to be variable, depending on lifestyles and type of installation, the additional problems that may arise pose particular challenges to sewer operators and managers because of the difficulty in generalizing the nature of the changes, particularly where retrofitting projects in households are being considered. The three types of changes to inputs reviewed here highlight the need to consider whether or not more or less solid waste from households should be diverted into sewers. Copyright © 2015 Elsevier Ltd. All rights reserved.

  18. Application of experimental design for the optimization of artificial neural network-based water quality model: a case study of dissolved oxygen prediction.

    PubMed

    Šiljić Tomić, Aleksandra; Antanasijević, Davor; Ristić, Mirjana; Perić-Grujić, Aleksandra; Pocajt, Viktor

    2018-04-01

    This paper presents an application of experimental design for the optimization of an artificial neural network (ANN) for the prediction of dissolved oxygen (DO) content in the Danube River. The aim of this research was to obtain a more reliable ANN model that uses fewer monitoring records, by simultaneous optimization of the following model parameters: number of monitoring sites, number of years of historical monitoring data, and number of input water quality parameters used. A three-factor, three-level Box-Behnken experimental design was applied for simultaneous spatial, temporal, and input-variable optimization of the ANN model. The prediction of DO was performed using a feed-forward back-propagation neural network (BPNN), while the selection of the most important inputs was done off-model using a multi-filter approach that combines a chi-square ranking in the first step with a correlation-based elimination in the second step. The contour plots of absolute and relative error response surfaces were utilized to determine the optimal values of the design factors. From the contour plots, two BPNN models that cover the entire Danube flow through Serbia are proposed: an upstream model (BPNN-UP) that covers 8 monitoring sites prior to Belgrade and uses 12 inputs measured over a 7-year period, and a downstream model (BPNN-DOWN) that covers 9 monitoring sites and uses 11 input parameters measured over a 6-year period. The main difference between the two models is that BPNN-UP utilizes inputs such as BOD, P, and PO4(3-), which is in accordance with the fact that this model covers the northern part of Serbia (Vojvodina Autonomous Province), well known for agricultural production and extensive use of fertilizers. Both models have shown very good agreement between measured and predicted DO (with R2 ≥ 0.86) and demonstrated that they can effectively forecast DO content in the Danube River.
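
    The three-factor Box-Behnken design itself is small enough to generate explicitly; a sketch in coded units (the mapping of coded levels to numbers of sites, years, and inputs is the paper's design, while the code is only an illustration):

        import numpy as np
        from itertools import combinations

        def box_behnken_3factor(n_center=3):
            """Three-factor, three-level Box-Behnken design in coded units
            (-1, 0, +1): all +/-1 combinations taken two factors at a time
            (third factor held at 0), plus replicated center points --
            12 edge runs plus the centers."""
            runs = []
            for i, j in combinations(range(3), 2):
                for a in (-1, 1):
                    for b in (-1, 1):
                        row = [0, 0, 0]
                        row[i], row[j] = a, b
                        runs.append(row)
            runs += [[0, 0, 0]] * n_center
            return np.array(runs)

        # Factors here would be coded levels of: number of sites, years of
        # data, and number of input parameters (the paper's three factors).
        print(box_behnken_3factor())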

  19. Past variability of the Mexican Monsoon from ultrahigh resolution records in the Gulf of California for the last 6 Ka

    NASA Astrophysics Data System (ADS)

    Herguera, J.; Nava, C.; Hangsterfer, A.

    2013-05-01

    The Mexican monsoon, part of the larger North American monsoon regime, results from an interplay among the ocean, the atmosphere, and continental topography, though there is an ongoing debate as to the relative importance of sea surface temperatures (SSTs) in the NE tropical Pacific warm water lens region, solar radiation variability, land snow cover and soil moisture over the western North American mountain ranges, and the strength and spatial patterns of the dominant winds. The links between these factors and monsoonal variability appear to be of variable importance during the short instrumental record. This hampers any prediction of the future evolution of the climatic regime in a warming climate. The terrigenous component in very high sedimentation rate sediments on the margin of the Gulf of California links monsoonal precipitation patterns on land with the varying importance of the lithogenic component in these margin sediments. The relatively high importance of the lithogenic component (>80%) of these sediments attests to the fidelity of this repository as a record of terrigenous input to this margin environment. Here we use the elemental composition of these margin sediments as a proxy for the lithogenic component in a collection of box and kasten cores from Pescadero basin. This basin, located in the southeastern region of the Gulf of California (24N, 108W), shows a strong tropical influence during the summer, as part of the northernmost extension of the eastern tropical Pacific warm water lens region, a period when southwesterly winds bring moist air masses inland, enhancing the monsoonal rains on the eastern reaches of the Sierra Madre Occidental. Here we present new XRF results in which we explore the relationships between different elemental ratios in these sediments, the available historical record, and several paleo-reconstructions to evaluate the possible links between external forcings and internal feedback effects and to explain the evolution of the monsoon in this region.

  20. Seaglider surveys at Ocean Station Papa: Diagnosis of upper-ocean heat and salt balances using least squares with inequality constraints

    NASA Astrophysics Data System (ADS)

    Pelland, Noel A.; Eriksen, Charles C.; Cronin, Meghan F.

    2017-06-01

    Heat and salt balances in the upper 200 m are examined using data from Seaglider spatial surveys from June 2008 to January 2010 surrounding a NOAA surface mooring at Ocean Station Papa (OSP; 50°N, 145°W). A least-squares approach is applied to repeat Seaglider survey and moored measurements to solve for unknown or uncertain monthly three-dimensional circulation and vertical diffusivity. Within the surface boundary layer, the estimated heat and salt balances are dominated throughout the surveys by turbulent flux, vertical advection, and, for heat, radiative absorption. When vertically integrated balances are considered, an estimated upwelling of cool water balances the net surface input of heat, while the corresponding large import of salt across the halocline due to upwelling and diffusion is balanced by surface moisture input and horizontal import of fresh water. Measurement of horizontal gradients allows the estimation of unresolved vertical terms over more than one annual cycle; diffusivity in the upper-ocean transition layer decreases rapidly to the depth of the maximum near-surface stratification in all months, with weak seasonal modulation in the rate of decrease and profile amplitude. Vertical velocity is estimated to be on average upward but with important monthly variations. Results support and expand existing evidence concerning the importance of horizontal advection in the balances of heat and salt in the Gulf of Alaska, highlight time and depth variability in difficult-to-measure vertical transports in the upper ocean, and suggest avenues of further study in future observational work at OSP.
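
    SciPy's bounded least squares conveys the flavor of "least squares with inequality constraints"; the design matrix and constraints below are toy stand-ins for the heat and salt balance terms:

        import numpy as np
        from scipy.optimize import lsq_linear

        # Solve A x ~ b subject to bounds encoding physical sign constraints,
        # e.g. a non-negative vertical diffusivity (toy numbers, not OSP data).
        rng = np.random.default_rng(0)
        A = rng.normal(size=(30, 3))      # columns: advection, diffusion, residual terms
        x_true = np.array([0.5, 0.02, -0.1])
        b = A @ x_true + 0.01 * rng.normal(size=30)

        res = lsq_linear(A, b, bounds=([-np.inf, 0.0, -np.inf], [np.inf, np.inf, np.inf]))
        print("estimate:", res.x.round(3))   # the diffusion term is constrained to be >= 0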
